Guest Post by Willis Eschenbach
Looking at a recent article over at Tallbloke’s Talkshop, I realized I’d never done a periodogram looking for possible cycles in the entire Central England Temperature (CET) series. I’d looked at part of it, but not all of it. The CET is one of the longest continuous temperature series, with monthly data starting in 1659. At the Talkshop, people are discussing the ~24 year cycle in the CET data, and trying (unsuccessfully but entertainingly in my opinion) to relate various features of Figure 1 to the ~22-year Hale solar magnetic cycle, or a 5:8 ratio with two times the length of the year on Jupiter, or half the length of the Jupiter-Saturn synodic cycle, or the ~11 year sunspot cycle. They link various peaks to most every possible cycle imaginable, except perhaps my momma’s motor-cycle … here’s their graphic:
Figure 1. Graphic republished at Tallbloke’s Talkshop, originally from the Cycles Research Institute.
First off, I have to say that their technique of removing a peak and voila, “finding” another peak is mondo sketchy on my planet. But setting that aside, I decided to investigate their claims. Let’s start at the natural starting point—by looking at the CET data itself.
Figure 2 shows the monthly CET data as absolute temperatures. Note that in the early years of the record, temperatures were only recorded to the nearest whole degree. Provided that the rounding is symmetrical, this should not affect the results.
Figure 2. Central England Temperature (CET). Red line shows a trend in the form of a loess smooth of the data. Black horizontal line shows the long-term mean temperature.
Over the 350-year period covered by the data, the average temperature (red line) has gone up and down about a degree … and at present, central England is within a couple tenths of a degree of the long-term mean, which also happens to be the temperature when the record started … but I digress.
Figure 3 shows my periodogram of the CET data shown in Figure 2. My graphic is linear in period rather than linear in frequency as is their graphic shown in Figure 1.
Figure 3. Periodogram of the full CET record, for all periods from 10 months to 100 years. Color and size both show the p-value. Black dots show the cycles with p-values less than 0.05, which in this case is only the annual cycle (p=0.03). P-values are all adjusted for autocorrelation. The yellow line shows one-third the length of the ~350 year dataset. I consider this a practical limit for cycle detection. P-values for all but the one-year cycle are calculated after removal of the one-year cycle.
I show the periodogram in this manner to highlight once again the amazing stability of the climate system. One advantage of the slow Fourier transform I use is that the answers are in the same units as the input data (in this case °C). So we can see directly that the average annual peak-to-peak swing in the Central England temperature is about 13°C (23°F).
And we can also see directly that other than the 13°C annual swing, there is no other cycle of any length that swings even a half a degree. Not one.
So that is the first thing to keep in mind regarding the dispute over the existence of purported regular cycles in temperature. No matter what cycle you might think is important in the temperature record, whether it is twenty years long or sixty years or whatever, the amplitude of the cycle is very small, tenths of a degree. No matter if you’re talking about purported effects from the sunspot cycle, the Hale solar magnetism cycle, the synodic cycle of Saturn-Jupiter, the barycentric cycle of the sun, or any other planetasmagorica, they all share one characteristic. If they’re doing anything at all to the temperature, they’re not doing much. Bear in mind that without a couple hundred years of records and sophisticated math we couldn’t even show and wouldn’t even know such tiny cycles exist.
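As an aside for the technically inclined: a slow Fourier transform of this kind is easy to sketch. What follows is a minimal illustration of the idea, not the code from my zip file: at each trial period, regress the data on a sine/cosine pair, and the fitted amplitude comes out directly in the units of the data (here °C). The helper name slow_ft() and the assumption of a plain monthly vector are mine.

```r
# Minimal sketch of a "slow Fourier transform": at each trial period,
# fit a sine/cosine pair by least squares. Output is in data units (deg C).
slow_ft <- function(x, periods) {
  tt <- seq_along(x)
  sapply(periods, function(p) {
    ab <- coef(lm(x ~ sin(2 * pi * tt / p) + cos(2 * pi * tt / p)))[2:3]
    2 * sqrt(sum(ab^2))   # peak-to-peak swing at period p
  })
}

# e.g. for monthly data, periods from 10 months to 100 years:
# swing <- slow_ft(cet, seq(10, 1200, by = 1))
```

For a pure sinusoid the fitted peak-to-peak swing recovers the true swing exactly, which is why the ~13°C annual cycle can be read straight off Figure 3.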
Moving on: often folks don’t like to be reminded about how tiny the temperature cycles actually are. So of course, the one-year cycle is usually not shown in a periodogram; too depressing. Figure 4 is the usual view, which shows the same data, except starting at 2 years.
Figure 4. Closeup of the same data as in Figure 3. As in Figure 3, the statistical significance calculations are done after removal of the 1-year cycle. Unlike the previous figure, in this and succeeding figures the black dots show all cycles that are significant at a relaxed p-value of 0.10 instead of 0.05. This is because even after removing the annual signal, not one of these cycles is significant at a p-value of 0.05.
Now, the first thing I noticed in Figure 4 is that we see the exact same largest cycles in the periodogram that Tallbloke’s source identified in their Figure 1. I calculate those cycle lengths as 23 years 8 months, and 15 years 2 months. They say 23 years 10 months and 15 years 2 months. So our figures agree to within expectations, always a first step in moving forwards.
So … since we agree about the cycle lengths, are they right to try to find larger significance in the obvious, clear, large, and well-defined 24-year cycle? Can we use that 24-year cycle for forecasting? Is that 24-year cycle reflective of some underlying cyclical physical process?
Well, the first thing I do to answer that question is to split the data in two, an early and a late half, and compare the analyses of the two halves. I call it the bozo test; it’s the simplest of all possible tests, and it doesn’t require any further data or any special equipment. Figures 5a-b below show the periodograms of the early and late halves of the CET data.
Figure 5a-b. Upper graph shows the first half of the CET data and the lower graph shows the second half.
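For anyone who wants to try the bozo test at home, here is a minimal sketch (again, not the code from my zip file), assuming the hypothetical slow_ft() helper sketched above and a monthly vector cet:

```r
# The "bozo test": split the series in half and compare the periodograms.
half    <- floor(length(cet) / 2)
periods <- seq(24, 1200, by = 2)     # 2 to 100 years, in months
early   <- slow_ft(cet[1:half], periods)
late    <- slow_ft(cet[(half + 1):length(cet)], periods)

matplot(periods / 12, cbind(early, late), type = "l", lty = 1, col = 1:2,
        xlab = "Period (years)", ylab = "Peak-to-peak swing (deg C)")
legend("topright", c("early half", "late half"), lty = 1, col = 1:2)
```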
I’m sure you can see the problem. Each half of the data is a hundred and seventy-five years long. The ~24-year cycle exists quite strongly in the first half of the data. It has a swing of over six tenths of a degree on average over that time, the largest seen in these CET analyses.
But then, in the second half of the data, the 24-year cycle is gone. Pouf.
Well, to be precise, the 24-year peak still exists in the second half … but it’s much smaller than it was in the first half. In the first half, it was the largest peak. In the second half, it’s like the twelfth largest peak or something …
And on the other hand, the ~15-year cycle wasn’t statistically significant at a p-value less than 0.10 in the first half of the data, where it was exactly 15 years long. But in the second half, it has lengthened almost a full year to nearly 16 years, and it’s the second largest cycle … and in the second half, the largest cycle is 37 months.
Thirty-seven months? Who knew? Although I’m sure there are folks who will jump up and say it’s obviously 2/23rds of the rate of rotation of the nodes on the lunar excrescences or the like …
To me, this problem overrides any and all attempts to correlate temperatures to planetary, lunar, or tidal cycles.
My conclusion? Looking for putative cycles in the temperature record is a waste of time, because the cycles appear and disappear on all time scales. I mean, if you can’t trust a 24-year cycle that lasts for one hundred seventy-five years, then just who can you trust?
w.
De Costumbre: If you object to something I wrote, please quote my words exactly. It avoids tons of misunderstandings.
Data and Code: I’ve actually cleaned up my R code and commented it and I think it’s turnkey. All of the code and data is in a 175k zip file called CET Periodograms.
Statistics: For the math inclined, I’ve used the method of Quenouille to account for autocorrelation in the calculation of the statistical significance of the amplitude of the various cycles. The method of Quenouille provides an “effective n” (n_eff), a reduced count of the number of datapoints to use in the various calculations of significance.
To use the effective n (n_eff) to determine if the amplitude of a given cycle is significant, I first need to calculate the t-statistic. This is the amplitude of the cycle divided by the error in the amplitude. However, that error in the amplitude is proportional to

$$\frac{1}{\sqrt{n}}$$

where n is the number of data points. As a result, using our effective N, the error in the amplitude is proportional to

$$\frac{1}{\sqrt{n_{\mathrm{eff}}}}$$

where n_eff is the “effective N”.
From that, we can calculate the t-statistic, which is simply the amplitude of the cycle divided by the new error.
Finally, we use that new error to calculate the p-value, which is
p-value = t-distribution(t-statistic , degrees_freedom1 = 1 , degrees_freedom2 = n_eff)
At least that’s how I do it … but then I was born yesterday, plus I’ve never taken a statistics course in my life. Any corrections gladly considered.
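For concreteness, here is a minimal R sketch of the whole procedure as described above. To be clear, this is an illustration and not the code from the zip file: the AR(1) form of Quenouille’s correction, the function name, and the use of a two-sided t-test are assumptions (a test against the F-distribution with df1 = 1 and df2 = n_eff, as in the formula above, is equivalent to the square of the same t-statistic).

```r
# Sketch only: significance of one cycle's amplitude, adjusted for
# autocorrelation via an "effective n" (AR(1) Quenouille form assumed).
cycle_p_value <- function(x, period) {
  n  <- length(x)
  tt <- seq_len(n)

  # Fit sine and cosine terms at the trial period
  fit <- lm(x ~ sin(2 * pi * tt / period) + cos(2 * pi * tt / period))
  ab  <- coef(fit)[2:3]
  amplitude <- sqrt(sum(ab^2))              # half the peak-to-peak swing

  # Error in the amplitude, propagated from the coefficient standard errors
  se_ab  <- summary(fit)$coefficients[2:3, 2]
  se_amp <- sqrt(sum((ab / amplitude * se_ab)^2))

  # Quenouille-style effective n from the lag-1 autocorrelation of residuals
  r1    <- acf(residuals(fit), plot = FALSE)$acf[2]
  n_eff <- n * (1 - r1) / (1 + r1)

  # Inflate the error for the reduced effective sample size, then test
  se_adj <- se_amp * sqrt(n / n_eff)
  t_stat <- amplitude / se_adj
  2 * pt(-abs(t_stat), df = n_eff)          # two-sided p-value
}
```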


Nice article Willis. There are two CET data sets. Gordon Manley constructed the one from 1659 onwards based on monthly readings; the early part included references to diaries and other observations. A perfectly good technique, much used by other researchers to determine temperatures, especially as the range of CET temperatures is relatively limited.
The other CET database was compiled by David Parker; it uses daily instrumental records, goes back to 1772, and is the preferred one used by the Met Office. This latter one has an allowance for UHI from 1976.
A couple of years ago I reconstructed CET to 1538 and am currently going back further through the use of such places as the Met Office library and archives.
http://judithcurry.com/2011/12/01/the-long-slow-thaw/
Within this article is a detailed explanation as to how CET was constructed.
I think around 2010 the average mean temperature for the year was identical to that of the first year in the older record, 1659. As Willis comments, it is currently within a few tenths of a degree of the earlier years.
It reached a sharp peak around 2000 and has been in decline since.
http://www.metoffice.gov.uk/hadobs/hadcet/
I had discussions with David Parker a couple of months ago. CET uses three stations in central England to work out an average. The previous 3 that were used were felt to be a little ‘warm’ and have been replaced. (we are talking about tenths of a degree here)
Personally I would doubt that the peak reached in 2000 was as great as indicated due to these warmer stations. Anyone living in England would be very depressed to feel they have already lived through the ‘warmest decade ever’ and things are now on the slide.
The 1730s were, according to Phil Jones, within a few tenths of a degree of the peak, and this caused him to revise his opinions on the rate of natural variability, which was greater than he had previously believed.
As far as cycles go, I struggle to find them, although long term weather patterns undoubtedly resurface. For example warm prevailing westerly winds are frequently replaced by colder easterlies for identifiable periods, which then switch back again, and there is no doubt we have decades that are wetter or drier than others. I can’t see however that there is much to warrant the word ‘cycles’ in these.
I don’t know if any other individual country data sets also make an allowance for UHI. I don’t believe there is any in the ‘global’ data sets.
tonyb
Not 100% sure about the Fourier transform here, but it seems to me that the sunspot cycle is slightly variable (fewer sunspots mean a longer period), and the Little Ice Age occupied a large portion of the record. So wouldn’t that translate into a wider frequency peak or, as you say, a peak with a shoulder on it?
I have a question:
Where in the raw data is the “Little Ice Age” where the Thames froze over enough for markets to be held on the ice on a regular basis?
I disagree ONLY with the part of your conclusion regarding the exercise being a waste of time.
I thank you for your time researching and posting!
🙂
Concentration of mass of the planets relative to the ecliptic has an impact on Earth’s climate.
http://oi57.tinypic.com/a1gua.jpg
milodonharlani says:
May 8, 2014 at 5:59 pm
Thanks, milodon, I have no quarrel with that. The problem, as always, is that we have “pseudo-cycles” that fluctuate “quasi-regularly” at all time scales. For example, the ice ages have persisted for a million years. And as you point out, they are correlated with the “Milankovich cycles”.
Bear in mind, however, that it’s been a half billion (500 million) years since life took over in the Cambrian explosion. Out of all of that time, only a million years or so have had such Milankovich-related temperature oscillations … despite the fact that the Milankovich variations went on for all of that 500 million years.
So although the ice-age/inter-glacial oscillation seems solid and fixed and inexorable to us, and has been going on for a million years, it appeared without warning after 499 million years with no such regular ice-age/inter-glacial oscillation. Similarly, there is no reason to assume that that cycling will last forever …
My point is simple. Nature at its chaotic finest in the form of the global temperature doesn’t generally (1 time out of 500) do regular astronomically related ice ages. We don’t know why they have appeared suddenly, with no previous history of this kind of what you call “quasi-regular” behavior. We don’t know how long they will last.
Finally, if I had my way, I would ban terms like “quasi-regularly” and “pseudo-cycles” and “approximately 60-year cycles” from scientific discussions. I don’t know what those terms mean. Is a 51-year cycle approximately 60 years? Is a cycle that exists everywhere but in one short section of the record a “pseudocycle”?
The problem is not “pseudo” or “quasi” cycles. The problem is that we have very real cycles that appear out of nothingness, exist for a while, and then disappear, or at least drop back down into the weeds. They are not “pseudo-cycles”, any more than the ~24-year cycle in the first hundred years of the record is a “pseudo-cycle”. It is a very real cycle in the measurements … except that in the second half of the data it disappears entirely.
Much appreciated,
w.
Ray Tomes says:
May 8, 2014 at 7:41 pm
Thanks, Ray, good to hear from you. I thought I explained my calculation of the p-values. It’s also in my code. Not sure what your question is.
Regarding your calculations, did you adjust for autocorrelation, and if so how?
w.
Low solar activity has an effect on blocking the polar vortex.
http://www.swpc.noaa.gov/SolarCycle/Ap.gif
http://oi61.tinypic.com/15cgc8x.jpg
http://oi59.tinypic.com/28wecyo.jpg
http://oi61.tinypic.com/swdz6e.jpg
The CET annual temperature spectrum doesn’t reveal anything in particular, so I suggest a different approach.
CET is highly influenced by a conflict between the ‘warming’ from the N. Atlantic SST and the ‘cooling’ from the Icelandic low atmospheric pressure, two totally different beasts.
If one is looking for the solar input, the month of June, the time of the summer solstice and the highest insolation, is the set of data to look at; besides, its 350-year-long trend is rather interesting. In the earlier centuries solar cycles were a bit longer, so when the data is split, the effect may show. The month of June, however specific, is after all only one of 12; but besides being a clear case of cherry picking, it does give another perspective.
Ray Tomes says:
May 8, 2014 at 7:36 pm
Thanks, Ray, explanations are always welcome.
While this is possible in theory, and works well where there are sharp spectral peaks, in natural datasets the peaks tend to be wide, sometimes quite wide. At that point it becomes a judgement call as to what you are removing … and of course removing something that isn’t there is equivalent to adding a spurious signal. As a result, I prefer to avoid the procedure.
Glad to, you’ll see what I mean. In my view you just need a more accurate probe … let me recommend the slow Fourier transform. Here is my periodogram of 350+ years of pseudo-data composed of two sine waves with cycles of 23.9 and 22.2 years.
[Figure: periodogram of the two-sine-wave pseudo-data, with distinct peaks at both periods]
As you can see, the two peaks stand out clearly without any need to “remove the major component”.
(Yes, I see the ringing, and I know I can filter it out to get a more accurate answer … I’m working on understanding the best way to do that, and experimenting with some new ideas I have in that regard.)
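That pseudo-data test is trivial to reproduce. A minimal sketch, using the hypothetical slow_ft() helper from earlier (the equal amplitudes of the two sine waves are an arbitrary choice, since they aren’t stated above):

```r
# Pseudo-data: two pure sine waves with 23.9- and 22.2-year periods,
# monthly sampling over 350 years, run through the same periodogram.
months <- 1:(350 * 12)
pseudo <- sin(2 * pi * months / (23.9 * 12)) +
          sin(2 * pi * months / (22.2 * 12))

pds   <- seq(120, 1200, by = 1)              # 10 to 100 years, in months
swing <- slow_ft(pseudo, pds)
plot(pds / 12, swing, type = "l",
     xlab = "Period (years)", ylab = "Peak-to-peak swing")
```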
w.
Willis said:
“The problem is not “pseudo” or “quasi” cycles. The problem is that we have very real cycles that appear out of nothingness, exist for a while, and then disappear, or at least drop back down into the weeds. They are not “pseudo-cycles”, any more than the ~24-year cycle in the first hundred years of the record is a “pseudo-cycle”. It is a very real cycle in the measurements … except that in the second half of the data it disappears entirely.”
I agree with that, which is why I tend not to contribute in climate cycle threads.
tonyb said:
“As far as cycles go, I struggle to find them, although long term weather patterns undoubtedly resurface. For example warm prevailing westerly winds are frequently replaced by colder easterlies for identifiable periods, which then switch back again, and there is no doubt we have decades that are wetter or drier than others. I can’t see however that there is much to warrant the word ‘cycles’ in these.”
With which I also agree.
So, here is the rub:
There is a constant interplay between bottom-up oceanic and top-down solar influences on the global energy budget, with each exerting a negative system response to the other and each having independent variations with a cyclical component.
The effect on the climate is to shift global air circulation around as necessary to maintain a steady transmission of solar energy through the Earth system but with variations about the mean as the adjustment processes work first one way and then the other.
Individual regions are heavily affected by their location relative to the nearest jet stream track or climate zone boundary, with the types of changes that tonyb notes.
The average global temperature oscillates about the mean but not a lot as Willis notes.
There are infinite possibilities for local and regional weather / climate variations over time and that, together with internal system chaotic variability, obscures any short term cycles there may be and heavily modulates longer term cycles, sometimes supplementing and sometimes offsetting them.
Even the longer term cycles come and go as Willis notes in connection with that relatively recent ice age / interglacial cycling.
We are concerned about short term variations about the mean for the global energy budget.
Currently, the periodicity most relevant to us is that of the apparent 1000 to 1500 year variability observed within the current interglacial.
Looking at historical records and paleoclimatic evidence, as tonyb does so well, we can see that natural variations are enough to account for the observations of climate change during the 20th century and the start of the 21st.
There is no need to invoke any anthropogenic component on the basis of the data currently available, and it is that incorrect invocation that causes the models to diverge from observations over time.
Willis, thanks for that. You know that I am your cycle man. I KNOW there is a 20-24 year cycle. It is called the Hale-Nicholson cycle. But you won’t get a good look at it if you look at means, mainly because there is weather. Also, I am pretty sure that at some stage in the records they went from measuring 4 or 6 times a day (physical observation) and taking an average for the day, to automatic recording once a second. Apart from that there is accuracy over time, which is more critical if you keep looking at means.
I am not familiar with your analysis, so I cannot do it myself, but I will take a bet with you that if you were to look at maximum temperatures, in CET, you will find the elusive cycle that we are all talking about.
If the error equation were true for a quantity such as temperature, we’d be able to get accurate global temperature measurements by getting all 7 billion of us to stick a finger in the air and take a temperature measurement.
Don Easterbrook says:
May 8, 2014 at 8:44 pm
Thanks for the suggestion, Don. I’ve thought about doing a whole pile of things. Right now, I’m trying to improve and sharpen my tools.
Regards,
w.
“But at my back, I always hear,
Time’s winged chariot drawing near …”
gymnosperm says:
May 8, 2014 at 9:53 pm
Sadly, regarding the real import of numbers and probabilities, humans are indeed blind and deaf. As a result, we invented statistics to keep ourselves from being misled by apparent patterns that aren’t really there.
Mmm … I wouldn’t describe it in those terms. What I’ve done is look for cycles, and invite people to point out the ones I’ve missed.
However, I didn’t say that the 10Be flux has “nothing to do with cosmic rays”. Since 10Be is generated inter alia by cosmic rays, that’s nonsense. What I said is that I could find no ~11-year sunspot cycles in the 10Be data, a very different thing. I invited people to show me where they were, using your choice of statistical tools … to date, neither you nor anyone has shown such a thing.
I also didn’t say that volcanoes “have no effect on climate”. What I have shown is that the effect on global temperatures is localized and short-lived. And I didn’t use periodicity analysis for that at all, different statistical tools entirely.
I also have not said that there are “no discernible cycles in climate”, that’s madness. I have measured the periods of the cycles that exist, and discussed them at length. They are quite real … and yet they appear and disappear without notice. That’s the problem.
In doing these analyses, I’ve used a variety of statistical tools. So no, I don’t wonder if the tools I’m using are at fault. I’ve invited people to use their tools. They can’t find what I can’t find. Makes me think it’s not the tools …
And when Joe Sixpack argues with a statistician, I’m going with the statistician …
The problem is that humans are famous for seeing patterns where none exist. We are pattern-recognizing mechanisms, we can’t stop ourselves. Our only protection against this is statistics.
Regards,
w.
Ray Tomes says:
May 8, 2014 at 7:45 pm
While there is indeed a beat frequency between the two, you should run the numbers before you make the claims. It turns out that with those two frequencies, one 70% the amplitude of the other, the amplitude of the smaller half of the beat frequency is about 50% of the amplitude of the larger half of the beat frequency. This is a long ways from “almost cancel each other exactly” … and also a ways from what we see in the figures of the head post.
w.
Willis, the UK kept to Fahrenheit until the last few years, and their weather sites still show both temps, C and F. Anyway, we do know this.
Willis, I don’t know if this is possible, or whether I can explain it either, but here goes:
Can you take the de-seasonalised CET record and re-scale it such that each stretch of the temperature record corresponding to one solar cycle becomes equal in length? Having done this, re-run the cycle search. (Not sure what the units are after doing this.)
I believe the sunspot cycle record is reasonably well documented over the CET record period.
The reason for doing this is that I would expect that the 23-ish year peak, if it really is the Hale cycle in the data, would become much stronger in the re-scaled data, but diminished if it isn’t the Hale cycle but something else entirely.
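A minimal sketch of what that rescaling might look like in R. Everything here is hypothetical, purely to illustrate the stretching idea; in particular, the indices of the solar-cycle minima are assumed to be known:

```r
# Warp the time axis so that every solar cycle spans the same number of
# samples, then re-run the cycle search on the result.
# 'minima' = indices (in months) of successive solar-cycle minima.
rescale_by_cycle <- function(x, minima, samples_per_cycle = 132) {
  warped <- lapply(seq_len(length(minima) - 1), function(i) {
    seg <- x[minima[i]:minima[i + 1]]
    approx(seq_along(seg), seg, n = samples_per_cycle)$y   # stretch/squeeze
  })
  unlist(warped)
}
```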
Willis,
Your bozo test, and specifically the failing of it by the CET ~22-year cycle, got me thinking. One weakness of doing a Fourier transform over a certain length of data is that if the signal undergoes a step in phase along that length, this will reduce the peak size, extremely so with 180-degree jumps.
For most physical systems, step-wise changes simply don’t happen. But suppose you have some kind of resonant system that gets started by one process and damped after a while by another; then the successive resonant episodes are generally not in phase. Doing an FT over an entire length of time containing a number of resonant episodes will give you weak peaks.
One existing solution to still detect the peaks is doing a time-windowed FT and looking at the spectrum as a function of time, but then you have to look at 3D plots where peaks easily get lost. So what if you do this, but then average all the *magnitudes* at different times for each frequency, *ignoring the phase*, arriving again at a 2D spectrum-like plot? Jumps or creep in phase will have minimal effect (depending on your window size).
Maybe a nice addition to your slow FT — it will make it more sensitive to detecting a signal with a certain frequency per se, irrespective of its phase. (I’m not saying that this will change the outcome of your analysis, though).
Frank
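A minimal sketch of Frank’s suggestion, assuming the hypothetical slow_ft() helper from earlier (the window width and step are arbitrary choices):

```r
# Windowed amplitude spectrum, averaged across windows with phase ignored.
# slow_ft() returns magnitudes only, so the phase never enters the average.
windowed_mean_amp <- function(x, periods, width, step = width %/% 2) {
  starts <- seq(1, length(x) - width + 1, by = step)
  amps <- sapply(starts, function(s) slow_ft(x[s:(s + width - 1)], periods))
  rowMeans(amps)        # mean magnitude per trial period
}
```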
Truthseeker, I think it’s important to note that the Thames frost fairs were not just a peculiarity of the low temperatures found during the Little Ice Age but just as much a result of the structure of the old London Bridge (removed in 1825). The bridge had so much structure in the water that it acted as a weir. It slowed the pace of the water greatly and also prevented saline tidal waters from getting past it upstream. The bridge also acted as a buffer to collect ice that formed, allowing more ice to form in the slow, fresh waters upstream of the bridge.
The same temperatures today would not cause the Thames to freeze. It is a much faster, developed river.
Willis
“This is the amplitude of the cycle divided by the error in the amplitude.”
If I understand your SFT, then it is akin to a Fourier series using a linear sequence of periods (rather than a linear range of frequencies).
Then the amplitude corresponds to the coefficients(?). In which case the t-stat is…
t = coefficient / standard error of coefficient
…and if I understand you, then this is what you’ve done.
But the result of the t-stat depends on how you’re computing your coefficients for each period: as paired operations or as a multivariate fit. If you do a multivariate regression, you’re not guaranteed to get the same t-statistic as you’d get if you did each fit separately. The other thing to be wary of is collinearity, which may be an issue when using linear ranges of periods and when assessing overtones of fundamental frequencies.
You may have already addressed all this.
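The paired-versus-multivariate point is easy to demonstrate with made-up data; in this sketch the two trial periods are deliberately chosen close together, so their sine/cosine regressors are nearly collinear over the record:

```r
# Paired vs. joint fits: with nearly collinear regressors, the joint
# regression inflates the standard errors and changes the t-statistics.
set.seed(1)
tt <- 1:4200                        # ~350 years of monthly data
x  <- rnorm(4200)                   # placeholder series
p1 <- 284; p2 <- 288                # two nearby trial periods, in months
s1 <- sin(2 * pi * tt / p1); c1 <- cos(2 * pi * tt / p1)
s2 <- sin(2 * pi * tt / p2); c2 <- cos(2 * pi * tt / p2)
summary(lm(x ~ s1 + c1))            # each period fitted separately
summary(lm(x ~ s1 + c1 + s2 + c2))  # joint fit: compare the t values
```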
In my comment above I referred to the CET for June.
Here is the periodogram:
http://www.vukcevic.talktalk.net/CET-June-spec.htm
As the accuracy of the data improves with time, the number of spectral components falls radically. The 1800-1920 and 1920-2013 sections were selected to represent two distinct solar magnetic periods.
It can be seen that the solar magnetic cycle period of about 22 years (it varies with the length of the solar cycles) and the ~17.5-year period are present within all sections.
Anyone keen on the solar magnetic cycles will have a major problem overcoming the ‘Svalgaard test’, while the 17.5-year period is going to be an even more problematic one. It ‘originates’ deep within the Earth’s interior, and is basically the time in which angular momentum disturbances within the core propagate from the equatorial to the polar latitudes (according to JPL’s Dr. J. Dickey); surprisingly, it reappears in the N. Atlantic tectonics.
The average of recent winters (2008/9, 2009/10 and 2010/11) shows cold conditions over northern Europe and the United States and mild conditions over Canada and the Mediterranean, associated with anomalously low and even record low values of the NAO. This period also had easterly anomalies in the lower stratosphere. Given our modelling result, these cold winters were probably exacerbated by the recent prolonged and anomalously low solar minimum. On decadal timescales the increase in the NAO from the 1960s to 1990s, itself known to be strongly connected to changes in winter stratospheric circulation, may also be partly explained by the upwards trend in solar activity evident in the open solar-flux record. There could also be confirmation of a leading role for the ‘top-down’ influence of solar variability on surface climate during periods of prolonged low solar activity such as the Maunder minimum if the ultraviolet variations used here also apply to longer timescales.
The solar effect presented here contributes a substantial fraction of typical year-to-year variations in near-surface circulation, with shifts of up to 50% of the interannual variability (Fig. 1a,b). This represents a substantial shift in the probability distribution for regional winter climate and a potentially useful source of predictability. Solar variability is therefore an important factor in determining the likelihood of similar winters in future. However, mid-latitude climate variability depends on many factors, not least internal variability, and forecast models that simulate all the relevant drivers are needed to estimate the range of possible winter conditions.
http://yly-mac.gps.caltech.edu/z_temp/4%20soozen/zjunk/solar%20cycle%20master%20/Ineson2011%20*%20.pdf
steveta_uk says:
May 9, 2014 at 1:39 am
……
I did something similar some time ago; laborious, and with no strong conclusive result.
However, I’ve been tracking the CET daily max, and found there is often a variable ~27-day pseudo-cycle; the most recent well-defined one is found during 4 months in the second part of 2012. Normally daily temperature variability is subject to a multitude of factors, but ~27 days would be related either to the lunar tides or to a solar factor (the Bartels rotation; the Sun has a magnetic lump, or as solar people refer to it, a ‘preferred sunspot longitude’).
see HERE
A superficial look at the amplitudes would suggest the change is related either to the TSI or to the solar magnetic field, rather than to the Atlantic tides.
One problem is a variable delay (between 0 and 7 days); however, amplitude ‘oscillations’ of between 3 and 4 degrees C peak-to-peak are rather large to be totally ignored.
@vukcevic
Can you perhaps check the CET max data for the 20-24 year Hale cycle?
A book I’ve found very interesting is ‘Climate Change’ by William James Burroughs. His C.V. includes seven years at the UK National Physical Laboratory researching atmospheric physics. His comments on the CET are I think relevant here.
On p107 of the first edition of his book (published in 2001) he comments: ‘The CET series confirms the exceptionally low temperatures of the 1690s and, in particular, the cold late springs of this decade. Equally striking is the sudden warming from the 1690s to the 1730s. In less than forty years the conditions went from the depths of the Little Ice Age to something comparable to the warmest decades of the twentieth century. This balmy period came to a sudden halt with the extreme cold of 1740 and a return to colder conditions, especially in the winter half of the year.’
He also comments:
‘A more striking feature is the general evidence of interdecadal variability. So, the poor summers of the 1810s are in contrast to the hot ones of the late 1770s and early 1780s, and around 1800. The same interdecadal variability showed up in more recent data. The 1880s and 1890s were marked by more frequent cold winters, while both the 1880s and 1910s had more than their fair share of cool wet summers.’