Cycling in Central England

Guest Post by Willis Eschenbach

Looking at a recent article over at Tallbloke’s Talkshop, I realized I’d never done a periodogram looking for possible cycles in the entire Central England Temperature (CET) series. I’d looked at part of it, but not all of it. The CET is one of the longest continuous temperature series, with monthly data starting in 1659. At the Talkshop, people are discussing the ~24 year cycle in the CET data, and trying (unsuccessfully but entertainingly in my opinion) to relate various features of Figure 1 to the ~22-year Hale solar magnetic cycle, or a 5:8 ratio with two times the length of the year on Jupiter, or half the length of the Jupiter-Saturn synodic cycle, or the ~11 year sunspot cycle. They link various peaks to most every possible cycle imaginable, except perhaps my momma’s motor-cycle … here’s their graphic:

Figure 1. Graphic republished at Tallbloke’s Talkshop, originally from the Cycles Research Institute.

First off, I have to say that their technique of removing a peak and voila, “finding” another peak is mondo sketchy on my planet. But setting that aside, I decided to investigate their claims. Let’s start at the natural starting point—by looking at the CET data itself.

Figure 2 shows the monthly CET data as absolute temperatures. Note that in the early years of the record, temperatures were only recorded to the nearest whole degree. Provided that the rounding is symmetrical, this should not affect the results.

Figure 2. Central England Temperature (CET). Red line shows a trend in the form of a loess smooth of the data. Black horizontal line shows the long-term mean temperature.

Over the 350-year period covered by the data, the average temperature (red line) has gone up and down about a degree … and at present, central England is within a couple tenths of a degree of the long-term mean, which also happens to be the temperature when the record started … but I digress.

Figure 3 shows my periodogram of the CET data shown in Figure 2. My graphic is linear in period rather than linear in frequency as is their graphic shown in Figure 1.

Figure 3. Periodogram of the full CET record, for all periods from 10 months to 100 years. Color and size both show the p-value. Black dots show the cycles with p-values less than 0.05, which in this case is only the annual cycle (p=0.03). P-values are all adjusted for autocorrelation. The yellow line shows one-third the length of the ~350 year dataset. I consider this a practical limit for cycle detection. P-values for all but the one-year cycle are calculated after removal of the one-year cycle.

I show the periodogram in this manner to highlight once again the amazing stability of the climate system. One advantage of the slow Fourier transform I use is that the answers are in the same units as the input data (in this case °C). So we can see directly that the average annual peak-to-peak swing in the Central England temperature is about 13°C (23°F).

And we can also see directly that other than the 13°C annual swing, there is no other cycle of any length that swings even a half a degree. Not one.
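
For what it’s worth, the heart of such a “slow Fourier transform” can be sketched in a few lines. My actual code is in R (linked below); the Python sketch here is illustrative only, with made-up names and synthetic data. The idea: at each trial period, fit a sine and cosine by least squares, and report the peak-to-peak swing, which comes out in the input units.

```python
import numpy as np

def cycle_swing(y, period):
    """Least-squares fit of one sinusoid at a fixed period (in samples).
    Returns the peak-to-peak swing, in the same units as the input data."""
    t = np.arange(len(y))
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period),
                         np.ones(len(y))])
    a, b, _ = np.linalg.lstsq(X, y, rcond=None)[0]
    return 2 * np.hypot(a, b)  # peak-to-peak = 2 * amplitude

# Synthetic monthly series with a 13 degree C annual swing plus noise
rng = np.random.default_rng(0)
months = np.arange(12 * 350)
temps = 9.0 + 6.5 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1.5, months.size)
print(round(cycle_swing(temps, 12), 1))  # recovers the ~13 degree C annual swing
```
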

So that is the first thing to keep in mind regarding the dispute over the existence of purported regular cycles in temperature. No matter what cycle you might think is important in the temperature record, whether it is twenty years long or sixty years or whatever, the amplitude of the cycle is very small, tenths of a degree. No matter if you’re talking about purported effects from the sunspot cycle, the Hale solar magnetism cycle, the synodic cycle of Saturn-Jupiter, the barycentric cycle of the sun, or any other planetasmagorica, they all share one characteristic. If they’re doing anything at all to the temperature, they’re not doing much. Bear in mind that without a couple hundred years of records and sophisticated math we couldn’t even show and wouldn’t even know such tiny cycles exist.

Moving on: folks often don’t like to be reminded of how tiny the temperature cycles actually are. So of course, the one-year cycle is usually not shown in a periodogram (too depressing). Figure 4 is the usual view, showing the same data but starting at two years.

Figure 4. Closeup of the same data as in Figure 3. Unlike in Figure 3, the statistical significance calculations were done after removal of the 1-year cycle. Also unlike the previous figure, in this and succeeding figures the black dots show all cycles that are significant at a looser threshold, p < 0.10 instead of p < 0.05. This is because even after removing the annual signal, not one of these cycles is significant at p < 0.05.

Now, the first thing I noticed in Figure 4 is that we see the exact same largest cycles in the periodogram that Tallbloke’s source identified in their Figure 1. I calculate those cycle lengths as 23 years 8 months, and 15 years 2 months. They say 23 years 10 months and 15 years 2 months. So our figures agree to within expectations, always a good first step in moving forwards.

So … since we agree about the cycle lengths, are they right to try to find larger significance in the obvious, clear, large, and well-defined 24-year cycle? Can we use that 24-year cycle for forecasting? Is that 24-year cycle reflective of some underlying cyclical physical process?

Well, the first thing I do to answer that question is to split the data in two, an early and a late half, and compare the analyses of the two halves. I call it the bozo test; it’s the simplest of all possible tests, requiring no further data and no special equipment. Figures 5a-b below show the periodograms of the early and late halves of the CET data.

Figure 5a-b. Upper graph shows the first half of the CET data and the lower graph shows the second half.

I’m sure you can see the problem. Each half of the data is a hundred and seventy-five years long. The ~24-year cycle exists quite strongly in the first half of the data. It has a swing of over six tenths of a degree on average over that time, the largest seen in these CET analyses.

But then, in the second half of the data, the 24-year cycle is gone. Pouf.

Well, to be precise, the 24-year peak still exists in the second half … but it’s much smaller than it was in the first half. In the first half, it was the largest peak. In the second half, it’s like the twelfth largest peak or something …

And on the other hand, the ~15 year cycle wasn’t statistically significant at a p-value less than 0.10 in the first half of the data, and it was exactly 15 years long. But in the second half, it has lengthened almost a full year to nearly 16 years, and it’s the second largest cycle … and in the second half, the largest cycle is 37 months.

Thirty-seven months? Who knew? Although I’m sure there are folks who will jump up and say it’s obviously 2/23rds of the rate of rotation of the nodes on the lunar excrescences or the like …

To me, this problem overrides any and all attempts to correlate temperatures to planetary, lunar, or tidal cycles.
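
The split-half “bozo test” is easy to replicate. Here is an illustrative Python sketch (the function names and synthetic data are mine, not my R code), showing how a cycle that is present only in the first half of a record fails the test.

```python
import numpy as np

def swing(y, period):
    """Peak-to-peak swing of a least-squares sinusoid fit at `period` (in samples)."""
    t = np.arange(len(y))
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period),
                         np.ones(len(y))])
    a, b, _ = np.linalg.lstsq(X, y, rcond=None)[0]
    return 2 * np.hypot(a, b)

def bozo_test(y, period):
    """Split the series in half and report the cycle's swing in each half.
    A physically real cycle ought to show comparable strength in both."""
    half = len(y) // 2
    return swing(y[:half], period), swing(y[half:], period)

# ~350 years of monthly noise with a 24-'year' cycle in the first half only
rng = np.random.default_rng(1)
n = 12 * 350
y = rng.normal(0, 0.5, n)
y[:n // 2] += 0.3 * np.sin(2 * np.pi * np.arange(n // 2) / (24 * 12))
first, second = bozo_test(y, 24 * 12)
print(round(first, 2), round(second, 2))  # strong in the first half, gone in the second
```
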

My conclusion? Looking for putative cycles in the temperature record is a waste of time, because the cycles appear and disappear on all time scales. I mean, if you can’t trust a 24-year cycle that lasts for one hundred seventy-five years, then just who can you trust?

w.

De Costumbre: If you object to something I wrote, please quote my words exactly. It avoids tons of misunderstandings.

Data and Code: I’ve actually cleaned up my R code and commented it and I think it’s turnkey. All of the code and data is in a 175k zip file called CET Periodograms.

Statistics: For the math inclined, I’ve used the method of Quenouille to account for autocorrelation in the calculation of the statistical significance of the amplitude of the various cycles. The method of Quenouille provides an “effective n” (n_eff), a reduced count of the number of datapoints to use in the various calculations of significance.

n_eff = n × (1 − r) / (1 + r), where r is the lag-1 autocorrelation of the data

To use the effective n (n_eff) to determine if the amplitude of a given cycle is significant, I first need to calculate the t-statistic. This is the amplitude of the cycle divided by the error in the amplitude. However, that error in amplitude is proportional to

error ∝ 1 / √(n − 1)

where n is the number of data points. As a result, using our effective N, the error in the amplitude is

error ∝ 1 / √(n_eff − 1)

where n_eff is the “effective N”.

From that, we can calculate the t-statistic, which is simply the amplitude of the cycle divided by the new error.

Finally, we use that new error to calculate the p-value, which is

p-value = t-distribution(t-statistic , degrees_freedom1 = 1 , degrees_freedom2 = n_eff)

At least that’s how I do it … but then I was born yesterday, plus I’ve never taken a statistics course in my life. Any corrections gladly considered.
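
Putting the pieces together, here is a rough Python sketch of the whole significance calculation. It assumes one common form of Quenouille’s adjustment, n_eff = n(1 − r)/(1 + r) with r the lag-1 autocorrelation, and for simplicity uses a normal approximation for the p-value in place of the t/F distribution (reasonable when n_eff is large). The function names and test data are mine; my actual implementation is in the R code linked above.

```python
import math
import numpy as np

def quenouille_n_eff(y):
    """Effective sample size n_eff = n * (1 - r) / (1 + r), where r is the
    lag-1 autocorrelation (an assumed form of the Quenouille adjustment)."""
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(y[:-1], y[1:])[0, 1]
    return len(y) * (1 - r) / (1 + r)

def cycle_significance(y, period):
    """Fit a sinusoid at `period` (in samples); return its amplitude and an
    autocorrelation-adjusted two-sided p-value."""
    t = np.arange(len(y))
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    coef, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    amp = np.hypot(coef[0], coef[1])
    resid = yc - Xc @ coef
    n_eff = quenouille_n_eff(y)
    # the error in the amplitude shrinks like 1 / sqrt(n_eff - 1)
    se = resid.std(ddof=2) * math.sqrt(2.0 / (n_eff - 1))
    t_stat = amp / se
    # normal approximation to the t-distribution p-value (fine for large n_eff)
    p = math.erfc(t_stat / math.sqrt(2))
    return amp, p

# A strong, genuine cycle comes out highly significant
rng = np.random.default_rng(2)
t = np.arange(4200)
y = np.sin(2 * np.pi * t / 287) + rng.normal(0, 0.5, t.size)
amp, p = cycle_significance(y, 287)
print(round(amp, 2), p < 1e-6)
```
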

159 thoughts on “Cycling in Central England”

  1. I appreciate your going over stuff like this, because I’m lazy. If I cared enough to go dig I would most likely by robot jesus deceive myself into thinking I’d found something. Looking at that chaos feels like drowning to me. But I’ve begun to grasp why you and AW find the cyclomaniacs uninteresting.
    Thanks Willis.

  2. except perhaps my momma’s motor-cycle

    My wife has a cycle.

    & I have four (rideable, with a couple of frames besides).

    They’re great for getting around a city.

    My wife’s cycle is also good for predicting when I should keep my opinion to myself.

  3. Much appreciated post Willis. Always good to have the thinking cells poked again. I’m not sure I trust the CET record so much as to draw too many conclusions myself. I have an image of a man holding up a wet finger and making an educated guess for the first half of it ;)

    Perhaps I’m just being grumbly. I have a fasting blood test in the morning. I’m hungry and could murder a glass of Prosecco :D

    Whilst from previous brief conversations with you it’s clear you and I have some areas of interest in common I do think that your idea of cycling in England and mine are very different things ;)

    https://pbs.twimg.com/media/BlXseTKIEAA-dkT.jpg:large

    Forgive the levity. It’s nearly 2am here and I need sugar ;)

  4. I have an image of a man holding up a wet finger and making an educated guess for the first half of it ;)

    Hence the use of the term “Digital Forecast” on the meteorology web-sites.

  5. Call me a maniac if you want, but the fact that climatic cycles exist is as close to indisputable as you can get in paleo proxy data. These (pseudo-) cycles indubitably have fluctuated at least quasi-regularly during the Pleistocene, first on a roughly 40,000 year glacial/interglacial basis, then more recently at ~100,000 year intervals, with interglacials varying in length from a few thousand to tens of thousands of years’ duration, apparently under control of orbital mechanics.

    These cycles were first proposed by Adhemar, Croll & others in the 19th century, then Milankovitch while a PoW during WWI, & finally convincingly confirmed by Hays, Imbrie & Shackleton in 1976, as better paleoclimatic data became available, & not shown false since.

    Hays, J. D.; Imbrie, J.; Shackleton, N. J. (1976). “Variations in the Earth’s Orbit: Pacemaker of the Ice Ages”. Science 194 (4270): 1121–1132

    A good case can also be made for much longer term icehouse/hothouse cycles in Earth’s climatic history.

    Maintaining the existence of climatic cycles on shorter time scales (decennial, centennial or millennial) during the Holocene or previous interglacials may well however be the maniacal sun-stroked maunderings of Maunder Minimum temperature eddy advocates, but count me among them. Along with Eddy. And Lamb. And Herschel.

  6. milodonharlani says:
    May 8, 2014 at 5:59 pm

    Beg pardon Milodonharlani. There are some cycles. Sun comes up roughly every 24 hours. Seasons are awfully regular. Tides are basically predictable. So on. The graphs of ice ages I’ve seen look impressively regular to me; while I don’t want to take a stand and defend that position with you, I’m not about to quarrel with it right now.
    I didn’t mean for my statement to be construed that broadly.

  7. Stark Dickflüssig says:
    May 8, 2014 at 5:31 pm
    …My wife’s cycle is also good for predicting when I should keep my opinion to myself.

    Amen, brother!!

  8. Next up willis. Paleo cyclemania.
    Wrt your method. Intuitive. Straightforward. And easy to interpret.

  9. Willis, I have watched with interest as you grapple with this analysis and tools for exploring your ideas. A few suggestions…

    Google the Lomb Periodogram – I think you will find this interesting.

    Rather than splitting the dataset in two, randomize the order of the dataset (which should yield NO cycles by construction) and redo your analysis. Repeat several times and see how often you get a peak of that size at 24yrs. If it happens often, you can write off the observed peak as noise.

    One issue you have is that Fourier analysis is best suited for identifying regular behaviors. Climate is not regularly periodic, so is there a better tool for identifying irregular regularity, if you will? I’d suggest that you look at a simple Wavelet Transform.
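
    For what it’s worth, the shuffle test suggested here is straightforward to sketch in Python (illustrative code with made-up names and synthetic data, not Willis’s R):

```python
import numpy as np

def swing(y, period):
    """Peak-to-peak swing of a least-squares sinusoid fit at `period` (in samples)."""
    t = np.arange(len(y))
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period),
                         np.ones(len(y))])
    a, b, _ = np.linalg.lstsq(X, y, rcond=None)[0]
    return 2 * np.hypot(a, b)

def shuffle_p_value(y, period, n_shuffles=200, seed=0):
    """Monte-Carlo null: shuffling destroys any real cycle, so count how
    often shuffled data matches or beats the observed swing."""
    rng = np.random.default_rng(seed)
    observed = swing(y, period)
    beats = sum(swing(rng.permutation(y), period) >= observed
                for _ in range(n_shuffles))
    return (beats + 1) / (n_shuffles + 1)

# A genuine 24-'year' cycle in noisy monthly pseudo-data is easily detected
rng = np.random.default_rng(3)
t = np.arange(12 * 350)
y = 0.3 * np.sin(2 * np.pi * t / (24 * 12)) + rng.normal(0, 0.5, t.size)
pval = shuffle_p_value(y, 24 * 12)
print(pval)
```

    One caveat: shuffling destroys the autocorrelation along with any cycle, so for strongly autocorrelated (red-noise) series this null is too easy to beat; block shuffles or AR(1) surrogates give a fairer comparison.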

  10. The periodicity analysis is always intriguing. I’d be interested to see you lower the split point (in terms of years) to see how far back you can get until the 24 year signal spikes up in the latter portion of the signal. That might give insight as to why it’s happening (such as it being in a time span that had particular precision limitations).

  11. Willis, you wrote “First off, I have to say that their technique of removing a peak and voila, “finding” another peak is mondo sketchy on my planet”, so I want to explain a little more. When a method is used, as I do, that interpolates the spectrum between the FFT points so as to find the accurate frequency the resulting spectrum does not just have a sharp peak at a point. Rather, it has a peak with the height dropping off each side and going slightly negative and then oscillating positive and negative with diminishing amplitude as the frequency changes. Removal of this component will allow any other peaks to be seen which are hiding in the shadow of that peak. In this case the second peak can already be seen as a broadened shoulder on one side. If you have any doubt about this, I suggest generating a data set of the length of the one used here with real periods of 23.9 years and 22.2 years (of about 70% the amplitude I think). Then do a spectral analysis of this and remove the major component to see what remains.

  12. Wiillis, I don’t know how you calculate your probabilities of cycles. I don’t think that they should be linear in relation to degrees amplitude. I use Bartel’s Test which is the best test for cycles accuracy. When I use rate of change of temperature then the 23.9 year cycle has p=0.002 and the 15.15 year cycle has p=0.02 so both are significant. If only straight temperature is used then the 23.9 year cycle has p=0.01. The remaining Hale period when I remove the 23.9 year cycle is not significant, which is why I say that “maybe”.

  13. Figure 3: Beautiful noise !

    Figure 4 through 5ab: The Phantoms Of The Opera !

    :-)

  14. Willis, if there are both 23.9 and ~22.2 year cycles in temperature, then they will form beats with a period of approximately 300+ years. Therefore they will support each other and almost cancel each other exactly as you found in your two halves.
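
    For reference, the beat period of two nearby cycles is 1/|1/P1 − 1/P2|, so a two-line check (mine, for illustration) confirms the “300+ years” figure:

```python
# Beat period of 22.2-year and 23.9-year cycles
p1, p2 = 22.2, 23.9
beat = 1 / abs(1 / p1 - 1 / p2)  # equivalently p1 * p2 / (p2 - p1)
print(round(beat))  # about 312 years
```
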

  15. Purely for completeness sake, how does the 1 year cycle fare under the “bozo” test?

  16. Steven Mosher says:
    May 8, 2014 at 6:58 pm

    Next up willis. Paleo cyclemania.
    Wrt your method. Intuitive. Straightforward. And easy to interprete.

    No kidding. I just looked into some of the basics and the problems and gave myself a headache. There seems to be some order there, but it’s not simple, cut, and dried. How do you handle the situation when periodicity changes for no apparent reason?

    Heck I sometimes think I find meaningful patterns in my breakfast cheerios, maybe I should stick to that.

  17. Willis,

    You’re not the smartest guy in the room.

    You’re the smartest guy ’round these parts.

  18. Note that in the early years of the record, temperatures were only recorded to the nearest whole degree.
    I would be surprised if it were that small. You have your series in degC but the best to hope for in early records would be [2] degF. Also are not the early records proxies, like diary entries, rather than real temperature measurements?

  19. Oops, read 2 degF in above, in the belief that early thermometers are calibrated at 2 deg intervals

  20. Willis,
    Now that you’ve done CET, have you thought about doing the GISP2 ice core, which has thousands of years of record?
    The CET data show ‘reoccurring events’ that probably wouldn’t show up in any precise cycles but they do match similar recurring events in the GISP2, 10Be, 14C, and solar records.
    Best regards,
    Don

  21. And another correction….
    The old thermometers that I have seen with the 2degF scale are not as early as I had thought – they are early 19th century, like this one,

    (zoom in to see the scale)
    and Fahrenheit did not even invent his scale till 1724, so how were early measurements taken and to what accuracy? This is relevant as the 23 year peak is only visible in Willis’ analysis in the early data.

  22. CET is essentially one dimensional data. The cycles are 3d. It isn’t really surprising that an apparent cycle, even one of long duration, could be overwritten for a while. Since you play a stringed instrument, you have a real understanding of harmonics, and sympathetic harmonics.

    Statistics are tools for the blind (and the deaf). So far your statistics have shown that volcanoes have no effect on climate, 10Be flux has nothing to do with cosmic rays, and there are no discernible cycles in temperature. Now I can find no fault with your technique, but at some point don’t you have to wonder if some fault lies in the statistical tools themselves?

    When statistics argue with fish, personally, I’m going with the fish.

  23. Now I can find no fault with your technique, but at some point don’t you have to wonder if some fault lies in the statistical tools themselves?

    Humans are great at seeing patterns. So good, in fact, that we can see patterns where there are none. & as Feynman pointed out: the easiest person to fool is yourself. The beauty of statistics (properly applied) is that it shows us that there are not patterns where we tell ourselves there are patterns.

    Like a great many other things, statistics can only show us what is not, it can never conclusively show us what is (no matter how hard some people would try to say otherwise).

  24. From your graph it seems a small change in average temperatures can have a large effect, such as the Thames freezing over on a regular basis?

  25. Nice article Willis. There are two CET data sets. Gordon Manley constructed the one from 1659 onwards based on monthly readings and the early part included references to diaries and other observations. A perfectly good technique, much used by other researchers to determine temperatures, especially as the range of CET temperatures is relatively limited.

    The other CET data base was compiled by David Parker and uses daily instrumental records and goes back to 1772 and is the preferred one used by the MET Office. This latter one has an allowance for UHI from 1976

    A couple of years ago I reconstructed CET to 1538 and am currently going back further through the use of such places as the Met Office library and archives.

    http://judithcurry.com/2011/12/01/the-long-slow-thaw/

    Within this article is a detailed explanation as to how CET was constructed.

    I think around 2010 the average mean temperature for the year was identical to the first year in the older record-1659. As Willis comments it is currently within a few tenths of a degree of the earlier years.

    It reached a sharp peak around 2000 and has been in decline since

    http://www.metoffice.gov.uk/hadobs/hadcet/

    I had discussions with David Parker a couple of months ago. CET uses three stations in central England to work out an average. The previous 3 that were used were felt to be a little ‘warm’ and have been replaced. (we are talking about tenths of a degree here)

    Personally I would doubt that the peak reached in 2000 was as great as indicated due to these warmer stations. Anyone living in England would be very depressed to feel they have already lived through the ‘warmest decade ever’ and things are now on the slide.

    The 1730’s were, according to Phil Jones, within a few tenths of the peak and this caused him to revise his opinions on the rate of natural variability, which was greater than he had previously believed.

    As far as cycles go, I struggle to find them, although long term weather patterns undoubtedly resurface. For example, warm prevailing westerly winds are frequently replaced by colder easterlies for identifiable periods, which then switch back again, and there is no doubt we have decades that are wetter or drier than others. I can’t see however that there is much to warrant the word ‘cycles’ in these.

    I don’t know if any other individual country data sets also make an allowance for UHI. I don’t believe there is any in the ‘global’ data sets.

    tonyb

  26. Not 100% sure about the Fourier transform here ….. but it seems to me that the sunspot cycle is slightly variable …. less sunspots mean longer period … and the Little Ice Age occupied a large portion of the record. so wouldn’t that translate into a wider frequency peak or as you say, a peak with a shoulder on it …..

  27. I have a question;

    Where in the raw data is the “Little Ice Age” where the Thames froze over enough for markets to be held on the ice on a regular basis?

  28. I disagree ONLY with the part of your conclusion regarding the exercise being a waste of time.
    I thank you for your time researching and posting!

    :)

  29. Concentration of mass of the planets relative to the ecliptic has an impact on Earth’s climate.

  30. milodonharlani says:
    May 8, 2014 at 5:59 pm

    Call me a maniac if you want, but the fact that climatic cycles exist is as close to indisputable as you can get in paleo proxy data. These (pseudo-) cycles indubitably have fluctuated at least quasi-regularly during the Pleistocene, first on a roughly 40,000 year glacial/interglacial basis, then more recently at ~100,000 year intervals, with interglacials varying in length from a few thousand to tens of thousands of years’ duration, apparently under control of orbital mechanics.

    Thanks, milodon, I have no quarrel with that. The problem, as always, is that we have “pseudo-cycles” that fluctuate “quasi-regularly” at all time scales. For example, the ice ages have persisted for a million years. And as you point out, they are correlated with the “Milankovich cycles”.

    Bear in mind, however, that it’s been a half billion (500 million) years since life took over in the Cambrian explosion. Out of all of that time, only a million years or so have had such Milankovich-related temperature oscillations … despite the fact that the Milankovich variations went on for all of that 500 million years.

    So although the ice-age/inter-glacial oscillation seems solid and fixed and inexorable to us, and has been going on for a million years, it appeared without warning after 499 million years with no such regular ice-age/inter-glacial oscillation. Similarly, there is no reason to assume that that cycling will last forever …

    My point is simple. Nature at its chaotic finest in the form of the global temperature doesn’t generally (1 time out of 500) do regular astronomically related ice ages. We don’t know why they have appeared suddenly, with no previous history of this kind of what you call “quasi-regular” behavior. We don’t know how long they will last.

    Finally, if I had my way, I would ban terms like “quasi-regularly” and “pseudo-cycles” and “approximately 60-year cycles” from scientific discussions. I don’t know what those terms mean. Is a 51-year cycle approximately 60 years? Is a cycle that exists everywhere but in one short section of the record a “pseudocycle”?

    The problem is not “pseudo” or “quasi” cycles. The problem is that we have very real cycles that appear out of nothingness, exist for a while, and then disappear, or at least drop back down into the weeds. They are not “pseudo-cycles”, any more than the ~24-year cycle in the first hundred years of the record is a “pseudo-cycle”. It is a very real cycle in the measurements … except that in the second half of the data it disappears entirely.

    Much appreciated,

    w.

  31. Ray Tomes says:
    May 8, 2014 at 7:41 pm

    Wiillis, I don’t know how you calculate your probabilities of cycles. I don’t think that they should be linear in relation to degrees amplitude. I use Bartel’s Test which is the best test for cycles accuracy. When I use rate of change of temperature then the 23.9 year cycle has p=0.002 and the 15.15 year cycle has p=0.02 so both are significant. If only straight temperature is used then the 23.9 year cycle has p=0.01. The remaining Hale period when I remove the 23.9 year cycle is not significant, which is why I say that “maybe”.

    Thanks, Ray, good to hear from you. I thought I explained my calculation of the p-values. It’s also in my code. Not sure what your question is.

    Regarding your calculations, did you adjust for autocorrelation, and if so how?

    w.

  32. The CET annual temperature spectrum doesn’t reveal anything in particular, so I suggest a different approach.
    CET is highly influenced by a conflict between the ‘warming’ from the N. Atlantic SST, and ‘cooling’ from the Icelandic low atmospheric pressure, two totally different beasts.
    If one is looking for a solar input, the month of June, the time of the summer solstice and the highest insolation, is the set of data to look at; besides, its 350-year trend is rather interesting. In the earlier centuries solar cycles were a bit longer, so when the data is split, the effect may show. June is, of course, only one month of twelve, and looking at it alone is a clear case of cherry-picking, but it does give another perspective.

  33. Ray Tomes says:
    May 8, 2014 at 7:36 pm

    Willis, you wrote “First off, I have to say that their technique of removing a peak and voila, “finding” another peak is mondo sketchy on my planet”, so I want to explain a little more.

    Thanks, Ray, explanations are always welcome.

    When a method is used, as I do, that interpolates the spectrum between the FFT points so as to find the accurate frequency the resulting spectrum does not just have a sharp peak at a point. Rather, it has a peak with the height dropping off each side and going slightly negative and then oscillating positive and negative with diminishing amplitude as the frequency changes. Removal of this component will allow any other peaks to be seen which are hiding in the shadow of that peak. In this case the second peak can already be seen as a broadened shoulder on one side.

    While this is possible in theory, and works well where there are sharp spectral peaks, in natural datasets the peaks tend to be wide, sometimes quite wide. At that point it becomes a judgement call as to what you are removing … and of course removing something that isn’t there is equivalent to adding a spurious signal. As a result, I prefer to avoid the procedure.

    If you have any doubt about this, I suggest generating a data set of the length of the one used here with real periods of 23.9 years and 22.2 years (of about 70% the amplitude I think). Then do a spectral analysis of this and remove the major component to see what remains.

    Glad to, you’ll see what I mean. In my view you just need a more accurate probe … let me recommend the slow Fourier transform. Here is my periodogram of 350+ years of pseudo-data composed of two sine waves with cycles of 23.9 and 22.2 years.

    As you can see, the two peaks stand out clearly without any need to “remove the major component”.

    (Yes, I see the ringing, and I know I can filter it out to get a more accurate answer … I’m working on understanding the best way to do that, and experimenting with some new ideas I have in that regard.)

    w.
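
    Along the same lines, the two-sine-wave experiment is easy to reproduce. This Python sketch (mine, for illustration, not Willis’s R code) probes the pseudo-data at the true periods and resolves both peaks directly, without removing either one:

```python
import numpy as np

def swing(y, period):
    """Peak-to-peak swing of a least-squares sinusoid fit at `period` (in samples)."""
    t = np.arange(len(y))
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period),
                         np.ones(len(y))])
    a, b, _ = np.linalg.lstsq(X, y, rcond=None)[0]
    return 2 * np.hypot(a, b)

# 350 years of monthly pseudo-data: a 23.9-year cycle plus a 22.2-year
# cycle at 70% of its amplitude, as described in the exchange above
t = np.arange(12 * 350)
y = (1.0 * np.sin(2 * np.pi * t / (23.9 * 12)) +
     0.7 * np.sin(2 * np.pi * t / (22.2 * 12)))

# Probing at arbitrary (non-FFT-grid) periods shows both peaks at once
s1 = swing(y, 23.9 * 12)
s2 = swing(y, 22.2 * 12)
print(round(s1, 2), round(s2, 2))
```
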

  34. Willis said:

    “The problem is not “pseudo” or “quasi” cycles. The problem is that we have very real cycles that appear out of nothingness, exist for a while, and then disappear, or at least drop back down into the weeds. They are not “pseudo-cycles”, any more than the ~24-year cycle in the first hundred years of the record is a “pseudo-cycle”. It is a very real cycle in the measurements … except that in the second half of the data it disappears entirely. ”

    I agree with that, which is why I tend not to contribute in climate cycle threads.

    tonyb said:

    “As far as cycles go, I struggle to find them, although long term weather patterns undoubtedly resurface. For example warm prevailing westerly winds are frequently replaced by colder easterlies for identifiable periods, which then switch back again, and there is no doubt we have decades that are wetter or drier than others. I can’t see however that there is much to warrant the word ‘cycles’ in these.”

    With which I also agree.

    So, here is the rub:

    There is a constant interplay between bottom up oceanic and top down solar influences on the global energy budget with each exerting a negative system response to the other and each having independent variations with a cyclical component.

    The effect on the climate is to shift global air circulation around as necessary to maintain a steady transmission of solar energy through the Earth system but with variations about the mean as the adjustment processes work first one way and then the other.

    Individual regions are heavily affected by their location relative to the nearest jet stream track or climate zone boundary, with the types of changes that tonyb notes.

    The average global temperature oscillates about the mean but not a lot as Willis notes.

    There are infinite possibilities for local and regional weather / climate variations over time and that, together with internal system chaotic variability, obscures any short term cycles there may be and heavily modulates longer term cycles, sometimes supplementing and sometimes offsetting them.

    Even the longer term cycles come and go as Willis notes in connection with that relatively recent ice age / interglacial cycling.

    We are concerned about short term variations about the mean for the global energy budget.

    Currently, the periodicity most relevant to us is that of the apparent 1000 to 1500 year variability observed within the current interglacial.

    Looking at historical records and paleological evidence as tonyb does so well we can see that natural variations are enough to account for the observations of climate change during the 20th century and the start of the 21st.

    There is no need to invoke any anthropological component on the basis of the data currently available and it is that incorrect invocation that causes the models to diverge from observations over time.

  35. Willis, thanks for that. You know that I am your cycle man. I KNOW there is a 20-24 year cycle. It is called the Hale Nicholson cycle. But you won’t get a good look at it if you look at means. Mainly because there is weather. Also, I am pretty sure that at some stage in the records they went from measuring 4 or 6 times a day (physical observation) taking an average for the day to automatic recording, once a second. Apart from that there is accuracy over time, which is more critical if you keep looking at means.
    I am not familiar with your analysis, so I cannot do it myself, but I will take a bet with you that if you were to look at maximum temperatures, in CET, you will find the elusive cycle that we are all talking about.

  36. If the error equation were true for a quantity such as temperature, we’d be able to get accurate global temperature measurements by getting all 7 billion of us to stick a finger in the air and take a temperature measurement.
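
    That’s the rub: the 1/sqrt(N) rule for the error of a mean applies only to independent, zero-mean errors. A shared bias survives averaging no matter how large N gets, which a two-line simulation shows (an editor’s sketch in Python/NumPy; all the numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
true_temp = 15.0

# Seven billion is overkill; a million "wet fingers" makes the point.
# Each guess has a huge random error (sd 5 degrees) AND a shared 2-degree bias.
guesses = true_temp + 2.0 + rng.normal(0, 5.0, 1_000_000)

# Averaging crushes the random error, but leaves the shared bias untouched
print(abs(guesses.mean() - true_temp))   # ~2.0, not ~0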

  37. Don Easterbrook says:
    May 8, 2014 at 8:44 pm

    Willis,
    Now that you’ve done CET, have you thought about doing the GISP2 ice core, which has thousands of years of record?

    Thanks for the suggestion, Don. I’ve thought about doing a whole pile of things. Right now, I’m trying to improve and sharpen my tools.

    Regards,

    w.

    “But at my back, I always hear,
    Time’s winged chariot drawing near …”

  38. gymnosperm says:
    May 8, 2014 at 9:53 pm

    CET is essentially one dimensional data. The cycles are 3d. It isn’t really surprising that an apparent cycle, even one of long duration, could be overwritten for a while. Since you play a stringed instrument, you have a real understanding of harmonics, and sympathetic harmonics.

    Statistics are tools for the blind (and the deaf).

    Sadly, regarding the real import of numbers and probabilities, humans are indeed blind and deaf. As a result, we invented statistics to keep ourselves from being misled by apparent patterns.

    So far your statistics have shown that volcanoes have no effect on climate, 10Be flux has nothing to do with cosmic rays, and there are no discernible cycles in temperature. Now I can find no fault with your technique, but at some point don’t you have to wonder if some fault lies in the statistical tools themselves?

    Mmm … I wouldn’t describe it in those terms. What I’ve done is look for cycles, and invite people to point out the ones I’ve missed.

    However, I didn’t say that the 10Be flux has “nothing to do with cosmic rays”. Since 10Be is generated inter alia by cosmic rays, that’s nonsense. What I said is that I could find no ~11-year sunspot cycles in the 10Be data, a very different thing. I invited people to show me where they were, using your choice of statistical tools … to date, neither you nor anyone has shown such a thing.

    I also didn’t say that volcanoes “have no effect on climate”. What I have shown is that the effect on global temperatures is localized and short-lived. And I didn’t use periodicity analysis for that at all, different statistical tools entirely.

    I also have not said that there are “no discernible cycles in climate”, that’s madness. I have measured the periods of the cycles that exist, and discussed them at length. They are quite real … and yet they appear and disappear without notice. That’s the problem.

    In doing these analyses, I’ve used a variety of statistical tools. So no, I don’t wonder if the tools I’m using are at fault. I’ve invited people to use their tools. They can’t find what I can’t find. Makes me think it’s not the tools …

    When statistics argue with fish, personally, I’m going with the fish.

    And when Joe Sixpack argues with a statistician, I’m going with the statistician …

    The problem is that humans are famous for seeing patterns where none exist. We are pattern-recognizing mechanisms, we can’t stop ourselves. Our only protection against this is statistics.

    Regards,

    w.

  39. Ray Tomes says:
    May 8, 2014 at 7:45 pm

    Willis, if there are both 23.9 and ~22.2 year cycles in temperature, then they will form beats with a period of approximately 300+ years. Therefore they will support each other and almost cancel each other exactly as you found in your two halves.

    While there is indeed a beat frequency between the two, you should run the numbers before you make the claims. It turns out that with those two frequencies, one 70% the amplitude of the other, the amplitude of the smaller half of the beat frequency is about 50% of the amplitude of the larger half of the beat frequency. This is a long ways from “almost cancel each other exactly” … and also a ways from what we see in the figures of the head post.

    w.
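
    For readers who want to run those numbers themselves: with two sinusoids of periods 22.2 and 23.9 years, and amplitudes in a 1 : 0.7 ratio as discussed above, the beat period and the envelope extremes fall straight out of the standard formulas. A sketch (Python/NumPy, treating the cycles as pure sinusoids):

```python
import numpy as np

# Periods (years) and amplitudes of the two putative cycles
p1, p2 = 22.2, 23.9
a1, a2 = 1.0, 0.7

# Beat period of two close frequencies: 1 / |1/p1 - 1/p2|
beat_period = 1.0 / abs(1.0 / p1 - 1.0 / p2)   # ~312 years

# Envelope of the summed signal over one full beat cycle
phi = np.linspace(0, 2 * np.pi, 10001)
envelope = np.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * np.cos(phi))

print(f"beat period: {beat_period:.0f} years")
print(f"envelope max/min: {envelope.max():.2f} / {envelope.min():.2f}")
```

    The envelope swings between 1.7 and 0.3 times the unit amplitude, so the quiet half of the beat is nowhere near a complete cancellation.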

  40. Willis, the UK kept to Fahrenheit until the last few years, and on their weather sites they still do, so they have both temps in C and F. Anyway, we do know this.

  41. Willis, I don’t know if this is possible, or whether I can explain it either, but here goes:

    Can you take the de-seasonalised CET record and re-scale it such that each stretch of the temperature record that corresponds to one solar cycle becomes equal in length? Having done this, re-run the cycle search. (Not sure what the units are after doing this.)

    I believe the sunspot cycle record is reasonably well documented over the CET record period.

    The reason for doing this is that I would expect that the 23-ish year peak, if it really is the Hale cycle in the data, would become much stronger in the re-scaled data, but diminished if it isn’t the Hale cycle but something else entirely.

  42. Willis,

    Your bozo test, and specifically the failing of it by the CET 22-year cycle, got me thinking. One weakness of doing a Fourier transform over a certain length of data is that if the signal undergoes a step in phase along the length, this will reduce the peak size, extremely so with 180-degree jumps.

    For most physical systems, step-wise changes simply don’t happen, but suppose you have some kind of resonant system that gets started by some first process and damped after a while by some other process; then the resonant periods are generally not in phase. Doing an FT over an entire length of time containing a number of resonant periods will give you weak peaks.

    One existing solution to still detect the peaks is doing a time-windowed FT, and looking at the spectrum as a function of time, but then you have to look at 3D plots where peaks easily get lost. So what if you do this, but then average all the *magnitudes* at different times for each frequency, *ignoring the phase*, and arriving again at a 2D spectrum-like plot? Jumps or creep in phase will have minimal effect (depending on your window size).

    Maybe a nice addition to your slow FT — it will make it more sensitive to detecting a signal with a certain frequency per se, irrespective of its phase. (I’m not saying that this will change the outcome of your analysis, though).

    Frank
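
    Frank’s suggestion, averaging the magnitudes of a sliding-window Fourier transform while discarding the phase, is straightforward to prototype. A sketch (Python/NumPy; this is an editor’s illustration, not Willis’s SFT code, and the window length and step are arbitrary choices):

```python
import numpy as np

def phase_blind_spectrum(x, window_len, step):
    """Average the |FFT| magnitudes of sliding windows, discarding phase,
    so a cycle that jumps phase mid-record still shows as a single peak."""
    mags = [np.abs(np.fft.rfft(x[i:i + window_len] * np.hanning(window_len)))
            for i in range(0, len(x) - window_len + 1, step)]
    freqs = np.fft.rfftfreq(window_len, d=1.0)   # cycles per sample
    return freqs, np.mean(mags, axis=0)

# Demo: a 24-sample cycle that flips phase 180 degrees halfway through,
# which would cancel exactly in a single full-record Fourier transform
t = np.arange(2400)
x = np.where(t < 1200, 1.0, -1.0) * np.sin(2 * np.pi * t / 24.0)
freqs, mag = phase_blind_spectrum(x, window_len=240, step=60)
peak_period = 1.0 / freqs[np.argmax(mag[1:]) + 1]
print(peak_period)   # 24.0: the phase jump does not hide the cycle
```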

  43. Truthseeker. I think it’s important to note that the Thames frost fairs were not just a peculiarity of the low temperatures found during the little ice age but just as much a result of the structure of the old London Bridge ( removed 1825 ). The bridge had so much structure in the water that it acted as a weir. It slowed the pace of the water greatly and also prevented saline tidal waters from getting past upstream. The bridge also acted as a buffer to collect ice that formed, allowing for more ice to form in the slow, fresh waters upstream of the bridge.
    The same temperatures today would not cause the Thames to freeze. It is a much faster, developed river.

  44. Willis

    This is the amplitude of the cycle divided by the error in the amplitude

    If I understand your SFT then it is akin to a Fourier Series using a linear sequence of periods (rather than linear range of frequencies).

    Then the amplitude corresponds to the coefficients(?). In which case t-stat is…

    t = coefficient/standard error of coefficient

    …and if I understand you then this is what you’ve done.

    But the result of the t-stat depends on how you’re computing your coefficients for each period: as paired operations, or as a multivariate fit. If you do a multivariate regression, you’re not guaranteed to get the same t-statistic as you’d get if you did each fit separately. The other thing to be wary of is co-linearity, which may be an issue when using linear ranges of periods and when assessing overtones of fundamental frequencies.

    You may have already addressed all this.
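
    On the co-linearity point: over a finite record, two nearby trial periods are nearly the same regressor, and a joint fit’s standard errors inflate accordingly. A quick illustration (Python/NumPy; the 24 vs 24.2-year pair is just an example the editor picked):

```python
import numpy as np

n = 350                      # roughly the length of the annual CET record
t = np.arange(n)

# Two regressors at nearby trial periods, as in a linear sweep of periods
x1 = np.sin(2 * np.pi * t / 24.0)
x2 = np.sin(2 * np.pi * t / 24.2)

# Over a finite record, nearby periods are strongly correlated regressors
r = np.corrcoef(x1, x2)[0, 1]
vif = 1.0 / (1.0 - r ** 2)   # variance inflation factor for a joint fit

print(f"correlation between regressors: {r:.2f}")
print(f"joint-fit standard error inflation: sqrt(VIF) = {np.sqrt(vif):.1f}x")
```

    The inflated standard error feeds straight into t = coefficient / standard error, so the joint-fit t-statistic is deflated relative to the one-period-at-a-time fit.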

  45. In my comment above I referred to the CET June.
    Here is the periodogram

    http://www.vukcevic.talktalk.net/CET-June-spec.htm

    As the accuracy of the data improves with time, the number of spectral components falls radically. The 1800-1920 and 1920-2013 sections are selected to represent two distinct solar magnetic periods.
    It can be seen that the solar magnetic cycle period (it varies with the length of the solar cycles) of about 22 years and the ~17.5-year period are present within all sections.
    Anyone keen on the solar magnetic cycles will have a major problem overcoming the ‘Svalgaard test’, while the 17.5-year period is going to be an even more problematic one. It ‘originates’ deep within the earth’s interior, and is basically the time it takes angular momentum disturbances within the core to propagate from the equatorial to the polar latitudes (according to JPL’s Dr. J. Dickey); surprisingly, it reappears in the N. Atlantic tectonics.

  46. The average of recent winters (2008/9, 2009/10 and 2010/11) shows cold conditions over northern Europe and the United States and mild conditions over Canada and the Mediterranean, associated with anomalously low and even record-low values of the NAO. This period also had easterly anomalies in the lower stratosphere. Given our modelling result, these cold winters were probably exacerbated by the recent prolonged and anomalously low solar minimum. On decadal timescales the increase in the NAO from the 1960s to 1990s, itself known to be strongly connected to changes in winter stratospheric circulation, may also be partly explained by the upwards trend in solar activity evident in the open solar-flux record. There could also be confirmation of a leading role for the ‘top-down’ influence of solar variability on surface climate during periods of prolonged low solar activity such as the Maunder minimum if the ultraviolet variations used here also apply to longer timescales.
    The solar effect presented here contributes a substantial fraction of typical year-to-year variations in near-surface circulation, with shifts of up to 50% of the interannual variability (Fig. 1a,b). This represents a substantial shift in the probability distribution for regional winter climate and a potentially useful source of predictability. Solar variability is therefore an important factor in determining the likelihood of similar winters in future. However, mid-latitude climate variability depends on many factors, not least internal variability, and forecast models that simulate all the relevant drivers are needed to estimate the range of possible winter conditions.

    http://yly-mac.gps.caltech.edu/z_temp/4%20soozen/zjunk/solar%20cycle%20master%20/Ineson2011%20*%20.pdf

  47. steveta_uk says:
    May 9, 2014 at 1:39 am
    ……
    I’ve done something similar some time ago; laborious, and no strong conclusive result.
    However, I’ve been tracking the CET daily max, and found there is often a variable ~27-day pseudo-cycle; the most recent well-defined one is found during 4 months in the second part of 2012. Normally daily temperature variability is subject to a multitude of factors, but ~27 days would be related to either the lunar tides or a solar factor (the Bartels rotation; the Sun has a magnetic lump, or as solar people refer to it, a ‘preferred sunspot longitude’).
    see HERE
    A superficial look at amplitudes would suggest the change is related either to the TSI or solar magnetics, rather than the Atlantic tides.
    One problem is a variable delay (between 0 and 7 days); however, the amplitude ‘oscillations’ of between 3 and 4 degrees C peak-to-peak are rather too large to be totally ignored.

  48. A book I’ve found very interesting is ‘Climate Change’ by William James Burroughs. His C.V. includes seven years at the UK National Physical Laboratory researching atmospheric physics. His comments on the CET are I think relevant here.
    On p107 of the first edition of his book (published in 2001) he comments: ‘The CET series confirms the exceptionally low temperatures of the 1690s and, in particular, the cold late springs of this decade. Equally striking is the sudden warming from the 1690s to the 1730s. In less than forty years the conditions went from the depths of the Little Ice Age to something comparable to the warmest decades of the twentieth century. This balmy period came to a sudden halt with the extreme cold of 1740 and a return to colder conditions, especially in the winter half of the year.’
    He also comments:
    ‘A more striking feature is the general evidence of interdecadal variability. So, the poor summers of the 1810s are in contrast to the hot ones of the late 1770s and early 1780s, and around 1800. The same interdecadal variability showed up in more recent data. The 1880s and 1890s were marked by more frequent cold winters, while both the 1880s and 1910s had more than their fair share of cool wet summers.’

  49. “Well, the first thing I do to answer that question is to split the data in two, an early and a late half, and compare the analyses of the two halves.”
    _______________________________________________

    I think you could save yourself some work and come to more complete and clear results if you just ran wavelet analysis on the record. Instead of comparing just one half with another, you can get a clear picture of which period is present at which time and how they appear, disappear, and shift as time passes. Each of your graphs can be obtained by averaging wavelet analysis output over time, but why look at averages if you can get the whole picture?
    Wavelet analysis tools for R are available, too.

    http://cran.r-project.org/web/packages/wavelets/index.html
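
    For anyone without R handy, the heart of a Morlet wavelet analysis is only a few lines. A rough sketch (Python/NumPy, hand-rolled by the editor rather than a packaged CWT, so the edge handling and normalization are only approximate):

```python
import numpy as np

def morlet_power(x, periods, w0=6.0):
    """Rough Morlet wavelet power: rows = trial periods, cols = time.
    Shows WHEN each period is present, not just whether it is."""
    t = np.arange(len(x)) - len(x) // 2
    power = np.empty((len(periods), len(x)))
    for i, p in enumerate(periods):
        s = p * w0 / (2 * np.pi)                 # scale for this period
        psi = np.exp(1j * w0 * t / s - (t / s) ** 2 / 2) / np.sqrt(s)
        power[i] = np.abs(np.convolve(x, psi, mode="same")) ** 2
    return power

# Demo: a 24-step cycle present only in the first half of the record
n = 1200
steps = np.arange(n)
x = np.where(steps < n // 2, np.sin(2 * np.pi * steps / 24.0), 0.0)
power = morlet_power(x, periods=np.arange(10, 41))
row = power[24 - 10]                             # the 24-step row
print(row[: n // 2].mean() / row[n // 2:].mean())   # early power >> late
```

    Averaging such a power map over time recovers an ordinary periodogram, which is exactly the commenter’s point about why the averaged view hides information.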

  50. tony b: “For example warm prevailing westerly winds are frequently replaced by colder easterlies for identifiable periods, which then switch back again”

    vukcevic: “CET is highly influenced by a conflict between the ‘warming’ from the N. Atlantic SST, and ‘cooling’ from the Icelandic low atmospheric pressure, two totally different beasts.”

    Unintentional consensus ?

  51. The change in temperature of less than 1 degree is indeed much less than the annual variation, but the point of interest then is the date at which reliable germination of various seeds will take place. This is something which may indeed be of relevance, as it could see changes of 30 – 45 days (possibly variation of 60 days between coldest year and mildest year) in terms of germination between years, which may have very significant ramifications for agriculture.

    Obviously also, the key issue is not an average temperature, but the dates at which crop killing cold can occur in any one year. That can vary up to 120 days in my own lifetime (1975 saw snowfall in May, this past winter we barely had a frost since January 1st).

    It is also the distribution of heat: a very warm and sunny March (by March standards) may warm up soil to temperatures which promote germination, such that a cooler summer thereafter may not affect many things too adversely.

    2013 saw a very cold spring followed by a very warm, dry July and August. The growing season was highly compressed and this year’s season to date is 6 – 7 weeks advanced in many instances (fruit set most notably).

  52. Excellent as always.

    I’ve done analysis of some long North American temperature series.

    Cycles come and go, switch phase, etc.

    Regardless, the effects are too small for any useful forecast purposes.

  53. Willis

    Interesting. Personally, I’m a bit skeptical about the first 60 years or so of the CET. If you check the Wikipedia entry on thermometers, you’ll see that the sealed-tube thermometer with a scale, immune to atmospheric pressure, wasn’t invented until the 1650s, and finely divided thermometer scales didn’t appear until Daniel Fahrenheit started making and selling mercury thermometers in the 1720s. Exactly what were they using to measure temperatures in the early years of the CET? And how reliable are their numbers?

    One might argue that poorly resolved measurements should still average out, and one might well be right. Still, though, it might be well to keep in mind that the data in the early part of the CET might not be measuring the same thing in the same way as the later parts.

    And no, I don’t have a clue why dubious thermometry might introduce a 23 year cycle. My bet would be that it can’t/hasn’t. That’s probably coming from something else … or from pure chance.

    Oh yeah, And weren’t sunspots pretty much missing (Maunder minimum) from 1600-1750?

  54. Don K says:
    May 9, 2014 at 5:39 am
    Oh yeah, And weren’t sunspots pretty much missing (Maunder minimum) from 1600-1750?
    ……………
    22 years is the solar magnetic cycle.
    1650-1700: few sunspots, but the magnetic cycle was present; see Svalgaard, lots of papers on that one.
    1750 on: strong solar cycles http://sidc.oma.be/silso/yearlyssnplot

  55. It’s like looking for patterns in clouds. If you try hard enough you are bound to find a horse, a boat, or anything else you want to see.

  56. @ Daniel Heyer:
    IMO the point isn’t that the peaks are noise, it is that because the same peaks are not present in all of the dataset they are not predictors for the time beyond the dataset.

    My free-of-charge view (worth every penny) is that this is one of the signatures of a chaotic rather than Newtonian system, periodicity comes and goes in an arbitrary fashion quite unlike planetary orbits and the like.

  57. HenryP says:
    May 9, 2014 at 5:14 am
    ………….
    Hi Henry
    Tony Brown (tony b) is possibly the person who knows more about CET than anyone I have encountered on this or any other blog. Having said that, see my comment at ‘May 9, 2014 at 3:04 am’.
    The CET annual spectrum (Tav or Tmax) is unremarkable; an idea of external forcing can only be glimpsed around the annual solstices: in June-July insolation is strongest and the Icelandic Low is weakest, while in December-January it is the other way around, when the IL atmospheric pressure is in charge. During the rest of the year there is meandering from one direction to the other.
    Spectrum-wise, summer and winter seasons have very little in common (very similar Tav and Tmax spectra within either season, but very different between the two seasons), with the only major common component (see link) around a 70-year period.

  58. This is a graphic for a different project that I find myself staring at.

    http://wp.me/a1uHC3-iH

    It shows macro cycles in Ocean cores for the last 5 million years as we transitioned to a full blown ice age. It certainly clarifies that at least on this scale a warmer planet has far less extreme “weather”.

    I’m wondering if there may be some way to devise a spectral inflection point analysis that would derive the period of the signal required to override the reigning regime and produce the new one.

  59. vukcevic says

    http://sidc.oma.be/silso/yearlyssnplot

    henry says
    IMHO that graph does not tell us much, especially not about the 22-year magnetic solar cycle.
    This is the graph that everyone should try to understand

    Note that for the last two Hale cycles (from 1972) you can draw a parabolic and hyperbolic binomial which would show that the lowest point of the solar field strength will be reached around 2016.
    It appears (to me, clearly) that as the solar polar fields are weakening, more energetic particles are able to escape from the sun to form more ozone, peroxides and nitrogenous oxides at the TOA.
    In turn, more radiation is deflected to space. This is what causes the cooling for the next 3 or 4 decades. Most likely there is some gravitational and/or electromagnetic force that gets switched every 44 years, affecting the sun’s output (of highly energetic particles).

    Something strange will happen in 2016 on the sun (the poles switch over again?) and (my expectation is that) from 2016 we will slowly cycle back to full polar strengths again 40 years from now. Like a mirror. Amazing, how God created things.
    Mark my words.

  60. “I am not familiar with your analysis, so I cannot do it myself, but I will take a bet with you that if you were to look at maximum temperatures, in CET, you will find the elusive cycle that we are all talking about.”

    You see, people, here is the problem in a NUTSHELL.

    Absent any theory of why and how there should be cycles “in the data”, one is left with hunting EVERYWHERE for ANYTHING.

    Don’t find it in global Tave? Look in land only. Don’t find it there? Look in SST. Don’t find it there? Look in CET. Don’t find it there? Look in Tmax of CET. Don’t find it there? Look in tree rings, look in sea level, look in Swedish sea levels.

    In all this looking, folks forget what we know: random shocks can combine to create the appearance of cycles in data.

    Now lets take look for AGW as a different example.

    AGW theory (the physics) tells us that if we increase CO2 then we can expect the surface to warm and the stratosphere to cool. We know exactly where to look, we know what to look for, and we know how to look. And of course when we look this is what we find: a warming surface and a cooling stratosphere. And we also know from theory that if the warming were caused by the sun, we would NOT see a cooling stratosphere.

    Is that the end of the story? Of course not. The warming of the surface hasn’t been as clear as one would expect; the hiatus needs to be explained. But what I would call your attention to is the structure of inquiry. Theory directs inquiry. There is no systematic way to just pick up a pile of data and start “looking” for things. Well, one can. One can pick up the pile of climate data and just start mindlessly mining it for cycles and correlations. Guess what: you will find them, you must find them, and almost all will be meaningless. What you find will be utterly disconnected from the rest of physics and as such will be useless even if it is true or meaningful.

    So return to the fundamental question. Why should you see any cycles in climate data? What cycles should you see? Where EXACTLY do you expect to see them? What physical theory tells you to expect this?

  61. vukcevic says:
    May 9, 2014 at 12:10 am

    The CET annual temperature spectrum doesn’t reveal anything in particular, so I suggest a different approach.
    CET is highly influenced by a conflict between the ‘warming’ from the N. Atlantic SST, and ‘cooling’ from the Icelandic low atmospheric pressure, two totally different beasts.
    If one is looking for the solar input, the month of June, the time of the summer solstice and the highest insolation, is the set of data to look at; besides, its 350-year-long trend is rather interesting. In the earlier centuries solar cycles were a bit longer, so when the data is split, the effect may show. The month of June is, however, specific; it is after all only one of 12, and besides being a clear case of cherry-picking, it does give another perspective.

    Thanks, vuk. I just tried that; I took the periodograms for June-only data: full, early, and late. It shows little difference from the periodograms in the head post—strong ~24-year cycle in the first half, and no 24-year cycle at all in the second half. Go figure.

    w.

  62. Just to leave a suggestion: try to build a waterfall plot. Window your data with some smallish window (75 years or so), then take your SFT, assign height to a line of colors, then move your window over a year. Rinse, repeat. Plot all of these color lines next to each other across your whole data set. It is sort of like your bozo test, but much more fluid. You might find that your 24-year peak doesn’t just go away, but moves from 24 to 40 with a higher damping. For a 1-DOF harmonic, the damping is the width of the modal peak at the half-power point divided by the peak frequency ((w2-w1)/wn = 2z). It will also show you that if a peak goes away quickly, then you know something changed at about that time. Butterflies in Africa, the Dutch with their windmills, or something, take your pick, but at least you know what time it happened (to within half the cycle rate).
    What programming language are you using? (I’m not much of a programmer but hack my way with reasonable results in Matlab)
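
    The half-power rule quoted above is easy to sanity-check: build a 1-DOF frequency response with known damping and recover the damping ratio from the half-power bandwidth. A sketch (Python/NumPy; the 24-year mode and the 5% damping are numbers the editor invented for the demo):

```python
import numpy as np

# Hypothetical 1-DOF resonance: a 24-year mode with 5% damping (invented)
fn, zeta_true = 1.0 / 24.0, 0.05

f = np.linspace(0.001, 0.1, 20000)          # trial frequencies, 1/years
# Magnitude of the 1-DOF frequency response function
H = 1.0 / np.sqrt((1 - (f / fn) ** 2) ** 2 + (2 * zeta_true * f / fn) ** 2)

# Half-power (peak / sqrt(2)) bandwidth: (f2 - f1) / fn = 2 * zeta
above = f[H >= H.max() / np.sqrt(2)]
f1, f2 = above.min(), above.max()
zeta_est = (f2 - f1) / (2 * fn)

print(f"recovered damping ratio: {zeta_est:.3f}")   # close to 0.05
```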

  63. It appears in Figure 1 that up until about 1720 – eyeball estimate – temperatures are measured at 1/2-degree intervals. Could that rounding have any effect on the apparent periodicity in the first half of the record? How about rounding the entire record to that resolution?
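
    That’s checkable directly: symmetric rounding adds broadband quantization noise, but should leave a genuine spectral peak essentially untouched. A sketch (Python/NumPy, with an invented 0.2-degree, 24-year cycle buried in weather noise):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 350 * 12                      # ~350 years of monthly data
t = np.arange(n) / 12.0           # time in years

# Invented series: a 0.2-degree, 24-year cycle buried in weather noise
x = 0.2 * np.sin(2 * np.pi * t / 24.0) + rng.normal(0, 1.0, n)
x_rounded = np.round(x * 2) / 2   # quantize to the nearest 0.5 degree

# Fitted amplitude at the 24-year period, before and after rounding
probe = np.exp(-2j * np.pi * t / 24.0)
amp = 2 * np.abs(x @ probe) / n
amp_r = 2 * np.abs(x_rounded @ probe) / n
print(f"24-year amplitude: raw {amp:.3f}, rounded {amp_r:.3f}")
```

    The two amplitudes come out nearly identical, which matches the head post’s remark that symmetrical rounding should not affect the results.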

  64. The ‘issue’ I see here is just that you are looking for a constant cycle. IFF the cycle at about 22 years were solar induced, we already know that the amplitude of solar output / sunspots / activity has a very long-term rise / fall, such that each ‘22 year cycle’ changes amplitude in a long climb up to the max, then a long drift down to the min.

    This is also confounded by the way that the cycle length changes with amplitude. Longer at small amplitudes, shorter at large amplitudes.

    Now the ‘kicker’ is that while sunspots (activity / output / …) have an about-11-year average, the distribution is actually bimodal, with a peak each side of 11, but not so much AT 11. I would be very surprised if the double-that 22-year cycle were not similarly bimodal.

    So seeing peaks at 23-24 ish and 15-16 ish is not much of a surprise.

    Seeing them stronger in some section of that data than in another is also not a surprise.

    What would ‘clinch it’ though would be to see if the early data had a solar cycle that did average to 23-ish while the later data did not. To see if there really is any correlation between the ACTUAL solar cycles and the ACTUAL temperature data (and not the hypothetical average cycle that does not really exist).

    So, in general, I find your analysis interesting for what it shows. But it does not dismiss a solar ‘cycle’ connection to temperatures since the averages hide more than they display… and the solar ‘cycle’ is not regular.

  65. Stark Dickflüssig says:
    May 8, 2014 at 5:48 pm

    ” I have an image of a man holding up a wet finger and making an educated guess for the first half of it ;)

    Hence the use of the term “Digital Forecast” on the meteorology web-sites.”

    I was convinced by the recent post about the changes in the flowering dates of the cherry trees in Japan over many centuries. These, and the dates of ice formation and ice thickness at break-up on the Thames and the like, are good proxies for temperatures. Has anyone tried the method of counting the chirps of crickets?

    “To convert cricket chirps to degrees Fahrenheit, count number of chirps in 14 seconds then add 40 to get temperature. For example: 30 chirps + 40 = 70° F”

    http://lifehacker.com/5817534/how-to-tell-the-temperature-from-a-crickets-chirp

    At least these are hard to “fiddle”
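
    The chirp rule (Dolbear’s law, in its field-count form) is simple enough to state as code (a toy, obviously, not a calibrated instrument):

```python
def chirps_to_fahrenheit(chirps_in_14_seconds: int) -> int:
    """Dolbear's law, field version: chirps counted in 14 s, plus 40."""
    return chirps_in_14_seconds + 40

print(chirps_to_fahrenheit(30))   # 70 F, matching the example above
```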

  66. Steven Mosher says: “Why should you see any cycles in climate data?”

    Weather data is inherently cyclic at high frequencies. (diurnal and annually) It is a small jump to ask the question: Are there any eigenvectors at longer wavelengths? This can inform your theory and give clues as to where to look. One should always ask: “What is the data doing?” before you try to ask “Why?” If you can’t agree on What then nobody is going to get anywhere with Why, let alone “How do we alter it?” or “Should we?”
    Politicians tend to jump directly to the How with the same answer: “Give us more power”

  67. Based on your analysis, I started thinking about ocean waves. Small ripples build into ocean waves and with the right circumstances can even wind up as rogue waves. As I remember surfers will look for “sets” of waves. I’m surmising that some similar effect may be going on here. Some set of random and periodic processes are creating fluctuations in the temperature that produce these quasi periodic fluctuations. But I have no idea how you would prove that that’s going on. If that’s true, then without knowing what the drivers are I don’t know how to build a testable hypothesis to see if that’s true. Generate a high frequency pseudo series using the same frequencies you found in the actual data and see if the longer waves show up?

  68. @vukcevic
    I am not doubting that your curve adequately describes what is happening now,
    but it does not foresee the change that will happen in 2015 or 2016
    If we follow your curve we would end up with zero solar polar field strength….
    This is the part of puzzle that you miss:
    at some point, it has to cycle back!!
    The whole idea behind this part of creation is, again, just like Willis paper on tropical storms,
    to keep temperature on earth within certain boundaries.

  69. Willis Eschenbach says:
    May 9, 2014 at 9:55 am
    I just tried that, I took the periodograms for June only full data, early, and late. It shows little difference from the periodograms in the head post—strong ~24-year cycle in the first half, and no 24-year cycle at all in the second half. Go figure.

    Hi Willis
    Agree about 24 years, but ~22 years looks OK to me.

    http://www.vukcevic.talktalk.net/CET-June-spec.htm

    Two data sections I selected are related to the sunspot historical records.
    For next project: perhaps you could look at ENSO

  70. HenryP says:
    May 9, 2014 at 12:43 am

    Willis, thanks for that. You know that I am your cycle man. I KNOW there is a 20-24 year cycle. It is called the Hale-Nicholson cycle. But you won’t get a good look at it if you look at means, mainly because there is weather. Also, I am pretty sure that at some stage in the records they went from measuring 4 or 6 times a day (physical observation), taking an average for the day, to automatic recording once a second. Apart from that, there is accuracy over time, which is more critical if you keep looking at means.
    I am not familiar with your analysis, so I cannot do it myself, but I will take a bet with you that if you were to look at maximum temperatures, in CET, you will find the elusive cycle that we are all talking about.

    I’d take that bet, but I suspect it was literary exaggeration … in any case, hang on, let me run the analysis … … …

    OK, you’d have lost the bet. I just looked at that very thing, CET max temperatures. I can’t compare it directly to my analysis of the mean temps, because the CET max temps dataset only starts in 1879.

    In any case, the results are quite similar to the latter half of the mean dataset, in that they lack the dominant 24-year cycle seen in the full CET mean periodogram shown in Fig. 4, or in the early CET mean data shown in Figure 5a.

    Regards,

    w.

    PS—The Hale-Nicholson cycle is the magnetic cycle of the sun, which varies in sync with the sunspots. Since the sunspot cycle varies so widely, if you are looking for traces of that in the climate data, you’d do better to look for that with something like a cross-correlation function.
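
    A minimal sketch of that cross-correlation approach (Python/NumPy; the ‘driver’, the noise level and the 3-step lag are all invented by the editor for the demo, not solar data):

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Normalized cross-correlation of y against x at lags -max_lag..max_lag.
    A driven response shows a peak at its physical lag even when the
    driver's period wanders, which a fixed-period periodogram can miss."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.mean(x[max(0, -k): n - max(0, k)] *
                           y[max(0, k): n - max(0, -k)]) for k in lags])
    return lags, cc

# Synthetic demo: a noisy response that lags its driver by 3 steps
rng = np.random.default_rng(2)
driver = rng.normal(size=500)
response = np.roll(driver, 3) + rng.normal(0, 0.5, 500)
lags, cc = cross_correlation(driver, response, max_lag=10)
print(lags[np.argmax(cc)])   # 3, the injected lag
```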

  71. Steven Mosher says:
    May 9, 2014 at 9:36 am
    Why should you see any cycles in climate data? What cycles should you see? Where EXACTLY do you expect to see them?

    In Steven Mosher’s data:
    Richard A. Muller, Judith Curry: Decadal Variations in the Global Atmospheric Land Temperatures
    Fig. 7: A strong peak is observed in the AMO at 0.110 ± 0.005 cycles/year, corresponding to a period of 9.1 ± 0.4 years, at the 98.3% confidence level. The maximum peak in the PDO occurs at a similar frequency, 0.111 ± 0.006, although with a confidence level of 94%. (Page 10)
    Hi Steven,
    That graph looks a bit of a ‘unicorn’, 98.3% confidence (?!); as you say, GO figure

  72. These appearing and disappearing cycles remind me of the motion of one of those toy chaotic pendulums. As you watch the swinging pendulum arm, it occasionally appears to settle into regular periodic motion, but this only lasts for a few cycles and then it’s off to another, completely different cycle. But we already know that weather and climate are chaotic, right?

    I always enjoy your insightful posts, Willis.

  73. ” But we already know that weather and climate are chaotic, right?”

    Err, no, we don’t know that. We know that some of the underlying physical systems can be characterized by functions that are chaotic, but we don’t know that climate (long-term stats)
    or “weather”, whatever that is, is in fact chaotic.

    “It’s chaotic” is an unexamined belief that most skeptics have.
    Demonstrating it conclusively is hard; proving it is even harder. But the claim serves a purpose, so people make it.

    REPLY: Now you are arguing that we don’t know if weather is/is not chaotic? Are you smoking something? If Ed Lorenz was around he’d kick you in the butt. I may very well do it for him next time I see you. – Anthony

  74. PaulH says: May 9, 2014 at 6:14 am
    It’s like looking for patterns in clouds. If you try hard enough you are bound to find a horse, a boat, or anything else you want to see.

    While I don’t claim to speak for Willis, I think that that is what he has been ‘getting at’ for some time. It is very easy to fool oneself and a thorough (open to criticism) statistical analysis is one of the few tools we have to distinguish ‘cloud gazing’ from reality.

    One of the problems I see currently is that we have an awful lot of data, fast machines, sophisticated modelling tools and we tend to employ them toward ‘cloud gazing’. We want to find a ‘sheep’ and so we do.

    We can now count, in real time, the number of Ants entering and leaving a nest. We have nothing to compare it to though. If the count falls over the next ten years, is this significant? Has it always been that way or is this something new? We can use Satellites to gauge cloud cover but we have nothing to compare it to in the pre-satellite era.

    We can fool ourselves concerning the stock market, Climate, Ant numbers and any number of ‘data rich’ environments. But I certainly wouldn’t invest my ‘pension pot’ with a company that claimed to have identified ‘patterns’ in the stock market that a simple ‘Willis test’ proved were nonsense.

    Cyclomania is often Astrology not Astrophysics.

    Science becomes an ‘all one can eat buffet’. Ants up, Ants down, Ants left/right and it’s all traced back to ‘politics’.

    I fully expect some ‘scientist’ will publish a paper suggesting that the down cycle he identified in the number of Ants is significantly correlated to stock market moves – thereby ‘proving’ that human commercial activity is killing Ants. What’s the betting that he is an ardent socialist?

    All you can eat buffet. Link Ant numbers to Jupiter/Sun/Moon/stock market/US car production… Cyclomania is all things to all men. Only when Willis casts his eye upon a ‘paper’ do we all start to question what we ‘know to be true’.

  75. steveta_uk says:
    May 9, 2014 at 1:39 am

    Willis, I don’t know if this is possible, or whether I can explain it either, but here goes:

    Can you take the de-seasonalised CET record and re-scale it such that each period of the temperature record that corresponds to one solar cycle is equal length in the temperature record? Having done this, re-run the cycle search. (not sure what the units are after doing this).

    I believe the sunspot cycle record is reasonably well documented over the CET record period.

    The reason for doing this is that I would expect that the 23-ish year peak, if it really is the Hale cycle in the data, would become much stronger in the re-scaled data, but diminished if it isn’t the Hale cycle but something else entirely.

    Thanks, steve. Seems possible. Let me think about it.

    w.

  76. @Willis
    thanks for that analysis, I do appreciate it. I agree that I have lost my bet!!!
    I had anticipated/hoped that the weather effect (GH effect due to clouds) would be less in maxima
    However, you must admit that vukcevic’s analysis is undeniably very clear:

    http://www.vukcevic.talktalk.net/CET-June-spec.htm

    even if it is only for one month of the year
    (which happens to be the month when they have the best (unclouded) weather)
    btw
    Do I notice something at 44 years there?
    That would not surprise me…..
    You have to try and understand this graph, or you will miss it altogether

  77. Steven Mosher says:
    May 9, 2014 at 12:49 pm

    Err, no, we don’t know that. We know that some of the underlying physical systems can be characterized by functions that are chaotic, but we don’t know that climate (long-term stats) … is in fact chaotic.

    (Yeah I removed the weather part, want no part of that controversy)
    Actually it’s nice to hear someone say that. Usually I’m glibly told the opposite without any decent proof, that we know climate is not chaotic, with just dubious analogies as justification that may or may not be applicable. If we actually knew the energy budget with confidence that’d be one thing. Fine, if energy is accumulating in the system there’s going to be warming sooner or later. If you buy that climate sensitivity can’t be zero, then there’s going to be at least a little warming over time, OK. Beyond that? I don’t see how we know one way or the other right now. But I’m always looking for the opportunity to learn something I don’t know here, so feel free to tell me where and why I’m wrong.

  78. Willis says
    PS—The Hale-Nicholson cycle is the magnetic cycle of the sun, which varies in sync with the sunspots. Since the sunspot cycle varies so widely, if you are looking for traces of that in the climate data, you’d do better to look for that with something like a cross-correlation function

    henry says
    as far as I am concerned, having looked at ssn, I don’t believe in it. I can see a linear trend in ssn that seems to be going up, as time progresses. The observation of ssn is subjective due to
    a) people’s eyesight
    b) magnification improved over time
    c) “corrections” applied over time, to correct for a) and b)

    better to leave ssn out of everything
    is my policy
    anyway

  79. I think there are problems with the “Slow Fourier Transform”.
    One is a question of nomenclature. A Fourier Transform is an analytical concept and is a function that is continuous in frequency. One has to know the function over the whole range from minus to plus infinity. As is often stated in DSP 101, ‘one cannot take the Fourier Transform of a real signal – it does not exist’.

    A Discrete Fourier Transform, more properly a complex exponential discrete Fourier series, is limited over the sampling period and its harmonics are multiples of 1/period and extend as an infinite series. In a DFT the other frequencies are not zero – they do NOT EXIST.
    My understanding of a slow Fourier Transform is that it evaluates the Fourier Integral for each term naively, rather than using the recursive nature of the FFT. They should yield identical results.

    What is being done here is not a DFT in the normal accepted sense.

    This may seem an abstract point but it is fundamental to the concept presented here and important in interpreting the results. If you do not get the theory of signal processing correct, you will not understand the results.

    The results of the “Slow Fourier Transform” are, I presume, continuous in frequency because the frequency specified may be an irrational number, although they are computed here for discrete frequencies. In other words, data exists at frequencies that are not harmonics of the fundamental frequency, which in a true DFT do not exist.

    Therefore what is being calculated and how does it relate to the Fourier transform?
    In order to take a Fourier Transform of a function, it has to be a bounded integral; in other words, the integral must converge to a value with infinite limits of integration. A sine wave, for example, does not have an FT because the integral is not bounded. However, the FT can be approximated by multiplying the sine wave by a window that limits the sine wave to a specific time interval. The resulting spectrum is that of a rectangular pulse that is centred about the fundamental frequency of the sine wave, i.e. a sin(x)/x function.

    What is being done here approximates a numerical evaluation of the Fourier Transform, as opposed to the series, of the signal multiplied by a time window. In the case of a sine wave, this produces lobes around the harmonic of the signal, as shown clearly in the example above. The magnitude of these lobes depends critically on the number of samples in the time frame, but the essential point is that by this type of processing, one is introducing artefacts into the “spectrum”. In a complicated signal, the various components will overlap unpredictably and force correlation between components where none in fact exists.

    This influences the statistical methods used to determine if a particular harmonic is greater than would be expected from a random signal. In the case of using a DFT, this is well understood and the power of each component is a Chi-squared distribution with 2 degrees of freedom (for a Gaussian signal). The distribution for the “Slow Fourier Transform” strikes me as an extremely difficult problem, and this does not appear to have been properly analysed. It is certainly not a matter of simple filtering as you suggest.

    Most signal processing methods stem from formal, analytical theory. This does not seem to have any theory. There are many methods of spectral estimation that are not based on the DFT including, for example, maximum entropy methods and principal component analysis of the autocorrelation matrix. These are mathematically tractable, are widely used and are the results of a great deal of research. For example, see Monson Hayes: “Statistical Digital Signal Processing and Modeling”, Wiley 1996, chapter 8, “Spectrum Estimation”.

  80. Steven Mosher says:
    “So return to the fundamental question. Why should you see any cycles in climate data? What cycles should you see, where EXACTLY do you expect to see them? What physical theory tells you to expect this?”

    Solar grand minimums occur on average every 10 solar cycles, of variable length and with a drift from the mean periodicity of +/- 1-2 solar cycles. E.g. the Dalton Minimum was cycles 5 & 6, the next minimum was cycles 12-14, and the current one runs from SC24. The 20th century, apart from SC14, largely missed out on a grand minimum, while the 19th century had two.
    I will be doing an article on why and where they occur, and where the maximum of each cycle occurs.

  81. Hey, I’ve thought of a name for people who obsessively study climate data looking for regular cycles. I was thinking we could call them ‘Cyclists’?

  82. Just wanted to add that I did a spectral analysis on CET a couple of days ago and found the same 24yr cycle. Since it wasn’t what I was looking for (i.e. a ~60yr cycle) I called it quits. Then I saw this post and decided to replicate the “bozo test”. I just want to report that Willis’s results were replicated using the R “spectrum” function.

    BTW… Peeking at the code, it looks like Willis is fitting a sine wave using linear regression. Kewl!

    All the best, AJ

  83. Duster May 9, 2014 at 10:08 am

    You provide an example of what Willis warns against namely seeing patterns that are not there. What you have noticed is a consequence of taking values tabulated in degF and converting them to degC.
    How they were measured is another matter entirely, as back then there were no standardised scales nor accurate thermometers, nor did anyone bother about consistent time-of-day for measurements, siting problems etc., so it is ironic that a fuss is made about these factors now, but the accuracy of the pre-~1750 CET measurements is taken as read.

    Anthony, when you find the proof that rain, hail, wind, temperature, hurricanes, lightning (you know, the weather) is chaotic, put that proof up here. Lorenz had ideas. Appealing to authority is not proof. Practice skepticism. Question everything, as Einstein wrote.

    Just show the proof. Start with a definition of weather.
    Ha, that will be fun.

    REPLY: OK smartass. Weather is the state of the atmosphere at a given place and time that is defined by temperature, dryness, solar insolation, wind, pressure, gravity, and Coriolis force from Earth’s rotation. All of these determine what sort of weather condition will be present at that place and time. A chaotic system is a system which exhibits dynamics over time that are highly sensitive to the initial conditions. After a period of time, the chaotic nature of the system makes any linear extrapolation from the initial starting condition vary in unpredictable ways, making forecasting with linear methods useful only in the time nearest the starting conditions. This isn’t an idea, it’s reality. That’s why weather forecasting breaks down after a few days.

    I don’t have to prove weather is chaotic, it’s already accepted that it is. Lorenz last interview:

    Bulletin of the American Meteorological Society, 2013; e-View.
    doi:10.1175/BAMS-D-13-00096.1; full PDF: http://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-13-00096.1

    Last Interview with Professor Edward Lorenz? – Revisiting the “limits of predictability” – Impact on the Operational and Modeling Communities?

    You however have to prove that weather isn’t chaotic.

    Starting with weather conditions today, tell me:

    1. When we’ll have a thunderstorm over Sacramento, CA that will produce at least 1″ of rain.
    2. Or if that’s too small for you, tell me the next date a Cat3 or greater hurricane will hit the USA East Coast and where it will hit.

    Show your work, including data and code, and if you turn out to be correct, I’ll believe your assertion that weather is not chaotic.

    -Anthony

    May I add my two pence: can we say weather is unpredictable and changeable? It’s the same meaning really. There are patterns, but occasionally Mother Nature does something not observed often, like snow in summer. But central England is only one part of England; it lies along the Pennines, a beautiful, rather undulating area. But go up to the far north of Scotland and you have a land of the midnight sun. There are ice core records; look at the Russians. There was a news article yesterday in the Daily Telegraph about retrieving an ice core from Antarctica, to provide the IPCC with some more bullets. Go for it Willis.

  86. Ryan says:
    May 9, 2014 at 10:06 am

    Just to leave a suggestion. Try to build a waterfall plot. Window your data with some smallish window (75 years or so), then take your SFT, assign height to a line of colors, then move your window over a year. Rinse, repeat. Plot all of these color lines next to each other across your whole data set. It is sort of like your bozo test but much more fluid. You might find that your 24-year peak doesn’t just go away but moves from 24 to 40 with a higher damping. For a 1DOF harmonic, damping is the width of a modal peak at the half power point divided by the peak frequency ((w2-w1)/wn = 2z). It will also show you if a peak goes away quickly; then you know something changed at about that time. Butterflies in Africa, the Dutch with their windmills, or something, take your pick, but at least you know what time it happened (within half the cycle rate).
    What programming language are you using? (I’m not much of a programmer but hack my way with reasonable results in Matlab)

    Oh, I’m liking that plan, many thanks … someone upthread asked why I don’t do wavelets. Two reasons. First, I can’t figure the errors and the p-values and such of the wavelets. Second, I like methods that I understand from the bottom up. Oh, I understand wavelets in theory. But not the nuts and bolts.

    Give me some time, your plan sounds good. The problem, of course, is the requirement that we need 3x the data length to find a cycle. For a 24-year cycle, a 75-year window works. If you want to find a 40-year cycle you need a 120-year window.

    Let me see what I can do … weekend coming up.

    w.
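
Ryan’s sliding-window idea can be sketched in a few lines. This is an illustrative Python version, not Willis’s actual code (which is in R), and the function names `fitted_amplitude` and `waterfall` are hypothetical: each window is scored with the same sine-plus-cosine regression trick described elsewhere in the thread, then the window slides along the record to show how a peak strengthens, weakens, or drifts over time.

```python
import numpy as np

def fitted_amplitude(y, period, dt=1.0):
    """Peak-to-peak amplitude of the best-fit sine at a given period,
    found by linear regression on a sine/cosine pair plus an intercept."""
    t = np.arange(len(y)) * dt
    X = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period),
                         np.ones_like(t)])
    a, b, _ = np.linalg.lstsq(X, y, rcond=None)[0]
    return 2 * np.hypot(a, b)                 # peak-to-peak amplitude

def waterfall(y, period, window, step=1):
    """Amplitude at one trial period for each sliding window: one
    column of the waterfall plot suggested above."""
    return [fitted_amplitude(y[i:i + window], period)
            for i in range(0, len(y) - window + 1, step)]

# Demo: a pure 24-year cycle (amplitude 1.5) tracked through 75-year windows
t = np.arange(350)
y = 1.5 * np.sin(2 * np.pi * t / 24.0)
amps = waterfall(y, period=24.0, window=75, step=25)
```

For a stationary pure cycle, every window returns the same peak-to-peak amplitude (3.0 here); in real data the column of values would rise and fall as the cycle appears and disappears.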

  87. AJ says:
    May 9, 2014 at 4:23 pm

    Just wanted to add that I did a spectral analysis on CET a couple of days ago and found the same 24yr cycle. Since it wasn’t what I was looking for (i.e. a ~60yr cycle) I called it quits. Then I saw this post and decided to replicate the “bozo test”. I just want to report that Willis’s results were replicated using the R “spectrum” function.

    AJ, always good to hear from you. I’m glad to know you’ve replicated my results …

    BTW… Peeking at the code, it looks like Willis is fitting a sine wave using linear regression. Kewl !

    I was quite proud when I dreamed that one up. Before that I was optimizing a sine wave, a very slow process. Instead, I just created a sine wave and a cosine wave, and used linear regression to give the optimum results, using the two waves as the independent variables and the data as the dependent variable. Then I could take the peak-to-peak amplitude of the resulting fitted sine wave.

    All the best, AJ

    Indeed, life is good,

    w.
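
For readers who want to try the regression trick at home, here is a minimal sketch in Python (the post’s own code is in R; this is an illustrative reconstruction, not Willis’s actual code). Regress the data on a sine/cosine pair at the trial period; the two coefficients give the amplitude and phase of the best-fit sine in one shot, with no iterative optimization.

```python
import numpy as np

# Synthetic data: a known 23-year cycle of amplitude 0.8 buried in noise
rng = np.random.default_rng(42)
t = np.arange(0, 350, 1 / 12)                 # "monthly" samples, in years
y = 0.8 * np.sin(2 * np.pi * t / 23.0 + 1.0) + rng.normal(0, 0.5, t.size)

period = 23.0
X = np.column_stack([np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares coefficients

amplitude = np.hypot(a, b)                    # amplitude of the fitted sine
peak_to_peak = 2 * amplitude                  # what the periodogram plots
```

Repeating this over a grid of trial periods gives a periodogram of the kind shown in the post; the sine/cosine pair absorbs the unknown phase automatically.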

  88. Willis,
    Nice article, nice touch to split the data & compare.
    I’m wondering if the ‘raw’ CET that you use is the best metric to go hunting for cycles.
    There are some known cycles like diurnal that have been averaged out of the data beforehand, then you take more out by autocorrelation correction, but there are still more.
    Colleagues are looking in detail at representative long-term Australian sites and finding a reverse correlation between rainfall and Tmax (but not Tmin; you have Tmean from CET). Shorthand, “Water Cools”.
    It is possible to ‘adjust’ the Tmax, and hence the Tmean, by a stats correction for rainfall. I suggest that if you did this, it would make cycle detection more stark. (How I dislike suggesting an adjustment, but you have already stripped 1-year cycles, so what the heck?)
    The rainfall effect on Tmax is quite strong, varies in size of effect from site to site and is usually statistically significant where studied so far (not many sites yet, most with 100 years or more of data). So it is not easy for CET, where sites are averaged together and where I’m not too sure of the extent of rainfall coverage.
    Then one has to think if there are more, similar variables perturbing Tmean. Cloud coverage is an obvious one and I’ve noted your past work on what a difference a cloud 10 minutes late in forming can make. It depends on whether you are looking for all possible classes of cycles in an overarching data set, or whether there is more to be found by stripping out as many known cycles/perturbations as possible, then analysing the corrected set. It’s your choice. (I’d do it myself if I could but I’m fighting debilitating illness presently). Geoff.


  89. BTW… Peeking at the code, it looks like Willis is fitting a sine wave using linear regression. Kewl !

    I was quite proud when I dreamed that one up. Before that I was optimizing a sine wave, a very slow process. Instead, I just created a sine wave and a cosine wave, and used linear regression to give the optimum results, using the two waves as the independent variables and the data as the dependent variable. Then I could take the peak-to-peak amplitude of the resulting fitted sine wave.

    Not too original, I am afraid. I use the multiple linear regression technique on sine/cosine pairs in the CSALT model of global warming, described here:

    http://contextearth.com/context_salt_model/

    This is used to predict CO2 AGW to a gnat’s eyelash:

    WUWT’ers would be advised to pay close attention to this kind of thermodynamic modeling, as it lays waste to skeptical arguments.

  90. Bushbunny says

    ‘There are ice core records, look at the Russians. There is a news article yesterday in the Daily Telegraph, about retrieving an ice core from the Antarctica, to provide the IPCC with some more bullets. Go for it Willis.’

    CET is collected from 3 stations geographically some distance apart. Why would collecting an ice core representing a few square inches of the over-sensitive polar regions be a better metric? Indeed, why is so much faith put in ice cores as a global proxy?

    tonyb

    Well you are right tony, but actually deep sea cores have shown up temperature changes over the millennia. What those jokers acting for the IPCC are hoping to find is warming trends. I doubt that proves anything, unless they are measuring the amount of pollen deposited at various times and not some intrepid Antarctic explorer emptying his tea pot outside his tent. LOL. But I was asking Willis to do one for us. I have it in one of my archaeology books but that dates to 1986. Certainly they found a pattern between interglacial and full glacial and mini ice ages.

    And guess what, generally ancient people moved around; they didn’t stay anywhere long. They were hunter-gatherers and followed animals. Fishing came much later. The seas were too low, I suspect, and too far away.

    Sorry for my poor English.
    I have not read all the comments.
    I think you need to take into account the period of the moon.
    England is an island with a high level of tide.
    We can consider the England weather system as a non-linear system subject to sun variations and moon variations.
    Consequently we can find the sun frequency plus or minus the frequency of the moon:
    Sun period = 22, so frequency = 0.045
    Moon period = 9, so frequency = 0.111
    First case:
    0.111 + 0.045 = 0.156, so period = 6.41
    0.111 - 0.045 = 0.066, so period = 15.15

    You find the 15.146 period.

    Congratulation for your site

    Yves LETOILE (France)
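
Yves’s sum-and-difference arithmetic checks out if you carry a few more decimals. A trivial Python check (note that the 9-year “moon period” is his assumption, not an established value):

```python
# Sum and difference frequencies of two assumed periods (22 yr and 9 yr)
f_sun = 1 / 22.0           # ~Hale magnetic cycle, in cycles/year
f_moon = 1 / 9.0           # assumed ~9-year lunar term, in cycles/year

period_sum = 1 / (f_moon + f_sun)     # about 6.39 years
period_diff = 1 / (f_moon - f_sun)    # about 15.23 years
```

The small differences from his 6.41 and 15.15 come from rounding the frequencies to three decimals before inverting them.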

  94. WebHubTelescope says:
    May 10, 2014 at 12:05 am

    BTW… Peeking at the code, it looks like Willis is fitting a sine wave using linear regression. Kewl !

    I was quite proud when I dreamed that one up. Before that I was optimizing a sine wave, a very slow process. Instead, I just created a sine wave and a cosine wave, and used linear regression to give the optimum results, using the two waves as the independent variables and the data as the dependent variable. Then I could take the peak-to-peak amplitude of the resulting fitted sine wave.

    Not too original, I am afraid.

    Oh, piss off, you nasty little man. Your jealousy is overwhelming your good sense. I came up with the idea myself, and I was proud of it. So sue me. Was I the first man to come up with the idea? Of course not … but I did come up with it independently myself. You are great at trying to tear down something someone else has built, but you never seem to build anything yourself … funny how that works.

    w.

  95. Hi Willis, you wrote: “It turns out that with those two frequencies, one 70% the amplitude of the other, the amplitude of the smaller half of the beat frequency is about 50% of the amplitude of the larger half of the beat frequency. This is a long ways from “almost cancel each other exactly” … and also a ways from what we see in the figures of the head post.”

    Not so, (1.0-0.7) / (1.0+0.7) = 0.176 which is a lot less than 50%.

  96. Hi Willis, you wrote: “Glad to, you’ll see what I mean. In my view you just need a more accurate probe … let me recommend the slow Fourier transform. Here is my periodogram of 350+ years of pseudo-data composed of two sine waves with cycles of 23.9 and 22.2 years.”

    The method that I use for cycles analysis is clearly different to yours. If I use this data it will show a peak with another peak on its shoulder. Perhaps you can separate the two cycles in the real data. With the method that I use, the procedure that I follow is quite sound.

  97. You state that you perform a linear correlation with the signal and a (co) sine wave.

    This strikes me as incorrect and it stems from my earlier comment on the difference between the true Fourier Transform and the Discrete Fourier Transform.

    The definition of the Discrete Fourier Transform for a record, y(n) of N points is:

    Yk=sum(n=0 to N-1)[y(n).exp(-2*pi*j*n*k/N)],
    where k is in the set of integers 0 to N/2.
    This is simply a correlation between the signal and a (co)sine wave of frequency 2*pi*k/N. If we write w=2*pi/N, we get:
    Yk = y(1)*cos(wk*1) + y(2)*cos(wk*2) + …, which is analogous to the variance.

    The reason that the DFT works is that it evaluates the sum for all integer values of k:
    Sum(n=0 to N-1)[cos(wkn).cos(wmn)].
    If k=m, this sum is N/2, because one is summing cos(wkn)^2, while if k does not equal m, it is zero.

    Thus the DFT selects only one frequency, PROVIDED that k is an integer.

    If you do not confine your frequencies to multiples of the fundamental frequency, by performing a correlation, you are evaluating:
    Integral(limits: 0 to 2pi)[cos(f*t)^2 dt] = pi + sin(4*pi*f)/(4f).

    The second half of this result, sin(4*pi*f)/(4f), is only zero when f is a multiple of 1/(signal period); otherwise it has a value, and all the frequencies in the signal are summed together according to this result to give an “amplitude”. In other words, all the non-harmonic frequencies will contribute.

    Therefore I think that your derivation does not give correct results.
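
The orthogonality argument above is easy to check numerically. A quick sketch in Python (illustrative only; the analysis in the post itself was done in R): correlating integer-frequency cosines over one record gives N/2 when the frequencies match and zero when they don’t, while a non-integer “frequency” breaks the orthogonality, which is the leakage being debated here.

```python
import math

def corr(N, k, m):
    """Sum over one record of cos(w*k*n) * cos(w*m*n), with w = 2*pi/N."""
    w = 2 * math.pi / N
    return sum(math.cos(w * k * n) * math.cos(w * m * n) for n in range(N))

N = 48
same = corr(N, 5, 5)        # k == m, integers: expect N/2 = 24
different = corr(N, 5, 7)   # k != m, both integers: expect 0
leaky = corr(N, 5.3, 5.3)   # non-integer "frequency": no longer exactly N/2
```

The first two results confirm the orthogonality relation; the third shows what happens off the harmonic grid, which is the regime the “Slow Fourier Transform” operates in.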

  98. I don’t like the approach of throwing up one’s hands and saying, “It’s chaotic.” There was a comment on a prior thread where a gentleman sought to make a distinction between chaotic and random. Not even sure if his distinction is generally accepted but it was basically that chaos is what we don’t understand and randomness is what has no cause.

    Personally, I don’t even believe in randomness by this definition, but it is too close to what many seem to mean when they invoke chaos.

  99. RC Saumarez says:
    May 10, 2014 at 7:37 am

    You state that you perform a linear correlation with the signal and a (co) sine wave.

    This strikes me as incorrect and it stems from my earlier comment on the difference between the true Fourier Transform and the Discrete Fourier Transform.

    The definition of the Discrete Fourier Transform for a record, y(n) of N points is:

    Yk=sum(n=0 to N-1)[y(n).exp(-2*pi*j*n*k/N)],
    where k is in the set of integers 0 to N/2.
    This is simply a correlation between the signal and a (co)sine wave of frequency 2*pi*k/N. If we write w=2*pi/N, we get:
    Yk = y(1)*cos(wk*1) + y(2)*cos(wk*2) + …, which is analogous to the variance.

    The reason that the DFT works is that it evaluates the sum for all integer values of k:
    Sum(n=0 to N-1)[cos(wkn).cos(wmn)].
    If k=m, this sum is N/2, because one is summing cos(wkn)^2, while if k does not equal m, it is zero.

    Thus the DFT selects only one frequency, PROVIDED that k is an integer.

    If you do not confine your frequencies to multiples of the fundamental frequency, by performing a correlation, you are evaluating:
    Integral(limits: 0 to 2pi)[cos(f*t)^2 dt] = pi + sin(4*pi*f)/(4f).

    The second half of this result, sin(4*pi*f)/(4f), is only zero when f is a multiple of 1/(signal period); otherwise it has a value, and all the frequencies in the signal are summed together according to this result to give an “amplitude”. In other words, all the non-harmonic frequencies will contribute.

    Therefore I think that your derivation does not give correct results.

    Well, RC, you always seem to think that about my derivations. However, as we saw in the last post, it gives results that are indistinguishable from an FFT with long zero padding … so as usual, you’re swimming upstream against the facts on this one.

    Your complaints in general remind me of the old joke about the Soviet commissar who says to a man trying some new method, “Well, I see it works fine in practice, Comrade … but I assure you, it will never work in theory.”

    w.

  100. Willis Eschenbach says:
    May 8, 2014 at 11:44 pm

    Pseudo in this case refers to cycles that occur at approximately the same interval, not precisely, & which are subject to change, yet controlled by the same variable parameters, at least in part.

    As I mentioned, glaciations are themselves periodic. Icehouse phases in earth history have occurred for much longer than just since the Cambrian. They are pronounced, lengthy & severe in the Pre-Cambrian. The glacial conditions of the past 2.6 million years, ie the Pleistocene, are just the recent low point in the Icehouse that began at the Eocene/Oligocene boundary. There was a tepid Icehouse in the Mesozoic & a super-duper deep cold one in the latter Carboniferous to early Permian. Milankovitch cycles operated during those Icehouses, too.

    Cosmoclimatologists like Svensmark, Shaviv, Veizer, et al., have proposed an explanation for the apparent cyclicality of icehouses. You may not find it convincing, but from whatever cause, the Pleistocene & prior Cenozoic glaciation didn’t just pop up out of nowhere.

    CACA focuses on the very shortest temporal units of climate, ie 30 years to 300 or at most 3000, without looking at the big picture of change on the scales of 30,000, 300,000, 3 million, 30 million, 300 million & 3 billion years. That’s how alarmists gin up “unprecedented” events, by ignoring the billions of years of prior climate proxy data.

  101. gymnosperm says:
    May 10, 2014 at 8:31 am

    I don’t like the approach of throwing up one’s hands and saying, “It’s chaotic.”

    If you have a method for predicting the future evolution of a chaotic system, simple or complex, this would be the time to bring it out.

    And if you don’t have such a method … aren’t you throwing up your hands? Isn’t that what we are forced to do when faced with a chaotic system? Because I don’t know of even one single chaotic system of even medium complexity whose future state can be reliably forecast.

    There was a comment on a prior thread where a gentleman sought to make a distinction between chaotic and random. Not even sure if his distinction is generally accepted but it was basically that chaos is what we don’t understand and randomness is what has no cause.

    Personally, I don’t even believe in randomness by this definition, but it is too close to what many seem to mean when they invoke chaos.

    No, his distinction is purely his own, I’ve never even heard that.

    Some people think a chaotic system means a very complex system. Nothing could be further from the truth. Complex systems may or may not be chaotic, and chaotic systems may or may not be complex.

    The thing that distinguishes chaotic systems is that infinitesimal differences in the starting conditions lead to widely separated trajectories. This is measured with something called the “Lyapunov Exponent”. If the Maximal Lyapunov Exponent (MLE) is positive, this indicates that the system is indeed chaotic. So your anonymous informant is incorrect. Chaos is a specifically defined type of system, which is mathematically distinguishable from randomness via inter alia the MLE, and we understand both to some degree.

    Finally, and most importantly, not only is weather chaotic, so is climate. If it were not, there would be a step change in the Lyapunov Exponent as we looked at longer and longer windows on the weather. But as no less an authority than Mandelbrot has shown, no such jump occurs—so climate is chaotic as well. Steve McIntyre discusses this here.

    And Mosh, you challenged Anthony to show that weather is chaotic. Mandelbrot did just that, examining (per McIntyre) “12 varve series, 27 tree ring series from western U.S. (no bristlecones), 9 precipitation series, 1 earthquake frequency series, 11 river series and 3 Paleozoic sediment series”, in his study “Mandelbrot and Wallis, 1969. Global dependence in geophysical records, Water Resources Research 5, 321-340.” I tracked it down here. It’s a fascinating read, Mosh, and it does exactly what you asked. It shows mathematically that not only is weather chaotic, but climate is as well.

    Regards,

    w.
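
For readers who want to see a Lyapunov exponent actually computed, here is a minimal Python sketch (not from the post; the logistic map is used purely as a textbook stand-in for a chaotic system). Its maximal Lyapunov exponent at r = 4 is known to be ln 2, and a positive value is exactly the signature of chaos described above.

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n=100_000, burn=1_000):
    """Estimate the maximal Lyapunov exponent of the logistic map
    x -> r*x*(1-x) as the orbit average of log|f'(x)| = log|r*(1-2x)|."""
    x = x0
    for _ in range(burn):                     # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        # floor the derivative away from zero so log() never blows up
        total += math.log(max(abs(r * (1 - 2 * x)), 1e-300))
        x = r * x * (1 - x)
    return total / n

mle = lyapunov_logistic()   # exact value at r = 4 is ln 2, about 0.693
```

A positive estimate means nearby starting points diverge exponentially; running the same routine with r = 3.2 (a stable 2-cycle) returns a negative value instead.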

  102. Ray Tomes says:
    May 10, 2014 at 3:44 am

    Hi Willis, you wrote:

    “It turns out that with those two frequencies, one 70% the amplitude of the other, the amplitude of the smaller half of the beat frequency is about 50% of the amplitude of the larger half of the beat frequency. This is a long ways from “almost cancel each other exactly” … and also a ways from what we see in the figures of the head post.”

    Not so: (1.0 - 0.7) / (1.0 + 0.7) = 0.176, which is a lot less than 50%.

    Thanks for quoting what I said, Ray. This is a great example of why quoting is so important, because it turns out we are talking about different things.

    You are talking about the difference in peak (maximum) amplitudes.

    But what I said was that if you take an entire cycle of the beat frequency of the two (about 300 years, as you point out) and divide it into a smaller half and a larger half, the average amplitude of the smaller half is about 50% of the average amplitude of the larger half. How do I know this? I measured it on the actual cycle.

    So both of our statements are correct.
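    Both numbers are easy to check with a few lines of code. This sketch (my own, using the periods and 70% relative amplitude discussed above) computes Ray’s envelope ratio and the average-amplitude ratio of the two halves of one beat cycle:

```python
# Sketch (mine) checking both figures: two sine waves with periods 23.9 and
# 22.2 years, the second at 70% of the amplitude of the first.
import math

P1, P2, A2 = 23.9, 22.2, 0.7
beat = 1 / abs(1 / P2 - 1 / P1)            # beat period, roughly 312 years

# Ray's number: envelope minimum over envelope maximum
env_ratio = (1 - A2) / (1 + A2)            # = 0.3 / 1.7, about 0.176

# Willis's number: mean |signal| over the quiet half of one beat cycle
# versus the loud half (the envelope maximum sits at t = 0)
dt = 1 / 12.0                              # monthly sampling

def mean_abs(t0, t1):
    n = int((t1 - t0) / dt)
    return sum(abs(math.sin(2 * math.pi * t / P1)
                   + A2 * math.sin(2 * math.pi * t / P2))
               for t in (t0 + k * dt for k in range(n))) / n

loud = mean_abs(-beat / 4, beat / 4)
quiet = mean_abs(beat / 4, 3 * beat / 4)
print(round(env_ratio, 3), round(quiet / loud, 2))
```

    The envelope ratio comes out at 0.176 as Ray says, while the half-by-half average ratio comes out in the neighbourhood of 50%, so the two statements really are about different quantities.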

    w.

  103. @Willis Eschenbach.

    You either do mathematics or you don’t.

    Computing a result is generally not proof – mathematical results are obtained by formal analysis.

    If you think I am wrong, kindly refute what I say through proper mathematics rather than hurling insults.

    I suggest you go and read a proper book on signal processing and spectral analysis that is written by a professional who has mastered the problems you are trying to address, e.g. Oppenheim & Schafer, Papoulis, or Hayes.

    When you have come to grips with their contents, you might see your errors and gain a little intellectual humility.

  104. While we’re on the subject of the CET, I see that it’s well on course for a new record this year. It’s currently more than 2 degrees C above the mean for the year to date. The warmest year was 2006, with an anomaly of 1.35 degrees, followed by 2011 with 1.23 degrees. We could end up with the warmest 3 years out of 350-odd all within the last 9 years. Mind you, that’s with only just over a third of the year gone. At this stage in proceedings, two other recent warm years (1990 and 2007) were warmer than 2014, which is in 3rd place.

    My impression is that it hasn’t been especially warm. Rather, there’s been a complete absence of any cold weather.

  105. Richard Baraclough
    Mind you, that’s with only just over a third of the year gone.
    Henry says
    That’s why I would not take any bets yet. Almost all data sets show it is cooling, globally.
    CET is also showing it is going downhill (from 2003). I hear they had a nice winter this year in the US. Europe might be next.

    Willis
    If you have a method for predicting the future evolution of a chaotic system, simple or complex, this would be the time to bring it out.

    Henry says
    well, I don’t have CET means on my computer here, but I think the difference between the lowest ever average yearly temp. and the highest ever average yearly temperature is only between 1 and 2 degrees C. Please correct me if I am wrong?
    There is your chaos….in absolute terms it is less than the difference between 2 rooms in our house here. I wonder if it is even worth talking about such a small change…

    However, to understand why this difference is so small you have to try and understand a few things. First, it is “weather” itself that acts to neutralize big differences and keep the temperature spread on earth so small, as correctly pointed out (by Willis) here

    http://wattsupwiththat.com/2009/06/14/the-thermostat-hypothesis/

    I would call this earth’s internal protection system to prevent overheating.

    There is a second protection system, which prevents too much energy from entering the earth from outside. It is quite ingeniously fabricated, by the interaction of the sun’s most energetic particles and molecules present in the earth’s atmosphere. By the time you can predict what the next 40 years of this graph

    will look like, you have probably figured it all out,

    but, do let me know if you did.

  106. Hello Henry,

    How are things on the Highveld?

    Yes – you’re right, with 8 months to go it could all change. And although it’s mid-May, supposedly almost a summer month, it isn’t particularly warm, but it’s still more than 1 degree above average for the month to date. The absence of colder-than-average weather continues. All that winter rainfall came on mild south-westerly winds. It led to record snowfall across the Scottish Highlands above about 900 metres, but barely a flake anywhere in lowland England.

    I have an Excel spreadsheet with all the monthly values of the CET, together with a few analyses of decadal averages, etc. If you’d like a copy, send me an email address. You can have hours of amusement. The coldest year was 1740, at 6.84 degrees, and the warmest was 2006 at 10.82, so a 4-degree extreme range over 350 years. There is even a dataset of daily records dating from the late 19th century, but I haven’t downloaded that.

  107. @Richard
    Hi, after your reply, I am like: do I know you? Do you know me? We have a good climate here; I think the weather in England is horrible…..
    luckily I can still watch the Eurovision song festival here whilst I am blogging a bit, just trying to educate people…..

    I only downloaded CET max, and it shows a difference of 3 degrees C absolute (11 and 14). The means’ difference must be smaller than this (value of 3). It seems to me that 1740 was a complete outlier? That is a bit suspicious, to me.

    I also must have CET means somewhere, it might be on another computer. Anyway, I remember that CET means seems to run a bit off the global wave, probably due to the GH effect (i.e. more clouds during a cooling period).

    So, don’t worry (about global warming) when it gets a bit warmer in England, whilst the rest of the world is cooling down….

  108. Willis,

    Finally, and most importantly, not only is weather chaotic, so is climate. If it were not, there would be a step change in the Lyapunov Exponent as we looked at longer and longer windows on the weather. But as no less an authority than Mandelbrot has shown, no such jump occurs—so climate is chaotic as well. Steve McIntyre discusses this here.

    By heck, I’m going to learn something about climate and chaos this weekend! :) Thank you for the reference / lead (Lyapunov exponent) and the link.

  109. HenryP

    I graphed cet in a number of ways here

    http://judithcurry.com/2013/06/26/noticeable-climate-change/

    Many scientists believe it is a good proxy for global, or at least Northern Hemisphere, temperatures

    The 1695 to 1740 period was the most remarkable in the record, including as it does a decade, the 1730s, that came within a whisker of the hottest decade, the 1990s, and rose from the depths of the LIA in a remarkable hockey stick

    1740 brought this warming to a crashing halt. Phil Jones was so struck by it that he wrote a paper on it, and it caused him to believe that natural variability was much greater than he had previously thought. If you want to see the paper I can dig it out for you

    Tonyb

  110. Tonyb says:
    May 10, 2014 at 2:19 pm

    Many scientists believe it is a good proxy for global, or at least Northern Hemisphere, temperatures

    The 1695 to 1740 period was the most remarkable in the record, including as it does a decade, the 1730s, that came within a whisker of the hottest decade, the 1990s

    Two important points, illustrated here:

    http://www.vukcevic.talktalk.net/CET1690-1960.htm

  111. Vuk

    That first graph in particular is very interesting, graphing as it does those two periods together.

    Interesting that you have identified volcanoes. I maintain they can have a short term effect and the data seems to indicate a rapid rebound.

    Tonyb

  112. Ray Tomes says:
    May 10, 2014 at 3:50 am

    Hi Willis, you wrote:

    “Glad to, you’ll see what I mean. In my view you just need a more accurate probe … let me recommend the slow Fourier transform. Here is my periodogram of 350+ years of pseudo-data composed of two sine waves with cycles of 23.9 and 22.2 years.”

    The method that I use for cycles analysis is clearly different to yours. If I use this data it will show a peak with another peak on its shoulder. Perhaps you can separate the two cycles in the real data. With the method that I use, the procedure that I follow is quite sound.

    Thanks, Ray. I’ve used the method you describe many times, and it works. The problem I have with it is how to decide the amount of the signal to remove. For example, if the signal is just a single cycle of say 11 years, in the periodogram you get a result that peaks at 11 years, but is also significantly non-zero at 10 years 10 months, or 11 years 6 months, and so on. So if we remove just the 11-year signal, those other signals will still remain …
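    The leakage is easy to reproduce. Here is a sketch of my own, treating the “slow Fourier transform” as a least-squares sinusoid fit at each trial period (an approximation of the method, not Willis’s actual code), applied to a pure 11-year cycle sampled monthly over 350 years:

```python
# Sketch (mine) of the leakage point: fit a sinusoid at each trial period
# to a pure 11-year cycle sampled monthly over 350 years.
import math

t = [k / 12.0 for k in range(350 * 12)]               # monthly time axis
y = [math.sin(2 * math.pi * tk / 11.0) for tk in t]   # pure 11-year cycle

def fitted_amplitude(period):
    """Amplitude of the best-fit sinusoid with the given period."""
    n = len(t)
    c = [math.cos(2 * math.pi * tk / period) for tk in t]
    s = [math.sin(2 * math.pi * tk / period) for tk in t]
    # over a long record the cos and sin regressors are nearly orthogonal,
    # so simple projections are a good approximation to the full fit
    a = 2 * sum(ci * yi for ci, yi in zip(c, y)) / n
    b = 2 * sum(si * yi for si, yi in zip(s, y)) / n
    return math.hypot(a, b)

for p in (10.83, 11.0, 11.5):   # 10 yr 10 mo, 11 yr, 11 yr 6 mo
    print(p, round(fitted_amplitude(p), 3))
```

    The fitted amplitude is 1.0 at exactly 11 years, but it is still well above zero at 10 years 10 months and at 11 years 6 months, which is exactly the leftover signal described above: subtract the 11-year component and its neighbours remain in the periodogram.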

    w.

  113. milodonharlani says:
    May 10, 2014 at 10:29 am

    Willis Eschenbach says:
    May 8, 2014 at 11:44 pm

    Pseudo in this case refers to cycles that occur at approximately the same interval, not precisely, & which are subject to change, yet controlled by the same variable parameters, at least in part.

    Thanks, milodon. My problem with the term is that people use it to cover a variety of sins. It is a hand-waving term, as when people say they are looking for a 60-year pseudo-cycle, then they go out and look for nothing more or less than a 55-year cycle … if they’re looking for 55-year cycles, then they should say so and leave the “pseudo” out of it.

    As I mentioned, glaciations are themselves periodic. Icehouse phases in earth history have occurred for much longer than just since the Cambrian. They are pronounced, lengthy & severe in the Pre-Cambrian. The glacial conditions of the past 2.6 million years, ie the Pleistocene, are just the recent low point in the Icehouse that began at the Eocene/Oligocene boundary. There was a tepid Icehouse in the Mesozoic & a super-duper deep cold one in the latter Carboniferous to early Permian. Milankovitch cycles operated during those Icehouses, too.

    First, there is nothing “pseudo” about the Milankovitch cycles. They are calculable into the far past and the distant future. So I’m not sure why you bring them in.

    Second, as I’ve mentioned, no one knows why the Milankovitch cycles suddenly started causing ice ages a million or so years ago. And no one knows when or whether we will slip back into another ice age, or if we do, how long it will last.

    Which is exactly my point. These are not “pseudo-cycles”. They are very real cycles that appear and disappear.

    Cosmoclimatologists like Svensmark, Shaviv, Veizer, et al., have proposed an explanation for the apparent cyclicality of icehouses. You may not find it convincing, but from whatever cause, the Pleistocene & prior Cenozoic glaciation didn’t just pop up out of nowhere.

    Yes, and the current glaciation didn’t just “pop out of nowhere”. However, distinguishing causation in climate is never easy. I’m just saying we don’t know why the climate suddenly descended into a Milankovitch-driven bi-stable state.

    CACA focuses on the very shortest temporal units of climate, ie 30 years to 300 or at most 3000, without looking at the big picture of change on the scales of 30,000, 300,000, 3 million, 30 million, 300 million & 3 billion years. That’s how alarmists gin up “unprecedented” events, by ignoring the billions of years of prior climate proxy data.

    I’m sure CACA is a cute acronym for something. I’m also sure I don’t know what it stands for. I do know that when people use acronyms without explanation, they lose both traction and politeness points with me. The problem is I now feel like an idiot for not knowing what CACA means, like I’m the only guy who didn’t get the memo … is that the way you want your readership to view you, as someone who makes them feel like a fool? Just askin’ …

    As to the “big picture of change on the scales of 30,000, 300,000, 3 million, 30 million, 300 million & 3 billion years”, well, we have various proxies for these kinds of things over the years. However, there are a number of problems with proxies.

    The first one is encapsulated in the saying “Trees make poor thermometers.” Bear in mind the ongoing disputes about temperatures actually measured with thermometers … now consider the kinds of issues that abound with proxy data.

    The next problem is dating. Even something as apparently clocklike as the gradual deposition of sediment on the ocean floor is disturbed by compression, subsea slips and slides, and that perennially overlooked favorite, bioturbation. And there is no proxy without such dating issues.

    The next problem is confounding variables. Changes in ocean currents affect sediment deposition rates. Precipitation affects tree ring widths. Firn closure time affects the dating of ice cores. Salinity affects the temperature dependence of the Mg/Ca ratios in seashells. The list is endless.

    The next problem is expressed in the aphorism “Everything is connected to everything else, which in turn is connected to everything else … except when it isn’t.”

    For example, as mentioned above, the ice age/air age difference depends on the “firn closure time”, how many years/decades/centuries it takes for enough snow to fall on top to close off and completely encapsulate the air bubbles below. Firn closure time depends on snowfall. Snowfall depends on temperature. Then we use the air contents to estimate the temperature at the time of closure … a time of closure that in turn depends on the very temperature we are trying to measure. I’m sure you can see the problem …

    The next problem is the paucity of proxy data of a given type. The Berkeley Earth dataset contains tens of thousands of records. Compared to that, we have a handful of Mg/Ca proxy temperature datasets.

    As a result of these and other difficulties, while I invariably find such proxies interesting, I don’t put much weight on them. If they were as good as people seem to think, we could toss our thermometers and just use the proxies …

    So I view proxies quite differently than I do say the TAO buoy datasets … I don’t ignore them, I try to learn from them, but I don’t trust them one bit.

    My best to you,

    w.

  114. RC Saumarez says:
    May 10, 2014 at 11:03 am

    @Willis Eschenbach.

    You either do mathematics or you don’t.

    Computing a result is generally not proof – mathematical results are obtained by formal analysis.

    If you think I am wrong, kindly refute what I say through proper mathematics rather than hurling insults.

    I suggest you go and read a proper book on signal processing and spectral analysis that is written by a professional who has mastered the problems you are trying to address, e.g. Oppenheim & Schafer, Papoulis, or Hayes.

    When you have come to grips with their contents, you might see your errors and gain a little intellectual humility.

    Dang, RC, you sure go the long way around to say that you can’t find any errors in my work.

    As to whether you are wrong or right, I fear that’s of no interest to me. Perhaps someone else cares.

    w.

  115. Richard Barraclough says:
    May 10, 2014 at 12:00 pm

    While we’re on the subject of the CET, I see that it’s well on course for a new record this year. It’s currently more than 2 degrees C above the mean for the year to date. The warmest year was 2006, with an anomaly of 1.35 degrees, followed by 2011 with 1.23 degrees. We could end up with the warmest 3 years out of 350-odd all within the last 9 years. Mind you, that’s with only just over a third of the year gone. At this stage in proceedings, two other recent warm years (1990 and 2007) were warmer than 2014, which is in 3rd place.

    My impression is that it hasn’t been especially warm. Rather, there’s been a complete absence of any cold weather.

    Interesting, Richard. Me, I always like to take the longest-term look at the data that I can find, in order to provide a context for the information in question. To use your statement as an example, here’s the CET, with the seasonal monthly variations removed:

    Can’t say I see much cause for concern in that …

    Regards,

    w.

  116. HenryP says:
    May 10, 2014 at 12:40 pm

    Willis
    If you have a method for predicting the future evolution of a chaotic system, simple or complex, this would be the time to bring it out.

    I’d love to say I have one, but as far as I know, nobody has one.

    w.

  117. zootcadillac says:
    May 9, 2014 at 1:51 am

    Thank you for that explanation about the Thames Frost Fairs. Still, I am not seeing any obvious signal in the raw data for the Little Ice Age …

  118. @vukcevic

    COME ON, man. With similar trends of 0.03 degrees C/annum and similar temps.

    http://www.vukcevic.talktalk.net/CET1690-1960.htm

    surely you must admit that it must be a recurring cycle?
    For which only 2 can apply:

    http://www.nonlin-processes-geophys.net/17/585/2010/npg-17-585-2010.html

    surely this graph

    cannot end up at zero polar strength forever?

    something is causing the solar polar field strengths to weaken
    and then, at some point,
    to strengthen.

    what is it?
    Have you not thought about that?

  119. more on the river Thames
    I worked for some decades at the South Bank, and watched as a number of high-rise buildings and wide walkways went up where once was the river. Further down the river from the SB centre it is even more so. Narrowing the river (from the old Shell HQ down to the new London Bridge, by at least 20% or possibly more) has substantially increased its velocity, which may preclude easy freezing.
    Furthermore, the urban effect in winter around the area where the ice fairs were once held is at least 2-3 degrees C above the well-urbanized SW London suburb from which I commuted to work.

  120. I am a bit disappointed in all of you not being able to come up with a clear reason for the climate change that is coming….
    Surely you can see that the reason for the global cooling that is coming (and will continue) is what is happening at the TOA?

    http://www.woodfortrees.org/plot/hadcrut4gl/from:1987/to:2015/plot/hadcrut4gl/from:2002/to:2015/trend/plot/hadcrut3gl/from:1987/to:2015/plot/hadcrut3gl/from:2002/to:2015/trend/plot/rss/from:1987/to:2015/plot/rss/from:2002/to:2015/trend/plot/hadsst2gl/from:1987/to:2015/plot/hadsst2gl/from:2002/to:2015/trend/plot/hadcrut4gl/from:1987/to:2002/trend/plot/hadcrut3gl/from:1987/to:2002/trend/plot/hadsst2gl/from:1987/to:2002/trend/plot/rss/from:1987/to:2002/trend

    Most ironically, and for all the wrong reasons, the IPCC’s idea of diverting more money to the poorer countries around the equator is exactly what we should do, as these countries {30>x>-30} will get more rain during a cooling period, whereas above {40} it will get a lot drier.
    To prevent famines we must get the farmers (NH) to move more south….!!!!

  121. Hi Ulric
    Thanks for the link.
    Indices of the North Atlantic Oscillation and the Arctic Oscillation show correlations on the day-to-day timescale with the solar wind speed (SWS).
    Shame the paper is behind a paywall.
    You might be interested in this and have a go at it yourself:
    For some time I’ve been tracking the CET daily max, and found there is occasionally a ~27-day pseudo-cycle; the most recent well-defined one is found during 4 months in the second part of 2012. Normally daily temperature variability is subject to a multitude of factors, but ~27 days would be related to either the lunar tides or a solar factor (Bartels rotation).

    http://www.vukcevic.talktalk.net/SSN-CETd.htm

    Even a superficial look at the amplitudes would suggest the change is related either to the TSI or the solar magnetic field, rather than a tidal factor; note the Sun’s magnetic lump or preferred sunspot longitude.
    One problem is a variable delay (between 0 and 7 days); however, the amplitude ‘oscillations’ of between 3 and 4 degrees C peak-to-peak should not be ignored.

  122. vukcevic says:
    May 11, 2014 at 4:19 am

    more on the river Thames
    I worked for some decades at the South Bank, and watched as a number of high-rise buildings and wide walkways went up where once was the river. Further down the river from the SB centre it is even more so. Narrowing the river (from the old Shell HQ down to the new London Bridge, by at least 20% or possibly more) has substantially increased its velocity, which may preclude easy freezing.
    Furthermore, the urban effect in winter around the area where the ice fairs were once held is at least 2-3 degrees C above the well-urbanized SW London suburb from which I commuted to work.

    Quite true, vuk. There is one other factor, which is warmed discharge water which has been used for process cooling by various industries. I’ve never looked at the Thames, but in some US rivers and harbors the warm water from power plants and other industrial cooling uses either delays or prevents freezeup.

    w.

  123. Willis & zootcadillac
    Wikipedia quotes number of years that Thames froze, with some interesting observations.
    I was a bit sceptical about zootcadillac’s May 9, 2014 at 1:51 am post, but eventually I dug up this 1805 illustration (credit Getty Images), which would suggest that zootcadillac is correct.

  124. Willis, I have no idea what the periodicity analysis does. Is it possible that it would pick up ~7.4-year periods if they were alternating with, say, 3 to 4.5-year periods?

    I thought that the 23-24 year signal that switches into a 40+ year signal (see fig 3 on my first link by T. C. Benner), could well be shifts in AMO periodicity.
    Fig 4:

    http://www.aoml.noaa.gov/phod/docs/Zhang_Wang_JGR2013.pdf

  125. From planetary theory I would predict longer term periods at 110.7 years and 69 years, with the 110.7yr signal being more persistent, and the 69yr signal fading at times. The Benner wavelet power spectrum gives 110 and 68 years, with the latter fading through the 19th century.

  126. vukcevic says:
    May 11, 2014 at 4:19 am

    more on the river Thames
    I worked for some decades at the South Bank

    Hello Vuk

    More on that. I remember in the famous winter of 1963, when the Thames froze so solidly near Oxford that someone drove a car across it, and much further downstream, in the Thames estuary, there were ice floes in the sea. An article in one of the papers at the time (it may have been The Times), showed a thermometer trace of the water temperature from a ship which had sailed up the English Channel and into London. From being around freezing point in the Thames estuary, it rose to 10 degrees C in the heart of London, thanks to all the industrial use.

  127. I returned to London in Feb 1963 after 2 1/2 years in Cyprus. Luckily I had bought a sheepskin coat, and didn’t get out of it almost all summer. I remember it was the coldest London winter since 1947. Yes, the Thames did freeze over up to Windsor. And it was the first time I got chilblains.

  128. Richard Barraclough says:
    May 12, 2014 at 7:29 am
    …….
    Hi Richard
    Most of the heavy industry has probably long gone, but there is a lot of other effluent going in. I assume that in 1963 the two large power stations, Battersea and South Bank (now Tate Modern), were still active. I worked for many years in a 25-storey building (ITV, old LWT) that was completed in 1970 on what was a mud flat at low tide; it is now separated from the river embankment wall by a 50-yard-wide tree-lined walkway. It sits on a concrete platform supported by hydraulic jacks, which are from time to time readjusted to keep it level.

  129. I remember the Battersea Gardens, a permanent fun fair, opposite the station. But they burned coke not coal, so the sulphur dioxide was removed and no smoke. Well I can’t remember seeing any smoke, and that was in the late 40s and early 50s.

  130. Hi bushbunny
    I had in mind not the CO2, but the release of cooling water back into the river, raising its temperature. I am not familiar with the details, but I believe that both the Battersea and South Bank stations were built on the River Thames south bank, for both cooling water and coal delivery by barge, besides being in the centre of London, where the demand for electricity was greatest.

    See RB’s comment above: “An article in one of the papers at the time (it may have been The Times), showed a thermometer trace of the water temperature from a ship which had sailed up the English Channel and into London. From being around freezing point in the Thames estuary, it rose to 10 degrees C in the heart of London, thanks to all the industrial use.”

  131. Hello Vuk

    Are you at ITV? I know that building well. I have a good friend who works there, and she has invited me in a couple of times to watch “This Morning”, which is the show she works on.

    Next time, I shall be able to astound them with my knowledge of the hydraulic jacks under the building!

  132. HenryP says:

    May 10, 2014 at 1:46 pm

    @Richard
    Hi, after your reply, I am like: do I know you? As you know? we have a good climate here, I think the weather in England is horrible…..

    Hello Henry

    No – I don’t know you, but from one or two things you have written in the past I guessed you live in Pretoria. I was in Jo’burg for a few years, and also on a farm halfway between Potch and Klerksdorp. We contributed rainfall readings to the SA Weather Bureau, and were proud of the fact that we had a continuous record going back to 1922. I spent a bit of time looking for trends and cycles in that data, but the extreme variability was more of an issue. Sometimes one of the winter months would have all its rainfall in an hour or so. You can probably look up the farm – it’s called Bushy Bend.

    I’m back in England for the moment – but as you say – not for the weather!

  133. Hi Richard
    I’d be interested in that rain record, if you have it?
    I know this is my next step after updating my own temp. tables.
    I did find rain dropped about 15% in Wellington, NZ from 1930-1940 compared to the average.
    I have to go now, but we will talk again. We have now 5 months of clear skies here. The sun feels hot but the temp. is not so hot. As I know, the sun is “hotter” in a cooling period. The TOA molecules are protecting us from harm, making it cooler….

  134. Hello Richard
    Yes, I was, since it opened as LWT’s new HQ; recently retired, lots of fun, long hours and many happy memories. If you ever venture to the car park you will see heavy flood protection doors; back then the high tides came up against a concrete wall (still visible from the ground-floor cafeteria) about 10 feet away. On the river front, of the major original buildings, only the OXO Tower is still standing; the next-door IBM building was the Daily Mail printing works, and the Kings College building across the car park I think was Boots.

  135. I am no expert, but the Thames is mainly fresh water and starts inland. However, in the London basin and port, dolphins and whales have been seen after they made Greater London a smoke-free zone. So that has to be mainly salt water, I would think.
    When I started work at the Bank of England in Sept. 1958, I travelled to work by steam train (from Potters Bar); they were filthy things. My petticoats had to be changed twice a week because of a two-inch dark smut stain on their hems. This didn’t happen when they introduced diesel trains.

  136. Willis Eschenbach says:
    May 10, 2014 at 8:30 pm

    CACA is Catastrophic Anthropogenic Climate Alarmism, or CACCA if you prefer to throw in Change, as has often been stated on this blog.

    Milankovitch cycles are pseudo or quasi because they aren’t precisely the same number of years in duration, yet are composed of the same orbital mechanical factors, superimposition of which naturally produces differing results. Nevertheless, for the past million years or so, the ~100,000 year cycle has dominated.
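    That superposition effect is easy to illustrate. This sketch (my own; the amplitudes are arbitrary illustrative choices) sums idealized sinusoids at roughly the eccentricity (~100 kyr), obliquity (~41 kyr) and precession (~23 kyr) periods and shows that the maxima of the sum are not evenly spaced:

```python
# Sketch (mine): superimposing idealized sinusoids at roughly the
# eccentricity (~100 kyr), obliquity (~41 kyr) and precession (~23 kyr)
# periods. The amplitudes are arbitrary illustrative choices; the point
# is that the maxima of the sum are not evenly spaced.
import math

dt = 0.5                              # time step in kyr
n = int(1000 / dt)                    # one million years
sig = [math.sin(2 * math.pi * k * dt / 100)
       + 0.6 * math.sin(2 * math.pi * k * dt / 41)
       + 0.4 * math.sin(2 * math.pi * k * dt / 23) for k in range(n)]

# local maxima above a threshold, and the gaps between them (in kyr)
peaks = [i for i in range(1, n - 1)
         if sig[i] > sig[i - 1] and sig[i] > sig[i + 1] and sig[i] > 1.0]
gaps = [(b - a) * dt for a, b in zip(peaks, peaks[1:])]
print(sorted(set(round(g) for g in gaps)))   # a spread of intervals, not one
```

    The gaps between strong maxima come out as a spread of different intervals rather than one fixed period, which is the sense in which fixed orbital components still produce quasi-periodic output.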

    Earth was already in an Icehouse phase long before the onset of the Pleistocene glaciations. Antarctica’s ice sheets started spreading at the Eocene/Oligocene boundary. The key geological event behind the Pleistocene glaciations was the closure of the Isthmus of Panama, sending the planet into depths of iciness not seen since the Carboniferous/Permian Icehouse.

    Climate is more or less cyclic on timescales on the order of billions of years to tens of millennia, from various causes. Icehouses occur at roughly 150 million year intervals, for instance, as shown for the Phanerozoic here (same applies to the Pre-Cambrian):

    http://www.globalwarmingart.com/wiki/File:Phanerozoic_Climate_Change_png

    So I don’t see any reason why climate might not also be cyclic on the order of decades, centuries & millennia. With apologies to you, Dr. Svalgaard & others who disparage cyclomaniacs, IMO the evidence in favor of cycles on these shorter time frames is more persuasive than that against the proposition. Maybe the evidence for chaos is better, but I’m not convinced.

    I’ve predicted that the 30 years 2007 to 2036 will be statistically significantly cooler than 1977 to 2006 (unless reality be adjusted out of existence by the political powers that be), based upon the conclusions I’ve drawn from studying the inadequate data available for 1947 to 1976, 1917 to 1946, 1887 to 1916 & 1857 to 1886. We’ll see. Or someone will see, since I probably won’t be around in 2037.

  137. Milodonharlani says
    I’ve predicted that the 30 years 2007 to 2036 will be statistically significantly cooler than 1977 to 2006

    Henry says
    You are right of course. Your prediction fits mine as well. The cooling has already started from 2002 and is accelerating further down now, as my updates to my own data set seem to confirm.
    In this respect I also begin to doubt the “official” global data sets. There is too much fiddling the data going on now. But I will have the hard evidence from the update of my own data set in a month or so.

    It would be interesting for me to find out how you came to 2036?

  138. Well folks, I like the mention of an ice planet; we are in, and have enjoyed, a warmer interstadial or interglacial warm period, apart from the mini ice ages. I hope we don’t enter a mini or full glacial period in my lifetime, although Australia didn’t seem to suffer from glaciers like the Northern Hemisphere. Oh, the Australian budget was announced last night: the clean energy budget was cut, and they will cut the carbon tax and mining tax.

Comments are closed.