Sharpening a Cyclical Shovel

Guest Post by Willis Eschenbach

There are a number of lovely folks in this world who know how to use a shovel, but who have never sharpened a shovel. I’m not one of them. I like to keep my tools sharp and to understand their oddities. So I periodically think up and run new tests of some of the tools that I use.

Now, a while ago I invented a variant of Fourier analysis that I called the “Slow Fourier Transform”. I found out later I wasn’t the first person to invent it—Tamino pointed out that it was first invented thirty years ago, and that it is actually called the “Date-Compensated Discrete Fourier Transform”, or DCDFT (Ferraz-Mello, S. 1981, Astron. J., 86, 619). Figure 1 below shows an example of the DCDFT method in use, a periodogram of the cycles in the sunspots:
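For those who want to play along at home, here is a minimal sketch of the core idea in Python (an illustration, not the exact code I use): at each trial period, fit a sine and a cosine by least squares and take the fitted amplitude. The full DCDFT adds a date-compensation refinement for unevenly sampled data; this stripped-down version just conveys the idea.

```python
import numpy as np

def sft_amplitude(t, y, periods):
    """Least-squares sinusoid fit at each trial period: the core idea
    behind the DCDFT / "Slow Fourier Transform". Handles uneven sampling."""
    y = y - np.mean(y)
    amps = []
    for P in periods:
        w = 2.0 * np.pi / P
        X = np.column_stack([np.cos(w * t), np.sin(w * t)])  # design matrix
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)         # best-fit cosine/sine coefficients
        amps.append(np.hypot(coef[0], coef[1]))              # fitted amplitude at this period
    return np.array(amps)

# quick check: a pure 11-year cycle sampled annually peaks at P = 11
t = np.arange(316.0)
y = np.sin(2 * np.pi * t / 11.0)
periods = np.arange(2, 101)
print(periods[np.argmax(sft_amplitude(t, y, periods))])  # -> 11
```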

Figure 1. Periodogram, annual sunspots. The horizontal axis shows the length of possible cycles from one to 100 years, and the vertical axis shows the strength of those cycles.

Now, in Figure 1 we can see the familiar 11-year sunspot cycle in the data, along with somewhat weaker sunspot cycles of 10 and 12 years. It also APPEARS that we can see the claimed ~90-year “Gleissberg Cycle”.

However, a deeper examination of the sunspot data shows that the “Gleissberg Cycle” only exists in the first half of the data, and even there it only exists for a couple of cycles. Figure 2 shows a Complete Ensemble Empirical Mode Decomposition of the same sunspot data. The upper graph in Figure 2 shows the underlying empirical modes, and the lower graph shows their frequency:

Figure 2. CEEMD, annual average sunspot numbers. UPPER GRAPH: Panel 1 shows the raw sunspot data. Panels C1 through C7 show the seven empirical modes, in order of increasing period. The final panel shows the residual. If you add the bottom eight panels together, you get the raw data shown in the first panel. LOWER GRAPH: Periodograms of the empirical modes. These show the nature of the individual modes.

The ~90-year purported “Gleissberg cycle” is shown in empirical mode C6. In the lower graph in Figure 2, we can see that after the 11-year cycles, C6 has the second-strongest cycle in the data … but in the upper graph, we can see that whatever signal exists, it is actually fairly short-lived, dying out after only a couple of cycles.

And that means that my periodogram shown in Figure 1 was misleading me—the peak at around 90 years was not actually significant. It only lasts a couple of cycles.

So I wanted to sharpen my periodogram tool so it would indicate which cycles are statistically significant. In the past I’ve tested my method by looking at periodograms of square waves, and of individual sine waves, combinations of sine waves and the like.

This time I thought “What I want to test next is something totally featureless, something like my imagination of the Cosmic Background Radiation. That would help me distinguish random noise from significant cycles.”

Well, of course I don’t have the CBR to test my periodograms with, so here was my plan for generating some random noise.

I generated a series of sine waves at all periods from one year to thousands of years. They all had the same amplitude. Next, I randomized their phases, meaning that they all started at random points in their cycle. I figured nothing could be more generic and bland than the sum of a bunch of sine waves of equal strength at all possible periods. Then I added them all together, and plotted the result.
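In code, the construction looks something like this (a sketch; the exact range and spacing of the periods are incidental details, and since a period of one year sampled annually is degenerate, this version starts at two):

```python
import numpy as np

rng = np.random.default_rng()

def random_phase_sum(n, max_period=None):
    """Sum equal-amplitude sine waves at every integer period from 2 up to
    max_period (default: the record length), each with a random phase."""
    if max_period is None:
        max_period = n
    t = np.arange(n)
    y = np.zeros(n)
    for P in range(2, max_period + 1):
        phase = rng.uniform(0.0, 2.0 * np.pi)     # random starting point in the cycle
        y += np.sin(2.0 * np.pi * t / P + phase)  # identical amplitude for every period
    return y

series = random_phase_sum(12800)  # one realization like those in Figure 3
```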

Now, I’m not sure what I expected to find. Something like a hum, something kind of soothing. Or perhaps like on the ocean, when you have small wind-ripples on top of a chop on top of a swell with a bigger swell underneath that. A harmony-of-the-spheres kind of thing is what I thought I’d get, complex but smooth like some mathematical Bee Gees harmony … however, this was not the case at all. Figure 3 below shows a sample of the many different results I’ve generated by adding together thousands of sine waves of identical amplitude covering all the periods.

Figure 3. Ten examples of what you get when you add together thousands of sine waves evenly blanketing an entire range of frequencies.

These results were surprising to me for several reasons. The first is their irregular, jagged, spiky nature. I’d figured that because these are the sum of smooth sine waves, the result would be at least smoothish as well … but not so at all.

The next surprise to me was the steepness of the trends. Look at Series 4 at the lower left of Figure 3. Note the size and speed of the rise in the signal. Or check out Series 3. There is a very steep drop in the middle of the record.

The next thing I hadn’t foreseen is the fractal, self-similar nature of the signal. Because it is composed of similar sine waves at all time scales (or at least a wide range of them), the variations at shorter time scales are very similar to the variations at larger scales.

I was also not expecting the clear long-term cycles and trends shown in the various random realizations. Regarding the cycles, I had expected that the various sine waves would cancel each other out more than they did, particularly at longer periods.

And regarding the trends, I had thought that because none of the underlying sine waves contained a trend, then as a result the sum of them wouldn’t have much of a trend either. I was wrong on both counts. The signals contain both clear cycles and clear trends.

Another unexpected oddity, although it made sense after I thought about it, is that like a variety of natural climate datasets, these signals all have very high Hurst exponents. The Hurst exponent measures what has been described as the “long-term persistence” of a dataset. Since all of these signals are the sum of unchanging sine waves which assuredly have long-term persistence, it makes perfect sense that these signals also have a high Hurst exponent.
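For those who want to check that for themselves, here is a rough sketch of the classic rescaled-range (R/S) estimate of the Hurst exponent (one common estimator among several; I'm not claiming it is the one used for the numbers above):

```python
import numpy as np

def hurst_rs(y, min_chunk=8):
    """Crude rescaled-range (R/S) Hurst estimate: the slope of
    log(R/S) versus log(window length)."""
    y = np.asarray(y, float)
    n = len(y)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            chunk = y[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())  # mean-adjusted cumulative sum
            r = dev.max() - dev.min()              # range of the cumulative deviations
            s = chunk.std()                        # standard deviation of the window
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

# white noise gives H near 0.5; persistent series push H toward 1
print(hurst_rs(np.random.default_rng(0).normal(size=4096)))
```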

Upon contemplation, I also note that these series are totally deterministic, but with a very long repeat time. For example, the repeat time of a signal containing all integer periods from 2 to 100 is the least common multiple of those periods, about 6.972038e+40 time steps.
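That number is easy to verify:

```python
from functools import reduce
from math import lcm  # Python 3.9+

# least common multiple of every integer period from 2 to 100
repeat = reduce(lcm, range(2, 101))
print(f"{repeat:.6e}")  # -> 6.972038e+40
```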

The strangest part of all of this is that the signals look quite lifelike. By that, I mean that they look like a variety of climate-related records. Any one of them could be the El Niño Index, or the temperature of the stratosphere, or any of a number of other datasets.

So after I generated my random datasets composed solely of unvarying sine waves, I used my periodogram function to see what the apparent frequencies of the waves were. Here is a sample of a few of them:


Figure 4. Periodograms covering waves from one to 3200 cycles, in a dataset of length 12,800.

Now, at the left end of each of the graphs in Figure 4 we can see that the periodograms are accurate, showing all cycles as being the same small size. This is true up to about 100 cycles, or about 1/30 of the length of the dataset. But as we get further and further to the right, where we are looking at longer and longer cycles, we can see that we get larger and larger random peaks in the periodogram. These can be as large as forty or fifty percent of the total peak-to-peak range of the raw signal.

In order to gain a better understanding of what’s going on, I plotted all of the periodograms. Then I calculated the mean and the range of the errors, and developed an equation for how much we can expect in the way of random cycles. Figure 5 shows that result.
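In outline: generate many random realizations, take the periodogram of each, and find the level that 95% of the random amplitudes stay below at each trial period. A sketch, reusing the two functions sketched earlier (the trial count and period range are illustrative, and it is slow as written; shrink n for a quick test):

```python
import numpy as np

# reuses random_phase_sum() and sft_amplitude() from the sketches above
n, n_trials = 12800, 100
t = np.arange(float(n))
periods = np.arange(2, 3201)  # trial periods, as in Figure 4
amps = np.array([sft_amplitude(t, random_phase_sum(n), periods)
                 for _ in range(n_trials)])
limit95 = np.percentile(amps, 95, axis=0)  # the dotted line in Figure 5
```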

Figure 5. Periodograms of 100 datasets formed by adding together unvarying sine waves covering all periods up to the length of the dataset, in this case 12,800. The dotted line indicates the level below which we find 95% of the random data.

I also looked at the same situation at various dataset lengths, down to about 200 data points. Here, for example, is the situation regarding a random dataset of length 316, the same length as the annual sunspot record.

Figure 6. Periodograms of 100 datasets formed by adding together unvarying sine waves covering all periods up to the length of the dataset, in this case 316. The dotted line indicates the level below which we find 95% of the random data.

Now, this has allowed me to develop a simple empirical expression for the 95% confidence limit.  As you can see, the error increases with increasing length of the period in question.
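I won't clutter the post with the exact expression; for illustration, one simple form that fits curves like these is a power law in the period, recovered from the Monte Carlo envelope above by a straight-line fit on log-log axes (the power-law form is an assumption for this sketch, not necessarily the formula I settled on):

```python
import numpy as np

# hypothetical form: limit95 ~ a * P**b, fit to the envelope
# computed in the previous sketch
b, log_a = np.polyfit(np.log(periods), np.log(limit95), 1)
print(f"95% limit ~ {np.exp(log_a):.3g} * P**{b:.2f}")
```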

And this is the precise sharpening of the tool that I was looking for. Let me start by revisiting the first figure above, the periodogram of the sunspots, and I’ll use the same error measure of the amplitude of 95% of the random cycles:


Figure 7. As in Figure 1, but with the addition of the line showing the extent of 95% of the random errors as described above.

As you can see, this distinguishes the valid signal at 11 years from the two-cycle fluctuation at 88 years. If you compare this to Figure 6, you can see that a cycle at 88 years needs to be quite large in order to be statistically significant.

Now, I mentioned above that the random datasets generated by this method look very similar to natural datasets. As evidence of this, Series 7 in Figure 3 above is not a random dataset like the others. Series 7 is actually the detrended record of the historical variations in ∆14C, which I discussed in my previous post … compare that actual observational record to, say, Series 2. There’s not a lot of difference.

And this brings me to the reason for this post. I’ll start by quoting from my previous post linked just above, which discussed the results of a gentleman posting as “Javier”, who in turn used the results of Clilverd et al. If you have not read that post, please do so, as it is central to these findings. In that previous post I’d said:

Let me recapitulate the bidding. To get from the inverted 14C record shown in Figure 3 to the record used by Clilverd et al, they have

  • thrown away three-quarters of the data, 
  • removed a purported linear trend of unknown origin from the remainder, 
  • subtracted a 7000-year cycle of unknown origin, and 
  • ASSERTED that the remainder represents solar variations with an underlying 2,300 year period …

The series shown as “Series 7” above is the result of the first two of those steps. As you can see, there is claimed to be a 7000-year signal that they say is “possibly caused by changes in the carbon system itself”. However, there is no reason to believe that this is anything other than a random variation, particularly since it does not appear in the three-quarters of the data that they’ve thrown away … but let’s set that aside for the moment and look at the result of subtracting the purported 7,000-year cycle from the ∆14C data. Here is the periodogram of that result:

Figure 8. Periodogram of the ∆14C data after removal of a linear trend of unknown origin and a 7,000-year cycle of unknown origin.

Note that this seems to indicate a cycle of about 960 years, and another at about 2200 years … but are they statistically significant?

In the comments to my post, Javier replied and said that I was wrong, that there indeed is a ~2400-year cycle in the ∆14C data. I pointed out to him that a CEEMD (Complete Ensemble Empirical Mode Decomposition) shows that in fact what exists is several cycles of about 2100 years in length, and then sort of a cycle of 2700 years length, and then another short cycle. This result is seen in the empirical mode C9 below:

Figure 9. CEEMD of the ∆14C data after removal of the linear trend and a 7,000-year cycle. Panel 1 shows the raw ∆14C data. Panels C1 through C9 show the nine empirical modes, in order of increasing period. The final panel shows the residual. If you add the bottom ten panels together, you recover the raw data shown in the first panel.

In empirical mode C9 above you can see the situation I described, with short cycles at the start and end and a long cycle in the middle.

Mode C8 is also interesting, as it has a clear regular ~1000-year cycle at the beginning. Strangely, it tapers off over the period of record to, well, almost nothing. Again, I see this as evidence that this is simply a random fluctuation rather than a true underlying cycle.

In my discussion with Javier, I held that in neither case are we seeing any kind of true underlying cyclicity. And my thanks to Javier for his spirited defense, as it was this question that has led me to sharpen my periodogram tool.

And to complete the circle, Figure 10 below shows what my newly honed periodogram tool says about the ∆14C data:

Figure 10. As in Figure 8, periodogram of the ∆14C data after removal of a linear trend of unknown origin and a 7,000-year cycle of unknown origin, but this time with the addition of the line showing the limit of 95% of the cycles created by the addition of sine waves.

I note that neither the ~1,000-year nor the ~2,400-year cycle exceeds the range of 95% of the random data. It also bears out the CEEMD analysis, in that the ~1,000-year period shows more complete and more regular cycles than the ~2,400-year period. As a result, it is closer to significance than the ~2,400-year cycle.

Conclusions? Well, my conclusion is that while the ~88-year “Gleissberg cycle” in the sunspots and the ~1,000-year and ~2,400-year cycles in the ∆14C data may be real, solid, and persistent, I find no support for those claims in the data we have at hand. The CEEMD analysis shows that none of these signals is either regular or sustained … and this conclusion is supported by my analysis of the random data. The fluctuations that we are seeing are not distinguishable from random fluctuations.

Anyhow, that’s what I got when I sharpened my shovel … comments, questions, and refutations welcome.

My best to everyone, and my thanks again to Javier,

w.

As Always: I, like most folks, can defend my own words and claims. However, nobody can defend themselves against a misunderstanding of their own words. So to prevent misunderstanding, please quote the exact words that you disagree with. That way we can all be clear regarding the exact nature of your objection.

In Addition: If you think I’m using the wrong method or the wrong dataset, please link to or explain the right method or the right dataset. Simply claiming that I am doing something the wrong way does not advance the discussion unless you can show us the right way.

More On CEEMD: Noise Assisted Data Analysis

Admin
November 3, 2016 11:49 pm

Nicely done. Thanks Willis. You’ve shown that certain types of statistical analysis create signal from noise. Eureka! gives way to “aww, dammit!”.

“The first principle is that you must not fool yourself – and you are the easiest person to fool.” – Richard P. Feynman

mark - Helsinki
Reply to  Anthony Watts
November 4, 2016 12:11 am

Yep, data processing creates a signal. CBR is one such example.
Interesting read Willis, good stuff. Learned a thing or two here.

Chimp
Reply to  mark - Helsinki
November 4, 2016 9:32 am

The Cosmic Microwave Background Radiation doesn’t result from signal processing. It is a signal.

Greg
Reply to  mark - Helsinki
November 4, 2016 12:48 pm

The radiation is a signal. Its variation across the sky is noise that is over processed to produce a spurious “signal”.

Chimp
Reply to  mark - Helsinki
November 4, 2016 4:54 pm

To what “processing” by Planck do you object?

Greg
Reply to  Anthony Watts
November 4, 2016 6:54 am

Yes, I suspect that is exactly what Willis has done here. 😉
The result fitted his expectations and he did not question why his “random” data did not have a flat spectrum. We all get caught out by confirmation bias. However, it could be reworked to produce a useful result. The intent and the principle were sound science. He just made a slip in using equal period samples. See below.

Greg
Reply to  Greg
November 4, 2016 7:15 am
george e. smith
Reply to  Anthony Watts
November 4, 2016 2:06 pm

I have sharpened a spade; but not a shovel. I guess we used to break up the ground first before shoveling it out of the hole.
g

catweazle666
Reply to  george e. smith
November 4, 2016 5:16 pm
Mike Rossander
Reply to  george e. smith
November 6, 2016 4:02 pm

Sorry but your gardening site fails. While spades are a subset of the class of shovels, the spade is always the one with the pointy end. It looks like the spade in playing cards. The word is derived from Old English spadu, and is related to the Greek spathē, the blade of a sword.

Reply to  george e. smith
November 7, 2016 7:38 am

Just need to call a spade a spade.

1sky1
Reply to  Anthony Watts
November 4, 2016 4:24 pm

Nicely done. Thanks Willis. You’ve shown that certain types of statistical analysis create signal from noise.

Actually, what has been shown is that total lack of analytic grounding leads to gross miscomprehension of what certain exercises reveal. Because the time-series synthesized here are sums of (random-phase) sinusoids uniformly spaced in PERIOD, they cannot be expected to produce effectively flat periodograms, which are uniformly spaced in FREQUENCY. The inherent spectral window (dictated entirely by the record length) winds up integrating the spectral content over a FIXED analysis bandwidth that includes more and more sinusoidal components as frequency decreases, no matter by what algorithm the FT is computed. Thus the total variance shown in each band necessarily rises with decreasing frequency.
By clinging to the primitive notion that period, rather than frequency, is the fundamental variable of spectrum analysis, Willis simply winds up with dull conclusions, rather than sharpened tools. Feynman’s caveat rings richly ironic here.
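In equation form: with one component per unit interval of period P, and f = 1/P, the density of components along the frequency axis is

\[ \frac{dN}{df} = \left|\frac{dP}{df}\right| = \frac{1}{f^{2}} \]

so a fixed analysis bandwidth Δf near frequency f collects roughly Δf/f² equal-variance sinusoids, and the summed variance per bin necessarily rises as frequency falls.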

1sky1
Reply to  1sky1
November 5, 2016 2:43 pm

Nothing I said here is restricted to white noise. It applies equally well to red noise, or even to highly structured random signals with a continuous power density.
If Willis had even a modicum of analytic comprehension, none of the features of his synthesized quasi-red-noise time series would have been surprising to him; not the irregular high-frequency variations, nor the steepness of occasional “trends,” nor the apparent persistence of longer-period “cycles.” In fact, such “life-like” features are common in autocorrelated time series. No competent analyst would have come to the impressionable conclusion:

The signals contain both clear cycles and clear trends.

But, by blindly “going a different direction entirely” he misled not only Anthony but many others here into thinking that certain analysis techniques “create signal from noise.” That’s what happens when amateurs who have no analytic understanding of Fourier synthesis or analysis run computer programs that produce “surprising” results.

dudleyhorscroft
November 4, 2016 12:07 am

Looks like you have found another way to generate “Hockey sticks”!

Reply to  dudleyhorscroft
November 7, 2016 7:44 am

I’m not sure, but it looks like Willis’ analysis reinforces the idea that to show significance, you need a longer series compared to the cycle you’re investigating. That dashed yellow line for 95% of the data might be throwing out valid cycles just because the time series is too short.
What would the Gleissberg Cycle show with a much longer series? (I have no clue.)

Reply to  Bob Shapiro
November 7, 2016 11:09 am

Gleissberg is here,
whether anyone likes it or not.
It has a sine wave, with 43 years of cooling and 43 years of warming.
Must say the warming is over for 2 decades already but it has been ‘erased’ from the records. It was easy to make it disappear as it was only minus a few tenths of a degree K. However, the next two decades will be a bit more difficult “to cover up” …
Wait till the snow stands a meter high at your door when spring is starting….

Jarryd Beck
November 4, 2016 12:24 am

What I find interesting is how much noise can look like a signal. This complements the challenge someone posted a while back of finding signals out of 1000 data sets, some of which have trends. What’s to say that our climate isn’t just a whole bunch of noise with hundreds of drivers?

Reply to  Jarryd Beck
November 4, 2016 7:22 am

The ghosts in the tape-recorder with the blank tape..
I am not disagreeing with anything in the post, but it does seem a fairly elaborate way to identify what white noise with no frequency shaping looks like.
A nice back-biased Zener diode and a spectrum analyser or oscilloscope would show the same.

Greg
Reply to  Leo Smith
November 4, 2016 8:41 am

If this was producing white noise it would be flat.

Greg
Reply to  Leo Smith
November 4, 2016 8:43 am

Since this is an FT it looks like figure 5 is showing 1/f; a power spectrum would show 1/f², i.e. red noise.

ferd berple
Reply to  Jarryd Beck
November 4, 2016 11:35 am

how much noise can look like a signal
============================
I created an excel spreadsheet with 1000 “time-periods”. Starting at zero, for each time period, I randomly added or subtracted 1 from the previous time period. The results were virtually identical to Figure 3 above.
Climate is a fractal. A random walk down a hallway, with walls on each side, about 10C apart.
http://c3headlines.typepad.com/.a/6a010536b58035970c017c37fa9895970b-pi
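The same experiment takes a couple of lines (a sketch of the spreadsheet procedure described above):

```python
import numpy as np

# start at zero; at each of 1000 time periods, randomly add or subtract 1
rng = np.random.default_rng()
walk = np.cumsum(rng.choice([-1, 1], size=1000))
```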

Greg
Reply to  ferd berple
November 4, 2016 12:52 pm

Strange that we can’t see the huge rise of the “anthropocene” on the graph !

Chimp
Reply to  ferd berple
November 4, 2016 5:25 pm

Ferd,
The Mesozoic CO2 looks ‘way too low. It’s not clear from the graph’s accompanying sources where you got the CO2 reconstruction. COPSE or Rothman? GEOCARB results are higher.
Usual figures are c. 1750 ppmv for the Triassic average (vs. your 210 ppm), 1950 for the Jurassic and 1700 for the Cretaceous.

Chimp
Reply to  ferd berple
November 4, 2016 5:31 pm

Royer compilation of paleoproxy data?

Duster
Reply to  ferd berple
November 8, 2016 3:32 pm

Ferd, a far better, and standard, source for atmospheric CO2 estimates over the Phanerozoic is the Geocarb III plot, which is the (badly) reproduced parent of the CO2 curve in that plot you provide the link for. The plot below shows Geocarb III along with several other, less robust, models.
I don’t know where the temperature estimates were derived from. They approximate a similar plot by JoNova that Skepticalscience was very critical of. Jo was of course pointing out that there is no apparent correlation between CO2 and temperature.
http://static.skepticalscience.com/pics/nova_past_climate1.gif

george e. smith
Reply to  Jarryd Beck
November 4, 2016 12:16 pm

If you look at graphs of TSI data from satellites (real TSI; not Trenberth faux TSI), like Dr.Svalgaard has posted numerous times, you will find that the typically eleven year 0.1% amplitude cyclic variation appears to get “noisier” at the TSI minimum part of the cycle.
I believe Leif has also stated several times that this is NOT NOISE; it is simply an increase in short term fluctuations of the real TSI signal around times of the minima.
And I suspect that Leif could give us a good solar physics explanation of the cause of this. Well at least to the extent that solar physicists understand that cause.
It seems (to me) to be quite implausible that true noisiness of the signal which is such a tiny fraction of the variable value (TSI) would display such a fluctuation increase at the minima of the cycle (of TSI value). The resolution of true changes in observed TSI would seem to be much better than this fluctuation, so it truly is real signal.
As the saying goes; ” A noisy noise annoys an oyster. ” which was a proposition I once saw proven mathematically, at least in the spoof branch of mathematics. Along with such other mysteries as: “Why is a mouse, when it spins ? ” or ” A rolling stone gathers no moss. ”
Why those particular spoof proofs stick in my memory (nothing much else does), I do not know.
G

george e. smith
Reply to  Jarryd Beck
November 4, 2016 2:10 pm

Well the noise that one most often gets to “look at” is often on an oscilloscope screen, so it is not a stationary time series.
g

Reply to  Jarryd Beck
November 4, 2016 2:57 pm

Jarryd,
“What’s to say that our climate isn’t just a whole bunch of noise with hundreds of drivers?”
For one thing, there’s only one driver, which is the Sun. The thing is that the Sun not only drives the system but as part of the system’s response to that forcing, the system may change where this change is often misinterpreted as a forcing (driving) influence because the changes may influence the surface temperature. Consensus climate science doesn’t distinguish between changes to the input forcing (the Sun) and changes to the system (changing CO2 levels and varying clouds) and this contributes to its phenomenal failure.

Peter Sable
November 4, 2016 12:28 am

Willis:
I’ve been creating some tools like this in parallel myself, so I’ll have a lot more comments when it’s not 12:19am, but my initial stab is that what you are doing is estimating the spectrum of the noise on a single assumption that there’s a linear frequency sum of sine waves. That may not represent the actual noise spectrum of the environment that’s creating the data you are comparing the noise to. For example it would be grossly unfair to use this comparison for truly gaussian noise. Real world signals vary from violet noise to red noise. What you have here looks about like pink-red noise (1/f spectrum). I’m not sure I’d agree that the C14 calibration signal is pink noise, it looks more towards grey-pink noise to me. I checked error bars in the data set and those are gaussian (white noise).
Based on my analysis of the spectrum I’d say the 1000 year signal is above 95% confidence and the 2000 year signal is close.
The papers I cite below show in more refined detail the technique for finding the 95% confidence interval of the spectrum of the signal. You got most of it correct except estimating the correct spectrum. They show how to do that using AR analysis on the original signal. Whether they are completely correct I can’t be sure. Please enjoy the reading.
best regards,
Peter
http://paos.colorado.edu/research/wavelets/bams_79_01_0061.pdf
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/softlib/redfit/redfit_preprint.pdf
https://en.wikipedia.org/wiki/Colors_of_noise

george e. smith
Reply to  Willis Eschenbach
November 4, 2016 8:49 am

So you added thousands of equal amplitude sine waves. That sounds like a purely mathematical exercise. We know what a sine wave is; a purely fictitious mathematical function, with many interesting properties.
I believe you said that you started each at a “random” phase, which begs the question of what gave you the random origin of each wave. I don’t think you explicitly stated how you got the frequencies; are they just random real numbers?
Well of course in mathematics you can start with whatever rules you want to dictate, and then investigate the consequences.
The part I don’t understand; except your right to do it, but WHY the exact same amplitude for all ??
As I said; this is you game, you made the rules. But what is the significance of fixing the sinusoid amplitudes ??
Do you know of any physical phenomenon that you think might behave like your model ?
Einstein invented Einstein waves with his mathematical game that he called general relativity. We seem to have found a gravitational system that replicates an Einstein wave; so far at least two of them.
What system would you suppose would emulate your model ??
G

Greg
Reply to  Peter Sable
November 4, 2016 4:31 am

Yes Peter. A lot depends, as always, on what the null hypothesis spectrum should be. I think Willis was intending to produce white noise here, but used equal period intervals, not equal frequency intervals.

Greg
Reply to  Greg
November 4, 2016 7:07 am

There will be a reliable random number generator in Willis’ favoured R package. That may be a better (i.e. tested) and faster way to get a random signal to start the analysis.

Greg
Reply to  Peter Sable
November 4, 2016 4:38 am

The 14C data can not be analysed in this way because the older data have been attenuated FAR more than the more recent parts of the data due to the natural decay process. It needs a much more complex analysis to try to reconstruct the original signal before any freq analysis can be done.
The spectrum of the detrended calibration curve is spurious in itself as a solar proxy, so there is no point in applying any noise profile for comparison and significance testing.

November 4, 2016 1:32 am
Reply to  Khwarizmi
November 4, 2016 10:10 am

Khwarizmi, dictionary.com is quite limited. Try a simple Google search for technical terms: https://www.google.com/search?q=periodogram
“A periodogram calculates the significance of different frequencies in time-series data to identify any intrinsic periodic signals. A periodogram is similar to the Fourier Transform, but is optimized for unevenly time-sampled data, and for different shapes in periodic signals.”

Dermot O'Logical
November 4, 2016 2:06 am

Willis – with apologies for hijacking/directing attention to another wuwt thread, would you be able to reply to comments both I and afonzarelli made over here, regarding the presence of an 11 year signal in temperature data?
https://wattsupwiththat.com/2016/10/31/sun-quiet-again-as-colder-than-normal-winter-approaches/
If I’ve understood afonzarelli correctly, Dr Roy Spencer has produced a chart demonstrating a correlation of TSI variance with HadCRUT3 temps. If that is correct, why has that not shown up in your periodogram works?

Peter
November 4, 2016 2:12 am

Many years ago as a grad student I was helping with the analysis of a new data compression algorithm that is now quite popular. As a test I wanted to throw pseudo random bit sequences at it because of course it would not be able to compress them since there would be no patterns for it to learn and exploit. Well much to my surprise the new algorithm compressed the crap out of what I was throwing at it. A little further analysis in the form of running FFT’s over the pseudo random data I thought I was generating showed clear short cycles of various lengths. How could this be with pseudo random numbers coming from a well researched and designed library? Well it turned out I was using the last bit of the random() number generator to get a pseudo random sequence of bits. Bad idea. Each bit in the 32 bit pseudo random numbers generated by this particular library had a distinct and quite short cycle. Same is true of pairs of bits etc. and you only get the claimed performance if you use the full bit range of the number.
Anyway very interesting post. Not sure if it’s relevant but just a warning to check how you get your random numbers before you even start any analysis based on them 😉
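The failure mode is easy to reproduce with a classic power-of-two-modulus LCG (a sketch using the old ANSI C rand() constants; with these, the lowest output bit simply alternates):

```python
# classic LCG with modulus 2**31: the lowest bit of successive
# outputs strictly alternates, a "random" bit with period two
def lcg(seed):
    while True:
        seed = (1103515245 * seed + 12345) % 2**31
        yield seed

g = lcg(42)
print([next(g) & 1 for _ in range(16)])  # strictly alternating 1, 0, 1, 0, ... for any seed
```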

george e. smith
Reply to  Peter
November 4, 2016 9:00 am

Well you can’t generate random numbers. The sequence of a set of random numbers is NOT a band limited signal. Two members of a random number set generated sequentially could differ by an amount as small as 10^-43, or as large as 10^43, and anything in between; or pick your own range.
Any constraints you put on the numbers renders them non random.
G

Hugs
Reply to  george e. smith
November 4, 2016 11:43 am

Cheese. If you pick a random number, with P = 1 its absolute value is larger than any given fixed real like 1E43.
But everybody knows 7 is a random number. 🙂
Thanks Willis for a simple and thought-provoking train of thought on the confidence interval.

John F. Hultquist
Reply to  Peter
November 4, 2016 10:15 am

Many years ago as a grad student …
Makes me wonder if I learned about RNGs before you or because of you.

Greg
November 4, 2016 2:13 am

In order to gain a better understanding of what’s going on, I plotted all of the periodograms. Then I calculated the mean and the range of the errors, and developed an equation for how much we can expect in the way of random cycles. Figure 5 shows that result.

Hi Willis. The aim of what you are doing is very sensible but I think you have a fundamental error in the way you generated the “random” data samples.

I generated a series of sine waves at all periods from one year to thousands of years.

White noise has a flat frequency spectrum; what you have created are data samples which are heavily loaded towards longer periods. This is what is reflected in your spectra.
Because what you have done (AFAICT) is to use equal period intervals. You cover three decades of period: 1-10; 10-100 and 100-1000. 90% of the energy of the spectrum is in that final decade.
What it seems you should do is the same thing with equal frequency intervals. Our thought processes are keyed to thinking in terms of time periods, but what you have created is not what would generally be regarded as a random signal. This kind of work is normally done in the frequency domain and sometimes plotted in periods because it is easier for us to think in years the “per years”. Especially when it is human scale periods like years, decades or 90 year periods.
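For comparison, spacing the components evenly in frequency (one random-phase, equal-amplitude sinusoid per Fourier bin) is the standard recipe for approximately white noise, and its periodogram comes out flat. A sketch:

```python
import numpy as np

rng = np.random.default_rng()

def random_phase_sum_flat(n):
    """Equal-amplitude, random-phase sinusoids spaced evenly in FREQUENCY:
    this sum is approximately white, with a flat periodogram."""
    t = np.arange(n)
    y = np.zeros(n)
    for k in range(1, n // 2 + 1):  # one component per Fourier bin
        y += np.sin(2.0 * np.pi * (k / n) * t + rng.uniform(0.0, 2.0 * np.pi))
    return y
```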

Greg
Reply to  Greg
November 4, 2016 2:15 am

easier for us to think in years than “per years”

stevefitzpatrick
Reply to  Greg
November 4, 2016 6:20 am

I agree, this is the biggest issue. Generating noise with a spectrum that is suitable for a ‘system’ is not simple if you don’t already understand the process(es) which control the ‘system’. My guess is that partially autocorrelated white noise will generate similar results … and also not necessarily be a good noise model.

george e. smith
Reply to  Greg
November 4, 2016 9:04 am

We live in the real world in the time domain. Frequency is a fictitious artifact of our mathematical minds. We invented it.
When I worked at Tektronix MYA, “frequency” was a swear word !
G

Greg
Reply to  george e. smith
November 4, 2016 11:08 am

Time is an invention of our minds too. What’s the difference?

Peter Sable
Reply to  george e. smith
November 4, 2016 11:36 am

When I worked at Tektronix MYA, “frequency” was a swear word !

That’s funny, I worked in T&M at Tektronix. Had no problem using the frequency word. It did generate some swearing, but I worked in test and the swearing came from developers when a device didn’t meet spec 🙂

Chimp
Reply to  george e. smith
November 4, 2016 11:36 am

Greg,
Time is a requisite dimension of spacetime in our universe, as presently imagined (so to speak) or observed by physics.
But then, you may be referring to another sort of time, or phenomenon called “time” in English.

george e. smith
Reply to  george e. smith
November 4, 2016 12:31 pm

Peter, It was when I worked in Engineering at Cedar Hills (I designed the sweep switching circuits for the Type 547.)
But that all changed, when Hewlett Packard designed their Sampling Scope which was a “Through Sampler”, and the sampling head was a machined piece of microwave plumbing; some ersatz “Magic Tee” circuit with three ports and some reason why you could put a signal in port A, and get an output from port B but with NO A signal coming out of port C; and all the verse vicea combinations.
HP was a very savvy microwave and RF environment in those days, but few of us at Tek could figure out how that sampling head worked; which is to say, we could not explain it in the time domain, so we all had egg over our faces.
It was not unlike not being able to give a classical Physics explanation for the photo-electric effect.
G

Reply to  george e. smith
November 4, 2016 12:54 pm

But that all changed, when Hewlett Packard designed their Sampling Scope which was a “Through Sampler”, and the sampling head was a machined piece of microwave plumbing; some ersatz “Magic Tee” circuit with three ports and some reason why you could put a signal in port A, and get an output from port B but with NO A signal coming out of port C; and all the verse vicea combinations.

As in a microwave switch?
In microwave striplines I did some work with switches and attenuators, clever little stuff, we’d use a PIN diode as an adjustable low-Z shunt, the microwave just sees the big change in impedance and reflects off it. I think they used one diode per switched port.

george e. smith
Reply to  george e. smith
November 4, 2016 12:49 pm

Peter, I forget what the number was for the dual channel vertical plugin for the 545B and 547 scope family was; something like 1A2 I think. It replaced the CA unit and it was a 50 MHz 5 mV per cm plug in.
I designed the entire thing from the electronics point of view. The 5 mV per cm spec was a misteak (screwup) on my part. It was supposed to be a 10 mV per division plug in at 50 MHz; but I forgot about the 2X gain from the single ended input to push pull output, so I ended up with 5 mV per cm instead. But as I said, I designed the entire plug in electronically from its Nuvistor input to the transistor outputs. Also did the sweeps and horizontal amplifier, and calibrator, besides the sweep switching. I think it was Bob Rullman (my boss) who designed the tunnel diode trigger circuits. I did one later for a new 150 MHz project that never flew, because I and a cohort working on it, left the company. I didn’t do the new 10X probes for the 545B / 547 program. That was done by Wim Velsink, who was in the Accessories/Peripherals Group that were probe experts.
Well we had the guts of it working, but they walked us out the door when we gave our notice, so we never got the chance to pass the designs we had going to someone else to take it over. I believe that delayed them at least three years getting to a 150 MHz real time scope.
TWTD !
G

Greg
Reply to  george e. smith
November 4, 2016 1:02 pm

” we could not explain it in the time domain”
“HP was a very savvy microwave and RF environment in those days” … probably because they were not afraid of using the F-word.

george e. smith
Reply to  george e. smith
November 4, 2016 2:18 pm

If time wasn’t real everything would have happened at once.
G

george e. smith
Reply to  george e. smith
November 4, 2016 2:22 pm

Real events happen in real time. Frequency has no meaning absent the concept of time, so frequency is a derived term not a fundamental one: as in so many per unit of time.
You can’t even talk of a frequency until you have established time itself.
G

Reply to  george e. smith
November 4, 2016 3:39 pm

The 547 was the first scope I ever used way back in high school, which was a big deal since they were still very expensive. I like my 3032B a whole lot better which, from a cost-performance perspective, is like the difference between an IBM 370 and my laptop. I will say that the insides of the older Tek scopes were like a work of art and solidly built with lots of gold and silver. I guess they could do this with a price on the order of a mid- to high-end car.

NorwegianSceptic
November 4, 2016 2:22 am

Willis: “95% of the cycles ” – shouldn’t it be “97%”….? (better change it to improve your chances for getting grant money) 😉

November 4, 2016 2:25 am

Willis. This is very interesting thanks. Did you start all the sin functions off at the same point in their cycles, or did you randomly assign the start points? Thanks (apologies if I missed that in the explanation).

Owen in GA
Reply to  Jay Willis
November 4, 2016 9:20 am

Jay,
You missed it. He said he randomized the phase angles of the sine waves.

Sandy In Limousin
November 4, 2016 2:27 am

Willis
Should
Look at Series 4 at the lower left of Figure 1
read
Look at Series 4 at the lower left of Figure 3
or have I misunderstood?

RCS
November 4, 2016 3:00 am

I don’t understand what you have done. You have summed sine waves with random frequency and shown that you get a non flat spectrum.
There are several comments.
I may have missed it, but have you generated sine/cosine waves with random phases? If not this would be an expected result.
The other comment is that in the theory of coherent averaging, it is well known that averaging is a filter that acts on the noise with a defined spectrum that is of the form sinc(frequency).

Greg
Reply to  RCS
November 4, 2016 3:28 am

See my comment above, I’ve explained why he did not get a flat spectrum.

Greg
Reply to  Greg
November 4, 2016 1:06 pm

That was not the impression I got from the article:

Now, at the left end of each of the graphs in Figure 4 we can see that the periodograms are accurate, showing all cycles as being the same small size.

that sounds like you were expecting to get a flat spectrum.

Greg
Reply to  RCS
November 4, 2016 4:20 am

BTW you are also confusing averaging and a running average. It is the latter which has a sinc fn freq response, which is why it is a crappy, distorting filter. Averaging is a means of resampling the data which effectively reduces random variability but can also produce aliasing in the presence of cyclic signals shorter than the averaging period.
Averaging is like doing a running mean then resampling, but the running mean is too short to act as a correct anti-aliasing filter and again is a crap filter anyway. A proper anti-aliasing filter should be used if there is any non-random, periodic variability in the data. DSP 101.

george e. smith
Reply to  Greg
November 4, 2016 9:11 am

Averaging is adding all of the members of the set that you already know, and dividing by the number of members in the set. That gives you a single number which not surprisingly, we call the average. And you can’t calculate it until ALL members of the set are known.
If you change any of the numbers it’s a different set, and may have a different average; but that is still a single exact number.
G

November 4, 2016 4:13 am

An interesting approach in signal to noise determination. People are inclined to find patterns in almost anything, like animals and faces in clouds, and finding a real, meaningful pattern can be difficult. The problem here is determining what a truly signal-free pattern should look like. Much of the criticism thus far seems to be that the synthetic random pattern generation process is not truly random.

Reply to  Tom Halla
November 4, 2016 7:26 am

Synthetic random is fit for statistical purposes by and large. And true randomness is not hard. Any thermal noise source will produce it.
What is more an issue is what spectral weighting is applied to it.
Low-pass filter it and it sounds like the thunder of traffic rumbling away.
It’s just noise.

Reply to  Leo Smith
November 4, 2016 9:02 am

Any thermal noise source will produce it.

Oh, this might make a great USB device, a true random number generator, could probably make them pretty cheaply too.
Think there’s a market for a true random generator?

Ed Bo
Reply to  Leo Smith
November 4, 2016 12:45 pm

Intel has been incorporating random number generators into their processors for many years now. Starting in 1999, they exploited thermal noise in an analog circuit to do this.
In more recent years, they have done it digitally by blatantly violating standard digital design rules that are intended to ensure deterministic results — they are using the indeterminism of their “bad” circuits to generate the random numbers.
You can find a good description here:
http://spectrum.ieee.org/computing/hardware/behind-intels-new-randomnumber-generator/0

November 4, 2016 4:16 am

Wow. Obviously one would want to see the additional analysis, but…
There are all kinds of proposed cycles in the climate that should be subjected to this type of analysis. An important one would be the 60 year AMO temperature cycle, the ice age cycles and then the Dark Age, MWP, little ice age cycles if possible.

Greg
Reply to  Bill Illis
November 4, 2016 4:22 am

Once Willis has repeated the process with some properly constructed random samples that would be a good idea. Hopefully he will be back a bit later to comment on the validity of his test samples.

Bloke down the pub
November 4, 2016 4:58 am

Figure 6. Periodograms of 100 datasets formed by adding together unvarying sine waves covering all periods up to the length of the dataset, in this case 3160. 316?

Clyde Spencer
Reply to  Willis Eschenbach
November 4, 2016 9:52 am

Willis,
Speaking of Fig. 6, I’m confused by the dotted line. You state that 95% of the data are supposed to be below the dotted line, and that is obvious for Fig. 5. However, ALL of the short-cycle data are ABOVE the dotted line in Fig. 6. Why are Figs. 5 and 6 different in this respect?

RH
November 4, 2016 5:35 am

Instead of using “unvarying sine waves”, how about varying them by modulating the amplitude with the sunspot data? I’d do it, but I ain’t smart enough.

graphicconception
November 4, 2016 7:12 am

My recommendation would be to read a book like Oran Brigham’s “Fast Fourier Transform”. I read the original. I don’t know what is in the more recent ones. Secondhand ones are quite cheap and the understanding it conveys about Fourier Transforms is enormous: https://www.amazon.com/Fast-Fourier-Transform-Introduction-Application/dp/013307496X/ref=pd_sbs_14_t_0?_encoding=UTF8&psc=1&refRID=HC1ET6784NWBTH1NJY61
When you have mastered the concept of combining waveforms either by addition or by convolution (e.g. filtering) then the results you get from a Fourier Transform will be less surprising.

Hugs
Reply to  Willis Eschenbach
November 4, 2016 11:52 am

No I think s/he means your stuff is too obvious but s/he is still lacking the skill to write.

graphicconception
Reply to  Willis Eschenbach
November 4, 2016 3:54 pm

I am trying to help. Sorry. You were surprised what happened when you added sine waves. If you create a perfect impulse in the time domain it will create components at all frequencies. The converse also works: an impulse in the frequency domain results in a steady level in the time domain. Note: You need to include negative frequencies, as suggested by the Fourier Integral.
Truncating time domain signals will result in smearing of the frequency spectrum. Sampling is like multiplying by an infinite series of impulses. So the spectrum of a sampled signal can be determined if you convolve the un-sampled spectrum with a set of impulses. The Fourier Transform of a set of impulses is another set of impulses. Much is known about Fourier Transforms.
Statements like: “Now, I’m not sure what I expected to find.” must pall with some of those who might have an idea – and, in my case, have known for over 30 years.
Generally, I like the articles posted here, but I can see why some experts do not.

graphicconception
Reply to  Willis Eschenbach
November 5, 2016 8:12 am

Ok Willis: Imagine I had posted a breathless article about a new discovery I had made about how a hollowed out log floats on water. I am now trying to harness the wind to add to effect of the two large serving spoons I am using for propulsion but the rig keeps toppling over and I can’t seem to steer it very well.
How impressed would you be with my efforts? Would you think I was genuinely making scientific progress on behalf of the world? The article may contain no errors at all.
I am now considering asking for a refund of half my AGU funding contribution. 😉

graphicconception
Reply to  Willis Eschenbach
November 5, 2016 4:59 pm

If you had not randomised the phase, what difference would that have made?
You see, I and many others, already know the answer.
I tried to offer a fellow sceptic some genuine help and, well, we can all see what happened.
It is by far the best book I have ever read on the subject of Fourier Transforms. Ironically, although the book’s subject is FFTs, that is its weakest part.
Bye bye.

Peter Sable
Reply to  graphicconception
November 4, 2016 12:09 pm

In other words, you think I’m wrong

Actually I think this means that one of the above cynical commenters is wrong.
Once you are used to the FT, summing sine waves sounds perfectly normal. The biggest thing people get wrong with FTs is forgetting that FT assumes a periodic signal forever, but you only have a window. You have to deal with the edge effects (usually by applying a window function). I’m pretty sure Willis fixed that issue several years ago.

graphicconception
Reply to  Peter Sable
November 5, 2016 9:28 am

“Yes, I use a Hanning window …”
Strictly, you either use a Hann window or a Hamming window. The two have different properties.
Of course, sometimes using a window function just makes matters worse. If your BoxCar window contains integer numbers of cycles then changing windows just gives rise to spectral components that are not really there.

graphicconception
Reply to  Peter Sable
November 5, 2016 4:52 pm

I thought you wanted factual errors to be pointed out? I would not want you to give your enemies any leverage by using imperfect terminology. I am glad you appreciated it.

November 4, 2016 7:17 am

I wonder why you keep on stirring this Gleissberg pot??
You would do well just to study the raw data from a few weather stations around you
but concentrate on daily minima and maxima,
and you will easily see that there is a recurring Gleissberg cycle in the data, ca. 86.5 years
do 4 regressions for each station, for 4 different time periods, to get the speed of warming/cooling in K/annum, now just plot the K/annum found against time and voila
a perfect curve – like somebody is throwing a ball….
quadratic
for the past 43.25 years
if you are lucky you might get a few stations with raw data going further back in time
e.g.
http://oi60.tinypic.com/2d7ja79.jpg

Reply to  henryp
November 4, 2016 7:24 am

note that I made an assumption of the wavelength being 88 years when it was in fact 86.5 years.
I have noted from the solar magnetic and other data that the switch upwards was made in 2014.

Reply to  Willis Eschenbach
November 4, 2016 9:08 am

Willis, I have min/max differences for all sorts of global area on source forge (including 1×1 cells of any cell with a weather station both annual and daily data)
http://sourceforge.net/projects/gsod-rpts/
And could likely make you anything you wanted.

Reply to  Willis Eschenbach
November 4, 2016 10:28 am

Willis
this is so simple that a high schooler or first year student should be able to do it..
you know what linear regression is
here is a link to yearly data, already averaged for you for each year, of a station, going back to 1973
http://www.tutiempo.net/clima/New_York_Kennedy_International_Airport/744860.htm
the second column is maxima, the third column is minima.
I prefer max and min because I find there is less noise in those data.
maxima is a very good proxy for direct heat received through the atmosphere.
so after copying and pasting to excel you can do a regression from 1973 to 2016, from 1980, from 1990 and from 2000. You are only interested in the derivative of the equation, i.e. the value before the x … that is the average speed of warming/cooling in K/yr over that period.
In this way you get 4 points in time where you know what the average speed of warming/cooling was. Now you could take a few stations with good data in areas around NY and together you could average the results for the area and set those results out against time – to get acceleration/deceleration just like the curve of a ball thrown in the air. If you do that you should end up with a good curve like I did here for South Africa:
[this one is for minima]
WITHIN the 4 points you can see the half cycle of The Gleissberg, i.e the past 46 or 47 years
you get it now?
btw
minima here always seem on the decline….

Reply to  Willis Eschenbach
November 4, 2016 2:14 pm

no tude, like I said, it is simple 1st year stats, i.e. mostly linear regressions
if you know the velocity of the thrown ball in m/s at 4 points in time from the point that it was thrown you should be able to work out its trajectory.
anyway,
you know the story of trying to bring the horse to the water….
But if you want to give it a go, here is a simple sampling procedure to give you a decent global result either for minima or maxima
https://wattsupwiththat.com/2016/11/01/uah-global-temperature-update-down-slightly-for-october/#comment-2331557
we are globally cooling, looking at the direction of my “crystal” ball…..

george e. smith
Reply to  henryp
November 4, 2016 9:17 am

Henry you continually push this graph above, which so far has not completed a single cycle, in fact it hasn’t covered enough ground to even say what the amplitude is, so why do you continue to insist it is cyclic?? (rather than a single transient event..)
G

November 4, 2016 7:31 am

A nice explanation of Linear Feedback Shift Register (LFSR) random number generators – the Galois method is particularly easy to implement in both hardware and software.
http://courses.cse.tamu.edu/csce680/walker/lfsr_table.pdf
It gives the proper taps for every length up to 2^786 and also values for 2^1024, 2^2048, and 2^4096
I needed one for a project I was doing. I’m using 2^31 as it is easy to use on 32 bit machines. One word is all I need. And the computation is fast. Basically – pick out the lowest bit (bit 0), use it for an XOR if the value is one, shift.
A paper on the use of LFSR in crypto – which is not what I’m doing. This is what I’m doing.
http://classicalvalues.com/2016/10/magic-80-ball/
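A minimal sketch of the Galois update (shown here with the textbook 16-bit maximal-length tap mask 0xB400; masks for other widths, including the 31-bit register I’m using, come from tables like the one linked above):

```python
def galois_lfsr(seed=0xACE1, taps=0xB400):
    """Galois LFSR: shift right; if the bit shifted out was 1, XOR in the taps."""
    state = seed
    while True:
        lsb = state & 1  # pick out the lowest bit (bit 0)
        state >>= 1      # shift
        if lsb:
            state ^= taps  # XOR in the tap mask
        yield lsb

g = galois_lfsr()
bits = [next(g) for _ in range(8)]  # pseudo-random bit stream, period 2**16 - 1
```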

Reply to  M Simon
November 4, 2016 7:35 am
Neo
Reply to  M Simon
November 4, 2016 8:55 am

2^31 is a favorite.
To have a maximal length sequence that is prime you need 2^(a prime).

TLMango
November 4, 2016 8:07 am

Willis,
Great subject! I agree, garbage in . . . garbage out.
On the other hand, if the samples were infinitely large and truly random
you would find that there is a universal set of frequencies. Ray Tomes
did some interesting work in this area. If I understand this correctly, the
strength of some natural cycles are compounded by their harmonics.
Like the natural 144 year cycle { 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 36, 48, 72, 144 } .
visit Ray at cycles research institute
Regarding the sun: Analysis has to be based on physical mechanism.

jorgekafkazar
Reply to  TLMango
November 4, 2016 10:24 am

The Sun is a somewhat oversized random number generator. If we knew the physical mechanism, we’d already have the answers we seek.

TLMango
Reply to  jorgekafkazar
November 4, 2016 12:47 pm

” If we knew the physical mechanism, we’d already have
the answers we seek. ”
Very true, if we’re strictly talking about the 11 year cycle.
The physical mechanism(s) I’m referring to are the:
1) sun’s acceleration/deceleration cycle of ~9.929 years
2) sun’s SSB 360 degree return to the same location on the ecliptic.
Jorge,
Please visit Weathercycles.wordpress
” Fibonacci and climate “

Curious George
November 4, 2016 8:11 am

Slightly off topic – you have data points for a period of 1 year, 2 years, etc. Nothing for 1.3 years. The analysis should be run for a continuous spectrum, not for a discrete spectrum with an arbitrary base period (1 year). I don’t believe that the sun is heavily influenced by the orbital period of one minor planet.

TLMango
Reply to  Curious George
November 4, 2016 12:21 pm

” The analysis should be run for a continuous spectrum, . . . ”
Thanks CG, you are absolutely correct. Ray’s harmonic theory does
not specify years as the term of measure. I will have to rethink how to
correctly use constructive resonance to make a point.

Greg
November 4, 2016 8:13 am

Figure 1 shows that the circa 90y peak, while not as high as the 11y peak, is also notably broader. While a less well defined and less pure harmonic cycle, it contains just as much signal as the 11y peak.
The log scale makes the periodogram equivalent to a frequency spectrum plotted backwards, so it gives a realistic comparison.

paqyfelyc
November 4, 2016 8:18 am

“I generated a series of sine waves …” ->
Greg is right.
Moreover, each sine wave is essentially a succession of +1/-1, or -1/+1 depending on the phase. Meaning you made a sort of “random walk”, very classical; this is why you got the spikes this procedure usually shows. Of course randomizing phase does matter a lot.

Greg
Reply to  paqyfelyc
November 4, 2016 8:50 am

Random walk is why W. observed that the results looked “life-like”.

Javier
November 4, 2016 8:45 am

Some things need to be repeated because the first time they are not read or understood.

The situation that you describe is exactly the same for the 11-year sunspot solar cycle. It also disappears between 1650 and 1700. It has irregular length so if you measure it between 1850 and 1900 it will be a 12 yr cycle and if you measure it between 1900 and 1950 it will be a 10 yr cycle. So let me ask you this, Willis. Is there a ~11 year cycle in the sunspot data? Well … yes and no. You see the problem? Whether we think it is due to a chaotic nature or to a poorly understood cause for solar variability, we must accept that this is the nature of solar cycles or abandon its study. Most people seem to believe in the 11 year cycle by conveniently ignoring the same reasons they use to not believe in the ~ 2400 yr cycle. Not very consistent.

Javier
https://wattsupwiththat.com/2016/10/17/the-cosmic-problem-with-rays/#comment-2321835
Let’s make it even clearer:
If you were analyzing sunspot data only between 1620 and 1780 your mathematical analysis would wrongly conclude that there is no 11-year sunspot cycle because it is absent from half of the record.
http://i.imgur.com/UqifLOa.png
If you were analyzing sunspot cycle length only between 1850 and 1960 you would wrongly conclude that there is no 11 yr cycle as a 12 yr cycle morphed into a 10 yr cycle.
http://i.imgur.com/yIC8NMt.png
So Willis, why do you believe in an 11-yr solar cycle? Just because, it being a shorter cycle, you have more data points and can afford the luxury of ignoring that solar cycles sometimes disappear, have tremendously irregular amplitude and very irregular duration?
You have not chosen a good subject for your analysis. It is leading you to the wrong conclusions. We already knew that solar cycles are irregular in presence, amplitude and duration. Your analysis contributes nothing. You set the conditions to reject them. Nothing new. I have read papers rejecting the existence of the Dansgaard-Oeschger cycle based on the same arguments.

Greg
Reply to  Javier
November 4, 2016 9:07 am

Agreed, the circa 90 y periodicity is about as significant as the circa 11 y one. It is broader, either because of poorer data quality or, more likely, because the underlying causes are perturbed.
Jupiter's orbital period averages 11.86 years but is quite variable because of the influence of the other planets. That will produce broadening in a frequency analysis of Jupiter's period.
Having said that, random walk or AR1 data can appear to have "variable periods" too, as can be seen in some of the panels of Willis's Figure 3.

Reply to  Javier
November 4, 2016 9:23 am

Javier: If you were analyzing sunspot data only between 1620 and 1780 your mathematical analysis would wrongly conclude that there is no 11-year sunspot cycle because it is absent from half of the record.
You make a case that there is no persistent process with an 11-year cycle, or in other words, that the time series is not stationary, even in the wide sense.
You have not chosen a good subject for your analysis. It is leading you to the wrong conclusions. We already knew that solar cycles are irregular in presence, amplitude and duration. Your analysis contributes nothing.
I disagree. Willis Eschenbach has shown that even with a series that is stationary by design, the results of the analyses of finite segments are even less reliable than what you or most of us have believed. Claims of cycles with large periods are especially incredible without lots more evidence.

Greg
Reply to  matthewrmarler
November 4, 2016 9:31 am

“You make a case that there is no persistent process with an 11-year cycle”
No, there are a bunch of frequencies around 11 y; at some times they will all add up to nothing when out of phase. If an FT shows a period, it is there persistently; it can't take a sabbatical. When adjacent frequencies are a bit ahead or behind, they swing the apparent peaks a little earlier or later, producing "variable periods". Both those features can be produced with perfectly constant harmonic functions of constant amplitude.

Reply to  Greg
November 4, 2016 9:36 am

No, there are a bunch of frequencies around 11 y; at some times they will all add up to nothing when out of phase. If an FT shows a period, it is there persistently; it can't take a sabbatical. When adjacent frequencies are a bit ahead or behind, they swing the apparent peaks a little earlier or later, producing "variable periods". Both those features can be produced with perfectly constant harmonic functions of constant amplitude.

AM and FM Mixers!!!!

Javier
Reply to  matthewrmarler
November 4, 2016 9:42 am

I disagree with your disagreement. What Willis's procedure does is raise the bar for accepting cycles. By doing that you make it more difficult to get false positives but easier to get false negatives. As solar cycles are very irregular, they are likely to fall into the latter category.

Reply to  matthewrmarler
November 4, 2016 10:56 am

Javier: By doing that you make it more difficult to get false positives but easier to get false negatives.
I made the same claim about Willis Eschenbach’s use of Bonferroni corrections, you may recall.
In both cases we are in a kind of limbo, with the data not clearly describable by any particular hypothesis (or equally well describable, statistically, by contradictory hypotheses). That being the case, should you not be cautious or modest when claiming to have a persistent oscillatory process with an estimable period?
If the phases or periods or both are constantly changing, is the observed process stationary?
Recall the underappreciated related problem with multiple time series: if there are two autocorrelated independent processes (that is, independent of each other), then they are quite likely to have a statistically significant cross-correlation in finite records. As with the presentation in this essay, the phenomenon is demonstrated with simulations, not derivations.
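That caveat is easy to demonstrate with a small simulation, a minimal R sketch; the AR coefficient of 0.9 and the record length of 100 are arbitrary choices:

```r
# Two INDEPENDENT AR(1) series routinely cross the naive significance
# bound for the correlation coefficient in finite records.
set.seed(3)
spurious <- replicate(1000, {
  a <- as.numeric(arima.sim(model = list(ar = 0.9), n = 100))
  b <- as.numeric(arima.sim(model = list(ar = 0.9), n = 100))
  abs(cor(a, b)) > 2 / sqrt(100)   # naive 95% bound for uncorrelated white noise
})
mean(spurious)   # typically several times the nominal 5% false-positive rate
```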

Javier
Reply to  matthewrmarler
November 4, 2016 12:54 pm

matthewrmarler,

That being the case, should you not be cautious or modest when claiming to have a persistent oscillatory process with an estimable period?

Absolutely, if you are only working with that data, as Willis does. However, to me a cycle is certain only when you can confirm its existence by a completely independent approach not based on the same data. This has been done for the ~ 2400 yr solar cycle based on its effect on climate since 1968, and it was reviewed by me recently here:
https://judithcurry.com/2016/09/20/impact-of-the-2400-yr-solar-cycle-on-climate-and-human-societies/

Reply to  matthewrmarler
November 4, 2016 3:27 pm

Greg: No, there are a bunch of frequencies around 11y, at some time they will all add up to nothing when out of phase.
Appearing/disappearing oscillations occur in EEG recordings during sleep, producing "sleep spindles" in the tracing. At unpredictable times a process with a frequency of 5 to 8 Hz starts with weak amplitude, increases to a peak, and then diminishes to zero. It is "reliable" in the sense that it occurs over and over, but it is not predictable. The appearing/disappearing 11 yr solar cycle has a different appearance, seeming to be there or not there, without a gradual onset and offset. Wouldn't the process that you describe have a more gradual increase and decrease in amplitude?

Javier
Reply to  matthewrmarler
November 5, 2016 4:36 am

You are describing the putative effect of a nominal 2,400 year signal, which actually varies from 1800 to 2800 years, on fifty years of temperature data?

Clearly I am not saying that. But you don't bother to read even what you criticize. I already said that you should have read my article before criticizing it. You still haven't done it, and therefore you do not know what I say. Here is the link in case you feel the inclination:
https://judithcurry.com/2016/09/20/impact-of-the-2400-yr-solar-cycle-on-climate-and-human-societies/
The signal varies in 2,450 ± 200 years, not more. In 1968, before the 14C data became available, Roger Bray had already described the cycle based on solar records (sunspots, naked-eye sunspots, aurorae), climatology and biology. In 1971 it was independently described based on 14C for the first time by Houtermans.
The existence of the ~ 2400 year cycle is confirmed, as Roger Bray noticed in 1968, by climatological proxies for the last 12,000 years. Read the article.

THERE IS NO REGULAR 2400 YEAR CYCLE IN THE ∆14C DATA. Doesn’t exist. There is only a wildly time-varying and weak cycle which is not statistically significant, as I’ve shown now by two separate and totally distinct methods which gave the same answer.

There is a ~ 2400 year cycle in the ∆14C data. It has been found multiple times since the data became available in 1971, by independent researchers using a variety of methods. It has been demonstrated by Usoskin et al., 2007 that the distribution of grand solar minima displays a "quasi-periodicity of 2000–2400 years, which is a well-known period in 14C data" that is statistically significant. That you don't find it doesn't mean much. As I have said, you have raised the bar for accepting cycles, and therefore irregular solar cycles give false negatives with your method. You have proven nothing. A plethora of climatological data confirms the existence of a climatic ~ 2400 year cycle, and a variety of solar data (∆14C data, solar grand minima distribution, ancient solar records) confirms the existence of a ~ 2400 year solar cycle. That you have found a way of increasing the requirements for accepting cycles and have focused on just one aspect of the issue only tells of your capacity for self-deception by ignoring all the rest of the evidence.

And even if such a cycle did exist, we have no reason to believe that it’s not just a “change within the carbon system itself”

I don’t think you realize the implications of what you say. The ~ 2400 yr periodicity is "the strongest feature in the ∆14C record" in the words of Sonett and Damon (1991). It is based on the distribution of solar grand minima, the most salient feature of the ∆14C record, to the point that the main ones have received names, like Homeric, Roman, Spörer, Maunder, etc. If what you say were true, then 14C would not be a proxy for solar activity, and almost everybody believes that is not the case.

Reply to  matthewrmarler
November 5, 2016 8:25 am

Javier: This has been done for the ~ 2400 yr solar cycle based on its effect on climate since 1968 and was reviewed by me recently here:
https://judithcurry.com/2016/09/20/impact-of-the-2400-yr-solar-cycle-on-climate-and-human-societies/

I read your post at Climate Etc. A claim of a 2400 year cycle based on data “since 1968” is hardly credible.

Javier
Reply to  matthewrmarler
November 5, 2016 11:38 am

matthewrmarler,

A claim of a 2400 year cycle based on data “since 1968” is hardly credible.

Then you should know that the claim is “since 1968”, while the data is “since 12,000 BP”.

george e. smith
Reply to  Javier
November 4, 2016 9:29 am

So you have 30 sequential data points in a 300 year time interval. Great; you can plot those on a scatter plot, and use that plot to deduce all kinds of things.
The moment you connect those dots with straight lines, you create a completely bogus graph of a non-band-limited function, and since it is not band limited, it is quite clearly undersampled, and ergo you cannot conclude ANYTHING about your graph. Remove the lines and you have some information. With the lines you have aliased noise, which means you can't even legitimately average that function. It is total garbage.
G

Javier
Reply to  george e. smith
November 4, 2016 9:35 am

That graph is from this article:
Solheim, J. E., Stordahl, K., & Humlum, O. (2012). The long sunspot cycle 23 predicts a significant temperature decrease in cycle 24. Journal of Atmospheric and Solar-Terrestrial Physics, 80, 267-284.
Direct your quibbles to the authors. I am only interested in the known variability in the 11-yr solar cycle length.

george e. smith
Reply to  george e. smith
November 4, 2016 2:47 pm

I don’t care where the graph is from or who created it. The whole problem with the gobbledegook that passes for statistical analysis in climate science is that the purveyors do not even understand the fundamentals of sampled data systems.
Of necessity we have to use sampling systems to study real time physical systems, for the simple reason that it is almost impossible to capture continuous real variable values in real time. Our measurement instruments simply are not fast enough to keep up with real continuous variables.
So we have to sample. But unless we sample in accordance with the rules regarding information theory and sampled data systems, we cannot legitimately claim to have knowledge of that real system.
If one had happened to bore a core drill hole 18 meters deep about 100 years or more ago in South Africa, one might have concluded that the entire crust of the earth contained a layer of type IIA diamond just 18 meters below the surface. When in actuality one had only stumbled upon the Cullinan Diamond.
Actually it was found sticking out of the wall of a tunnel that had already been dug underground at that 18 meters depth, but it could have turned up in a sampling core drill.
I don’t make the rules. But it sure sticks out like a sore thumb when I see instances of persons ignoring the rules, and believing what they think they have found. If that is your idea of quibbling, so be it. I make a habit of not getting between any person and a cliff they are planning to jump off.
G

Reply to  Willis Eschenbach
November 4, 2016 10:59 am

Good reply. Thanks again.

Javier
Reply to  Willis Eschenbach
November 4, 2016 2:33 pm

Willis,
Thank you for such an extensive answer.

1. When the claimed ~2400-year cycle exists in the ∆14C data, it is weak and far from obvious.

It’s funny that you would say that, because the cycle was described as soon as the data became available in 1971, and is described by Damon and Sonett as the strongest feature in the ∆14C record:
III. THE ~ 2300 YR PERIOD
Aside from the aforementioned long (secular?) variation, the strongest feature in the ∆14C record is the long period of ~ 2300 yr. This component of the ∆14C spectrum, in addition to the 208 yr period, was first reported by Houtermans (1971). Its source is enigmatic but probably not attributable to the geomagnetic dipole field, for no periodic geomagnetic dipole field change of the required amplitude has been detected.
Damon and Sonnet 1991, pg. 366

My bold. Damon, P. E., & Sonett, C. P. (1991). Solar and terrestrial components of the atmospheric C-14 variation spectrum. In The Sun in Time (Vol. 1, pp. 360-388).
That is a truly remarkable disconnect between what you say and what is published.

2. It only exists in about a quarter of the ∆14C data, and not in the rest. WHEN YOU HAVE TO THROW AWAY 3/4 OF YOUR DATA TO MAKE YOUR CASE, THAT’S CHERRY PICKING! And yes, I know you claim that the errors are greater in the earlier part … when is that not the case? That’s true of all paleo data, but the uncertainties are by no means large enough to hide or obscure a cycle of the amplitude that you are claiming.

Here you fail to distinguish between 14C data and solar activity data. To convert measured 14C data into estimated solar activity data, usually two different box models are required, one for the biosphere and another one for the oceans. As I have already stated, the final error becomes unacceptable when the data is older than 12,000 yr BP. It is panel b in this figure:
http://i.imgur.com/tIadNGH.png
http://www.clim-past.net/9/1879/2013/cp-9-1879-2013.pdf
I have already tried to explain this several times. Solar activity cannot be reconstructed with any degree of reliability for times older than 12,000 years BP. Prior to that we can talk about 14C, but not about solar activity.

3… Your purported 2400-year cycle in the ∆14C data, on the other hand, has a huge claimed variation of 1880 to 2810 years

That is not correct. The minima in the cycle have been identified to ± 200 years at the following dates:
B1. 0.4 kyr BP. Little Ice Age (LIA)
B2. 2.8 kyr BP. Sub-Boreal/Sub-Atlantic Minimum
B3. 5.2 kyr BP. Mid-Holocene Transition. Ötzi buried in ice. Start of Neoglacial period
B4. 7.7 kyr BP. Boreal/Atlantic transition and precipitation change
B5. 10.3 kyr BP. Early Holocene Boreal Oscillation
B6. 12.8 kyr BP. Younger Dryas cooling onset
The variability is therefore also about ± 10%

4. NONE of your claimed ~2400 year cycles are actually between 2150 and 2650 years in length. Not one.

Err, again not correct.
B1-B2: 2400 yr
B2-B3: 2400 yr
B3-B4: 2500 yr
B4-B5: 2600 yr
B5-B6: 2500 yr

5. We have observational data long enough to show the existence (or non-existence) of about 20 of your 2400-year cycles … but you’ve THROWN AWAY THE DATA ON ALL BUT FOUR OF THEM. You are basing your claim of a 2400-year cycle on FOUR EXAMPLES which range from 1880 to 2810 years in length, none of which are within 200 years of a 2400 year cycle … do you really believe that this is scientifically defensible?

And yet again this is not correct. All cycles are around 2450 ± 200 years. All of the minima in the cycle coincide with grand solar minima. All of them coincide with significant climate worsening periods of millennial scale as judged by multiple proxies. This is not only scientifically defensible. It has been scientifically defended multiple times and it is widely accepted within the paleoclimatology community.

if every time a cycle between 1800 years and 2900 years shows up you claim it is part of some hypothetical 2400-year solar cycle, what have we gained? It is USELESS in predicting future solar changes, so what good would your claim do us even if it were true?

The cycle is a lot less irregular than you purport, but yes, the next minimum of the ~ 2400 year solar Bray cycle is scheduled for around 4000 AD, in 2,000 years. In that sense it is useless for the next 30 generations. However, it helps us understand the climate of the past, like the Little Ice Age, it helps calm fears that a new LIA is in the making, and it helps us understand what rules the climate on the millennial scale. I would say that it is more useful than studying the glacial cycle, for example.

Next, you have not shown any observational connection between the ∆14C data and the sun.

I did.
http://i.imgur.com/kyi7mn0.png

You claim that the largest cycle you discuss in the record (7000 years) is NOT from the sun

I have not made such a claim. I have no idea about that 7000 year periodicity.

but instead is from processes internal to the carbon cycle … but if that is the case, surely the smaller cycles (2400 years, 1000 years) could easily also be from the same cause.

For the ~ 2400 yr cycle we are talking about grand solar minima, as the lows in the cycle are the highest 14C production periods of all. If 14C data represents solar activity, the 2400 yr cycle is real. As I said, the robustness of this feature in the data is what allowed it to be discovered from day one.

we have very little solid solar data for the “Maunder Minimum”. While some people claim that there were no sunspots during that time, others strongly dispute that claim

For the matter discussed it doesn't matter if the sunspots were depressed or absent. Solar cycles appear to interfere with each other, so there are times when the cycle studied becomes less conspicuous or even undetectable before reappearing later. To you that means they are not cycles. To me it means that we have to study more to understand what causes them and makes them behave in that way. The 11 year cycle is not the only one that does this. The ~ 208 yr de Vries cycle also disappears at regular intervals, and the ~ 1000 yr cycle was also inconspicuous between 5000-3000 yr BP.

Your uncited graph claiming to show solar cycle length goes back to before 1700, where we have no reliable sunspot records … I was wondering how they did that.

That graph is from NASA, at this page:
http://www.nasa.gov/mission_pages/sunearth/news/solarcycle-primer.html
with this link:
“http://www.nasa.gov/images/content/599325main_ssn_yearly.jpg”
I really cannot attest whether NASA is a trusted source or they make up their data.
Best Regards.

henryp
Reply to  Javier
November 4, 2016 2:48 pm

Good comment. Thx. But what now with the DO events @ 1470 yr?

Javier
Reply to  Willis Eschenbach
November 4, 2016 3:42 pm

But what now with the DO events @ 1470 yr?

If the question is for me, the D-O cycle is clearly not of solar origin, despite several articles that claim so. I am writing an article about that cycle during the last glacial period and will send it to Judith Curry in about a month or so to see if she likes it for her blog. The discussion about the existence of the D-O cycle during the Holocene is left for a future article, but I believe there is some evidence that supports a Holocene continuation of the D-O cycle.

Chimp
Reply to  Willis Eschenbach
November 4, 2016 4:50 pm

D/O in the Holocene, aka Bond Cycles, show the same frequency but typically only about 1/10 the amplitude (after the 8.2 ka cold snap), due to the relative warmth and equability of interglacials.

Javier
Reply to  Willis Eschenbach
November 4, 2016 6:52 pm

D/O in the Holocene, aka Bond Cycles, show the same frequency

That’s a common misunderstanding, but it is not what the evidence shows. The Bond cycle doesn’t really exist. As Gerard Bond himself showed, the ice-rafted debris is capturing mostly solar variability coming from the ~ 1000 yr Eddy and ~ 2400 yr Bray cycles. This figure is from Bond’s 2001 article with my red lines indicating the Eddy cycle:
http://www.euanmearns.com/wp-content/uploads/2016/05/Figure-3.png
Figure 3. Correspondence of the 12,000 years smoothed and detrended record of 14C with the averaged stacked ice rafted debris in North Atlantic sediments that is a proxy for iceberg discharges. Both series have peaks corresponding to the ~ 1000-year periodicity known as the Eddy cycle, most prominently during the first half of the Holocene, when this periodicity was dominant. Source: Bond et al., 2001.
http://euanmearns.com/periodicities-in-solar-variability-and-climate-change-a-simple-model/

Chimp
Reply to  Willis Eschenbach
November 4, 2016 8:28 pm

Javier,
Cold and warm events in the Holocene and previous interglacials, IMO, clearly show the same periodicity. Heinrich events are different, perhaps limited to a glacial world, but Bond and D/O cycles look to me the same, ie not requiring vast NH continental ice sheets for their formation, but simply oceanic circulation.
IMO the oceanic cycles have solar cycles behind them.
I could be wrong, of course.

Javier
Reply to  Willis Eschenbach
November 5, 2016 5:14 am

Chimp,
I’ve got news for you. Gerard Bond cheated with the periodicity of the Bond events.
http://i1039.photobucket.com/albums/a475/Knownuthing/Figure%2036_zpsdazvxzwc.png
By naming the first event zero, and skipping one event between 4 and 5 he got the number down to 8, so 12000 yr / 8 = 1500 yr periodicity.
In reality the number of peaks is (at least) 10, as you can clearly see, so the periodicity is 12000 yr / 10 = 1200 yr.
For the first half of the Holocene, Bond events take place at ~ 1000 year intervals, as the filter clearly shows. For the second part of the Holocene the picture is more complex; there are double peaks that suggest different factors affecting iceberg discharge. A periodicity of 1500 years can be defended for this part only if we consider double peaks as belonging to the same event. I doubt that assumption is prudent. It places the peak of the Bond event in a valley in ice rafting.
The evidence supports that Bond events are picking up every significant cooling during the Holocene whatever its origin, either solar, oceanic, or other. Therefore it is not a cycle but a cooling record. I do not believe in a Bond cycle, and therefore there cannot be a correspondence with the D-O cycle.

Chimp
Reply to  Willis Eschenbach
November 5, 2016 11:27 am

IMO a change in the periodicity of the cycles after the North American and Eurasian ice sheets melted doesn’t mean that there aren’t still cycles, analogous to those of the glacial interval (D/O).
But in any case, I don't see how anyone can deny the reality of climatic cycles. The Pleistocene glaciations occurred at ~40K year intervals, then ~100K. Ice houses, during which ice ages can occur if the continents are in amenable positions, happen regularly at ~150 million year periods.

Javier
Reply to  Willis Eschenbach
November 5, 2016 11:48 am

Chimp,

IMO a change in the periodicity of the cycles after the North American and Eurasian ice sheets melted doesn’t mean that there aren’t still cycles, analogous to those of the glacial interval (D/O).

So, it doesn’t matter that the nature of the cycle is different (D-O warming, Bond cooling), and the periodicity is different (D-O 1470 years, Bond 1000 years); they are still the same cycle. You are not asking much from the evidence before identifying a cycle.

I don’t see how anyone can deny the reality of climatic cycles. The Pleistocene glaciations occurred at ~40K year intervals, then ~100K.

Climate cycles are a reality, but our interpretation of the cycles is often incorrect. The 100 kyr cycle does not exist either. We have had 11 interglacials in the past 800,000 years:
https://judithcurry.com/2016/10/24/nature-unbound-i-the-glacial-cycle/

Chimp
Reply to  Willis Eschenbach
November 5, 2016 2:43 pm

Javier,
Each cycle has a warming and cooling element, a peak and a trough.
Two of the interglacials in the past 800 ky were double peaks, each of which really ought to count as one. They weren't separated by glacials. Interglacials can last from less than 10,000 years to more than 30 ky.

Javier
Reply to  Willis Eschenbach
November 6, 2016 6:07 am

Chimp,

Two interglacials in the past 800 ky were double peaks, which really ought to count as one.

No. They should not be counted as one. They are indistinguishable from interglacials in the Early Pleistocene. Should we count all those as pairs then? As in the case of the Bond periodicity, if you make the data fit your model instead of the opposite way, you run into the problem that Feynman described:
“The first principle is that you must not fool yourself – and you are the easiest person to fool.”

Chimp
Reply to  Willis Eschenbach
November 6, 2016 12:38 pm

Nope.
To count as different, they need to have a glacial in between them, whether of ~40K or ~100K years. Just a millennial dip of Dryas duration doesn’t count.

Javier
Reply to  Willis Eschenbach
November 7, 2016 2:17 am

To count as different, they need to have a glacial in between them

Just because you say so. The temperature gets colder the longer the glacial period. Two interglacials that are only separated by 41 kyr do not show a very cold glacial between them. I have already demonstrated that MIS 7c, MIS 15a, and MIS 15c, which you count as half interglacials, have the typical orbital configuration, temperature profile, and duration of any other interglacial.
http://i.imgur.com/eGcSIBV.png
This is how mistakes are made in science: by making unwarranted assumptions not supported by the data.

Chimp
Reply to  Willis Eschenbach
November 7, 2016 10:46 am

Javier,
It’s not my say so. By definition, an interglacial occurs in between glacials, ie intervals of ~41K or ~100K years.

Chimp
Reply to  Willis Eschenbach
November 7, 2016 10:48 am

So, you’d have to change the definition of “interglacial” to claim more than one interglacial between two glacials.

Javier
Reply to  Willis Eschenbach
November 8, 2016 2:48 am

Chimp,
Then your problem is with the definition of interglacial. Neither by temperature nor by duration can the period between MIS 15a and MIS 15c be rejected as a glacial period without also rejecting most glacial periods prior to the Mid-Pleistocene Transition:
http://i.imgur.com/H3WbQhy.png
So I hope you see the problem. In the middle of the data series you change the criteria for defining a glacial period and based on that you claim a different periodicity for the last part of the data series.
In reality, what the data says is that besides a progressive cooling, the longer the glacial period the colder it gets. Pretty obvious. If you define interglacials by very cold glacial maxima you reject perfectly good interglacials, accept that a huge cooling can take place in the middle of an interglacial, and cherry pick the glacial periods over the Quaternary Ice Age. All that self-deception because you want to match a 100 kyr periodicity that comes out of a frequency analysis with a hypothesis that eccentricity is in charge by adjusting the data to the model, instead of going the opposite way.
The data clearly says that obliquity, and not eccentricity, is in charge, that the 100-kyr periodicity is an artifact, and that we get interglacials usually every 82 kyr in the Late Pleistocene, but sometimes at 41 kyr or 123 kyr.
http://i.imgur.com/sjVlDo8.png
Ignore it at your peril.

Chimp
Reply to  Willis Eschenbach
November 8, 2016 10:50 am

Javier,
I’m not wedded to a 100K cycle. I’m OK with 41K and multiples thereof, averaging out to 100K. But that’s not the point.
Cold snaps during interglacials are more common than not. It has to do with the rate of deglaciation. Dryas-type events can occur later in an interglacial than they did in the Holocene. MIS 15 had a single interglacial, with a deeper than usual cold snap between its peaks. Look at the other interglacials. You’ll find some with dips almost as profound.
The cold snaps aren’t glacial intervals unless they last more than 40,000 years and allow the build up of NH ice sheets, so the two peaks aren’t two different interglacials, but the same one.

Javier
Reply to  Willis Eschenbach
November 9, 2016 1:38 pm

Chimp,

Cold snaps during interglacials are more common than not. It has to do with the rate of deglaciation. Dryas-type events can occur later in an interglacial than they did in the Holocene.

In the case of MIS 15, we are talking about a cooling that lasted 25,000 years (way longer than the average interglacial) and was a drop of 5 °C in EPICA, which is the usual cooling during a glaciation. I think you are mistaking a glaciation for a cold snap.

The cold snaps aren’t glacial intervals unless they last more than 40,000 years and allow the build up of NH ice sheets, so the two peaks aren’t two different interglacials, but the same one.

Again you are making assumptions that bias your results. If the obliquity cycle lasts 41 kyr, you should not expect a glacial period to last much more than 25,000 years if it only spans one cycle. This is the way it has happened for millions of years. Now you decide that a glaciation is not a glaciation unless it lasts over 40,000 years. Don't you see that this unwarranted assumption makes it impossible for you to find the truth about the glacial cycle?
Here you have a comparison between the average interglacial and MIS 15c, the first of the two interglacials that you defend are only one.
http://i.imgur.com/N8miip5.png
As you can see, MIS 15c is not significantly different in any way from the average interglacial, and the glaciation that comes afterwards cannot be described in any credible way as a cold snap similar to the Younger Dryas. MIS 15a comes about 5000 years after the graph ends and is also not significantly different from the average interglacial.
MIS 15c and MIS 15a are two different interglacials separated by a short glacial period. That is the way it used to be in the Early Pleistocene, and we still get some of those in the Late Pleistocene when eccentricity is very high. The 100 kyr cycle is an illusion maintained by making assumptions that are unsustainable in the light of the Early Pleistocene data.

November 4, 2016 9:09 am

Thank you for the essay. Well done, and focused, as usual.

November 4, 2016 9:13 am

Actually Willis, an interesting experiment would be not to build a random series, but to use the orbital periods of the magnetic planets, start generating sunspot records, and see if that's what you get.

shunyata
November 4, 2016 9:18 am

Willis: You can silence the criticisms of your construction of sine waves by simply using the original sample and shuffling the data points to get random samples. By construction, this approach removes any periodicity in the series – but you will still "detect" periodicity using periodograms. This approach has an added bonus of preserving any of the statistical artifacts that might be created by the underlying sampling distribution. For example, heavy tails can falsely show up as significant short-period cycles.
Great work! Look at Voit, “The Statistical Mechanics of Financial Markets” for even more of these types of tricks!
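A minimal R sketch of the shuffle test shunyata describes, using R's built-in `sunspot.year` series as a stand-in for the data; the 1000 shuffles and the use of the periodogram maximum are illustrative choices:

```r
# Shuffling destroys any real periodicity, so the shuffled periodograms
# show how large a peak pure chance can produce from the same values.
set.seed(42)
x   <- as.numeric(sunspot.year)        # built-in annual sunspot numbers
obs <- spec.pgram(x, plot = FALSE)     # periodogram of the real series

null_max <- replicate(1000, max(spec.pgram(sample(x), plot = FALSE)$spec))

threshold <- quantile(null_max, 0.95)  # 95% level of the shuffled maxima
obs$freq[obs$spec > threshold]         # frequencies that beat the null
```

Note that shuffling also destroys autocorrelation, so this null is stricter than a red-noise null.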

Greg
Reply to  shunyata
November 4, 2016 9:24 am

W said he used random phase, so I assume he used a random number fn to do that. If he just used the same fn to get the data he would have properly random data from a tested and verified algo. The danger with homespun methods is that you need to test them before relying on them, as Mickey Mann never ceases to demonstrate.

Greg
Reply to  Greg
November 4, 2016 11:03 am

Sorry if that was not clear. I meant in generating your random data. You used a random fn for the phase in simulating some random data. Why not just use the random fn to create the random data directly? It seemed like a rather odd way to create "random" data whilst already using a random number generator.

Now, at the left end of each of the graphs in Figure 4 we can see that the periodograms are accurate, showing all cycles as being the same small size. This is true up to about 100 cycles, or about 1/30 of the length of the dataset. But as we get further and further to the right, where we are looking at longer and longer cycles, we can see that we get larger and larger random peaks in the periodogram. These can be as large as forty or fifty percent of the total peak-to-peak range of the raw signal.

It seems that you were expecting white noise and saw the stronger long periods as anomalous "false" cycles that people often misread as being real in this kind of data.
Your Figure 5 shows your 95% level, which looks a lot like 1/x plotted backwards, i.e. it is "red noise", not white.
This makes sense with the way you loaded the data by using equal period intervals, as I explained.
I'm sure you have created random walk / AR1 / red noise test data before, so I was saying it would be better to use the tested algo for random numbers and create it directly rather than the novel method where you apparently got an unexpected result.

1sky1
Reply to  Greg
November 4, 2016 4:52 pm

Random number generators are usually far from truly random and their output is usually distributed uniformly. By utilizing that output only for the phase of the sinusoids, one takes the first step toward generating GAUSSIAN random data. The next step, alas not taken by Willis here, is to make the periods of the sinusoids incommensurable, thereby avoiding periodic repetitions. There’s a vast literature available on Gaussian time series.

Greg
Reply to  Greg
November 6, 2016 2:40 am

Why? Well, originally you said you wanted random numbers, so that would have been the obvious choice. Now it seems that you prefer a red-noise model, though you never explicitly stated that, nor why you think it is the best null hypothesis for SSN.
Not saying that is necessarily wrong, but if that is your intention you should say that is your model and why you chose it to decide what is significant for SSN.
If you want red noise you can just integrate the white noise from the random number generator. I suggested a cumulative integral but you seem to have missed it, which is why you are still asking why I think you could have used that fn to get the test data.
Instead of a thousand sine calculations and a thousand additions for each point, it would require one sum!
Since you did not realise that you were making red noise, you would not have realised this; otherwise I'm sure you would have gone straight for it.
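For illustration, a minimal sketch of that one-line shortcut, assuming a standard Gaussian generator:

```r
# Red noise is just the running sum of white noise, so one cumsum()
# replaces summing a thousand random-phase sine waves at every point.
set.seed(1)
white <- rnorm(3000)     # white noise from a tested generator
red   <- cumsum(white)   # its running sum: a random walk, i.e. "red" noise
spec.pgram(red, taper = 0, plot = FALSE)$spec[1:5]  # power piles up at long periods
```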

The CEEMD analysis shows that none of these signals are either regular or sustained … and this conclusion is supported by my analysis of the random data. The fluctuations that we are seeing are not distinguishable from random fluctuations.

You are making some very strident claims about what is/isn't significant without saying why you have chosen a particular noise distribution model, and apparently without realising which noise distribution you were using.
Now it may be true that these longer periods are not significant compared to a random walk, but so far you do not present any reason for assuming SSN is a random walk, other than remarking that many other natural [terrestrial] datasets look a bit like that too.
There may be some mileage in this but so far you have failed to justify the basis for your significance test.
Best Greg.

george e. smith
Reply to  shunyata
November 4, 2016 9:31 am

“trick” is the meaningful word used here.
G

November 4, 2016 9:34 am

So what. Baffle ’em, razzle-dazzle ’em, with snazzy pictures that answer the wrong questions if there even are related or meaningful questions.

Greg
November 4, 2016 9:46 am

This is because (as you point out) when you use equal frequency intervals, you get white noise … but natural datasets are about as far from white noise as you can get.
Since the equal period random data looks like natural data, and the periodograms of said data look like periodograms of natural data, I used them instead.
Regards,
w.

Yes, many natural processes like temperature time series are strongly auto-correlated due to the thermal inertia of the system. So to conclude that your analysis shows that the longer periods are not significant in SSN, you need to show that the mechanism producing them has similar properties, or say that IF whatever produces sunspots is a random walk, the longer "periods" may be illusions.
Perhaps a simpler method would be to do the FT on d/dt(SSN) and compare to white noise.
If you look at d/dt, you will find that the peak of solar activity is around 31 days. It does not look all that white either.
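A minimal sketch of that differencing check, using the built-in annual series rather than daily SSN (so the day-scale peaks Greg mentions will not appear):

```r
# First-difference the series, then inspect its spectrum: for a pure
# random walk the differenced series would be flat (white).
d <- diff(as.numeric(sunspot.year))   # discrete d/dt(SSN), annual data
p <- spec.pgram(d, taper = 0, plot = FALSE)
plot(p$freq, p$spec, type = "l", log = "y",
     xlab = "frequency (1/yr)", ylab = "power")  # the ~11-yr peak survives
```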

Greg
Reply to  Greg
November 4, 2016 10:04 am

Beg your pardon, there are big peaks around 13.5 days, which is because of our one-eyed observation point on the predominant circa 27 d equatorial rotation, though there is also a peak at 30.25 days.
Odd indeed that this is reminiscent of the lunar periods. Maybe the same thing is causing both.

Greg
Reply to  Greg
November 4, 2016 10:14 am

[Image: power spectrum of daily d/dt(SSN) for the continuous daily record since 1849; period in days.]

November 4, 2016 10:13 am

This analysis leaves me with one over-riding question: how many people sharpen their shovels? I can honestly say that I have never sharpened a shovel, nor have I ever considered it something that needs to be done. I must not do enough shoveling.
This has raised many questions for me. Do all shovels need to be sharpened? What about snow shovels? I live in Florida now, but my snow shovels used to be plastic. If you have a painted shovel head, do you need to repaint it after you sharpen it?
How do you tell that your shovel needs to be sharpened? Do you measure the radius of the blade, or do you worry more about the number of nicks? Do you use a grinding stone or a file?
And, of course, the obvious question: how much is your shovel blade wear affected by climate change?
I know what you're thinking: this is the internet. If I wanted to find all of these answers, I could (except for the climate change question; I couldn't find that one.)
https://www.sharpeningsupplies.com/Sharpening-a-Shovel-or-Spade-W90.aspx

Peter Sable
Reply to  lorcanbonda
November 4, 2016 12:14 pm

You typically sharpen shovels when you want to use them as a weapon 😉

November 4, 2016 10:14 am

Regarding the upper graph set of Figure 2, the CEEMD of sunspots 1700-2015: I see noticeable correlation between adjacent components, generally occurring intermittently but enough to cause correlation throughout the period. So, I think the CEEMD resolved the sunspot record into an excessive number of components by failing to give consideration (or sufficient consideration) to C3 (the ~11-year cycle) being modulated by longer period ones.
I think a more accurate representation would be:
* Removing C2, and distributing its content to add to C1 and C3, as appropriate for frequency
* Removing C4, and distributing its content to C3, C5 and C6, as appropriate for frequency
* After that, consolidating C5 and C6, and possibly also shifting some of the lower frequency content of C6 to C7
This changes the number of components from 7 to 4, and I think these 4 components would show a longer-period cycle as an identifiable cycle better than the 7 shown. The four components would then be:
* Short term noise,
* The ~11-year cycle whose amplitude and frequency is (as already shown) not constant,
* A ~50-90 year cycle with similarly non-constant amplitude, and frequency no more unsteady than shown in C4-C7 in the upper graph set of Figure 2, standing out better than C4-C6 in that graph set – this would be the Gleissberg cycle, and
* a longer term variation of questionable statistical significance if only the duration of 1700-2015 is considered, but likely correlating well with the Dalton, Maunder, Spörer and Wolf minima and the minimum before the Oort Minimum, with the Oort minimum being a couple decades late but otherwise correlating well. That would be the Suess cycle, with period averaging about 200 years.
One more thing: The Moeberg paleoclimate record shows a not-quite-constant bounciness with a period that is somewhat unsteady but mostly around 50-80 years.
It appears to me that the Dalton minimum and the ~1910 minimum were minima of the Gleissberg cycle, while the minimum of the Suess cycle was between them. The upcoming solar minimum appears to me as the Gleissberg and Suess cycles bottoming out nearly simultaneously – it could reach a Maunder-like depth, but for a much shorter time than the Maunder Minimum. The Maunder Minimum appears to me as the bottom of a ~1,000 year cycle that is not sinusoidal, but distorted towards a sawtooth wave by taking less time to rise and more time to fall than a sinewave – which is also the case with the ~11-year cycle.
A test that I propose for my hypothesis: Adding the next 7 decades to the 1700-onward sunspot data, and looking for increased support of my proposed 4 components including the Gleissberg cycle.
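A hypothetical sketch of the consolidation Donald proposes, assuming the CRAN package Rlibeemd (its ceemdan() function) and assuming its IMF columns line up with the post's C1-C7; plain column sums stand in for his frequency-weighted redistribution:

```r
# Crude column sums as a stand-in for redistributing content "as
# appropriate for frequency"; mode indices are assumed, not verified.
library(Rlibeemd)                 # CRAN package wrapping libeemd
imfs <- ceemdan(sunspot.year)     # columns: IMF 1 ... IMF k, Residual
noise      <- imfs[, 1] + imfs[, 2]          # C1 + C2: short-term noise
cycle11    <- imfs[, 3] + imfs[, 4]          # C3 + C4: the ~11-yr cycle
gleissberg <- imfs[, 5] + imfs[, 6]          # C5 + C6: ~50-90 yr band
secular    <- rowSums(imfs[, 7:ncol(imfs)])  # the rest: long-term variation
```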

Greg
Reply to  Donald L. Klipstein
November 4, 2016 10:45 am

There is quite a lot of overlap, since the bandpass filters never have a clean cut-off at either end. Some intermediate frequencies will be split between bands. That could give the impression they are not significant: for example, a periodicity which sits near a border is attenuated because only half of it appears on each side. Like all tools, it needs some appreciation of what it is doing and how to read it.

jorgekafkazar
Reply to  Donald L. Klipstein
November 4, 2016 11:55 am

“The Moeberg paleoclimate record shows a not-quite-constant bounciness with a period that is somewhat unsteady but mostly around 50-80 years.”–Don Klipstein
Moe Berg’s records show a lifetime average of .243, though he batted .311 with Reading. He knew 12 languages and was once sent to assassinate a German nuclear scientist at a physics conference. After hearing the man speak, Berg slipped away without drawing his gun, realizing the man’s approach wouldn’t soon result in a nuclear bomb.
But you’re probably talking about Moberg. He can hit, but he can’t run.
http://sabr.org/bioproj/person/e1e65b3b

Reply to  Willis Eschenbach
November 6, 2016 12:08 am

The 1700-2015 data alone does not support the Suess cycle to an extent of statistical significance, and I think the reason is that 1700-2015 covers less than two periods of it.
Meanwhile, I think the Figure 9 CEEMD graph set supports the existence of the Suess cycle (C6) and the Gleissberg cycle (C5, although something of that frequency briefly shows up in C4). The amplitude of these cycles is unsteady – like that of the ~11-year cycle. The frequency of these is non-constant, but the ~11-year cycle also does that a little. The ~1,000 year cycle shows well, but in C8 before the MWP and in C7 during and after. C7 may consist mostly of lower harmonics (especially the second harmonic) of the ~1,000-year cycle – which is non-sinusoidal, having a faster rise and slower fall than a sinewave (like the ~11-year cycle), according to a linked image, which also shows the Suess cycle. I also note here that the Figure 10 periodogram has a spike at a period close to the second harmonic (~500 years) of the ~1,000 year cycle that shows up well there. Other spikes show up as possibly the 3rd, 4th and 5th harmonics. However, the existence of these harmonics at slightly-off frequencies (slower for the 2nd, 3rd and 4th harmonics) is suspicious, though possibly explainable if they are stronger when they are running slower and weaker when running faster. Notably also, the ~11-year cycle has its amplitude unsteadiness and its period unsteadiness correlating with each other (slower when weakening).
I suspect CEEMD could be improved upon by comparing the phasing of one cycle with that of the next lower frequency or two, for detection of harmonics to assign to the lower frequency cycles that generated them. And notably, I think the ~11-year cycle has a very unsteady amplitude – so I think longer period solar cycles can do the same and still exist. As I said before, I think CEEMD resolves more components than actually exist by failing to detect relationships between one resolved component and another. (When I said that previously, I mentioned a component being modulated by a longer period one – which I still think is true – although now I want to add harmonic relationships to this.)
As for the de Vries cycle: that is another name for the Suess cycle. As for the Hallstatt cycle: I did not claim it exists.
As for the Hale cycle: I did not claim before now in this thread that it exists, although I believe it may have some physical effects on/over some regions of Earth. The Hale cycle is the sun's global magnetic field reversing polarity once per full cycle of the ~11-year cycle, so a full cycle of the sun's global magnetic field polarity is ~22 years, and a ~22-year cycle of physical effects on Earth would come from the interaction of the sun's periodically reversing magnetic field with Earth's (comparatively) more constant magnetic field.

Bruce Cobb
November 4, 2016 10:18 am

Sharpening shovels probably isn’t all that important. Unless you have an axe to grind.

Jeff Cagle
November 4, 2016 10:24 am

Willis: These results were surprising to me for several reasons. The first is their irregular, jagged, spiky nature. I’d figured that because these are the sum of smooth sine waves, the result would be at least smoothish as well … but not so at all.
You may be familiar with the Weierstrass Function (http://mathworld.wolfram.com/WeierstrassFunction.html) that is continuous everywhere and differentiable nowhere. Historically, this was an important moment in mathematical understanding because it showed that continuity in no way implies differentiability, not even in a weak sense.
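For reference (standard textbook material, not from the post), Weierstrass's construction takes $0 < a < 1$, $b$ a positive odd integer, and $ab > 1 + \tfrac{3}{2}\pi$, and sets

$$W(x) = \sum_{n=0}^{\infty} a^n \cos\left(b^n \pi x\right),$$

which, like the sums in this post, is built entirely from smooth waves yet is continuous everywhere and differentiable nowhere.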

John F. Hultquist
November 4, 2016 10:32 am

I hear patterns when the rain drops into the barrel, regardless of whether or not they are really there. At the Cedar River Watershed Education Center (east of Seattle) drums have been placed under a dozen drip points, so "the patterns" are fast and fantastic. Doing a DCDFT here would not be appreciated.
~~~~~~~~~~~~~~~
At 83 comments and counting, I hope you don’t mind if I comment on:
There are a number of lovely folks in this world who know how to use a shovel, but who have never sharpened a shovel.
We use USFS** sharpened shovels/scrapers as volunteers — building and fixing trails in the Cascades. [**Not a garden-store type shovel.] And, yes, we keep them sharp.

Paul Drahn
November 4, 2016 11:18 am

I wonder what you would show if you only used “new” sunspot count. Historically each time a spot rotated into view, it was counted as a “new” sunspot. I guess it would be difficult in the old days to know if the spot was “new” or was just long lasting.
And I do have to sharpen shovels because our soil is sand, stones and volcanic ash.

Resourceguy
November 4, 2016 11:35 am

So the Maunder Minimum and Dalton Minimum are either long lived urban legends or one-off events.

Barbara
November 4, 2016 11:35 am

Willis,
Very nice work and presentation. I am wondering if you have seen https://www.youtube.com/watch?v=l-E5y9piHNU – “All Climate Change is Natural” – Professor Carl-Otto Weiss. Given your interest and analysis, I think you might find his work interesting and complementary to your own. – Barbara

Resourceguy
November 4, 2016 11:37 am

I wonder if the smarter dinosaurs concluded this about major impactors during the late Cretaceous.

Editor
November 4, 2016 12:15 pm

Thanks, Willis, very interesting and perception-changing. Regarding the conversation about ∆14C: all the analyses are based on time. I would like to see an analysis based on sunspot cycle (SSC) number instead of time. By this, I mean that the peak of, say, cycle 1 would be 1, of cycle 2 would be 2, etc., and the 'middle' of the trough between cycles 1 and 2 would be 1.5, etc. Of course there would be some arbitrary decisions and approximations to make, but the end result would be (I think) to make much clearer the relationship (or non-relationship) between ∆14C and SSC. The same base used for other values – temperature for example – would also give a clearer picture of the relationship (or non-relationship) to SSC. In particular, a Fourier analysis of SSC on this base would show a much stronger cycle at frequency 1 (per cycle) than the time-based Fourier analysis shows at a period of ~11 years (and that's the point). I suspect that analysing various measures on this base might also throw up some unexpected 'cycles' which might turn out to be interesting/useful.
[Yes, I could do it myself rather than just ask you to do it, but I have less skill, fewer tools, less data, and, being in the middle of moving house, less time. The value that you would add would be very high.]
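A rough sketch of how that re-basing might be done in R; the peak dates below are hypothetical placeholders, not real solar-maximum years:

```r
# Map calendar years onto fractional cycle number by linear interpolation
# between solar-maximum dates (placeholder values, for illustration only).
peaks <- c(1750.3, 1761.5, 1769.7)
years <- seq(1750, 1770, by = 0.5)
cycle_no <- approx(x = peaks, y = seq_along(peaks), xout = years, rule = 2)$y
# troughs midway between peaks land near half-integer cycle numbers
```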

November 4, 2016 12:27 pm

@Javier
perhaps you can help me
where did the 2400 yr cycle come from?
last I checked it was clear which of the longer solar cycles [longer than the 11-22 yr sc]
were relevant…
http://www.nonlin-processes-geophys.net/17/585/2010/npg-17-585-2010.html
i.e. do you agree that the DO cycle of 1470 years is not relevant?

Javier
Reply to  HenryP
November 4, 2016 3:58 pm

HenryP,

where did the 2400 yr cycle come from?

I don’t understand your question.

do you agree that the DO cycle of 1470 years is not relevant?

I don’t believe the evidence supports that the D-O cycle is of solar origin. But of course the D-O cycle is relevant. It is the most drastic, abundant, abrupt climate change in the geological record.

tadchem
November 4, 2016 12:55 pm

A little-appreciated relation in trigonometry is the sum/difference relation:
$$\sin x \pm \sin y = 2\,\sin\!\left(\frac{x \pm y}{2}\right)\cos\!\left(\frac{x \mp y}{2}\right)$$
Implication: adding two sine waves of nearby frequencies produces a carrier wave at the average frequency, (x + y)/2, modulated by an envelope at half the difference frequency, (x − y)/2.
So, for example, a wave with a 10-year period added to a wave with an 11-year period produces an apparent wave with a period of about 10.5 years, amplitude-modulated on a 220-year envelope (a beat every 110 years).
When you create canonical 'white noise' as you have, every adjacent pair of components (periods one year apart in your construction, from A years up to N years) beats in this way, producing carriers at intermediate periods and envelopes at very long periods.
The relative phases will determine whether the addition is constructive or destructive.
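A minimal R sketch of the beat effect described above, using the 10- and 11-year example periods:

```r
# Two equal-amplitude sine waves with periods of 10 and 11 years:
t <- seq(0, 440, by = 0.1)   # years
y <- sin(2 * pi * t / 10) + sin(2 * pi * t / 11)
# The sum oscillates at a ~10.5-year carrier period while its amplitude
# swells and collapses on a 220-year envelope (a beat every 110 years).
plot(t, y, type = "l", xlab = "years", ylab = "sum of the two waves")
```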

Pamela Gray
November 4, 2016 1:26 pm

Reminds me of the search for meaning in brain waves, a notoriously random thing. Evoked signals, which are often below the amplitude of the random brainwaves, can be calculated by mathematically subtracting the peaks and troughs of the electrical potentials picked up on the surface of the head (after scratching it up a bit, buttering it with jelly, then squishing an electrode cup into the jelly). Over time you get a straightish line out of which the evoked signals rise (have the subject listen to something, like a series of white-noise or frequency-centered pings), all done in real time.

TLMango
November 4, 2016 2:03 pm

The 2402 year cycle is known as the Charvatova cycle (Ivanka Charvatova).
She found that there was disorder in the SSB orbit of the sun. The sun carves out a three-leaf-clover shape every ~59.5779 years (tri-synodic). This trefoil configuration gets disordered in ~2402 year cycles.
Also:
The sun returns to the same position on the ecliptic every ~2649.63 years.
This is a 360 degree rotation of the sun's outwardly directed acceleration.
The earth is caught up between a large moon and an accelerating sun.
So we get a large tug from the sun every ~2649.63 years.
Our axial precession cycle is a simple beat created by these two cycles:
2649.63 x 2402.616 / ( 2649.63 – 2402.616 ) = 25772 years
Please visit Weathercycles.wordpress
” Fibonacci and climate “
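The beat-period arithmetic above checks out numerically (a one-line R check):

```r
beat_period <- function(p1, p2) p1 * p2 / (p1 - p2)
beat_period(2649.63, 2402.616)   # ~25772, the quoted precession figure
```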

November 4, 2016 2:34 pm

Conclusions? Well, my conclusion is that while it is possible that the ~ 88-year “Gleissberg cycle” in the sunspots, and the ~1,000-year cycle and the ~ 2400-year cycle in the ∆14C data may be real, solid, and persistent, I find no support for those claims in the data that we have at hand
says Willis, and for once I tend to agree with him.

Peter Sable
November 4, 2016 3:21 pm

This complements the challenge someone posted a while back of finding signals out of 1000 data sets, some of which have trends.

I believe you are referring to this: http://www.informath.org/Contest1000.htm
Amusing anecdote about this challenge, and a warning about methods:
I tried to solve this challenge and one of the things I did was look for a weakness in how the challenger generated his artificial data. One of the things I did was attempt to recreate the challenger’s methods for creating the artificial data.
I used the standard off-the-shelf Octave red/pink noise generators that involve generating Gaussian noise, performing a DFT (via FFT), shaping the noise with a settable beta, and then doing the inverse DFT back to the time domain. I did this to see if there was some pattern I could discern in this that is somehow different from naturally generated signals.
It turns out that with this method, the phase versus frequency graph looks completely different from a natural signal. In nature, the phase looks contiguous and forms a spiral when you graph imaginary versus real from the DFT. With the above method, imaginary versus real looks pretty random.
I got all excited that I'd solved the challenge, because it was 3am and I'd forgotten I was working with my artificial data, not the challenger's. Doh! I checked, and the challenger's data had the nice spiral of imaginary versus real.
It turns out if I generate the random sequences in the time domain using the standard AR equation I get the nice spiral shape, just like nature. I suspect that’s what the challenge author did.
So IMHO the DFT/iDFT method for producing noise, and possibly Willis’ method, will have subtle differences in phase relationships of the signal. I don’t know how meaningful that is to Willis’ Monte Carlo method described above, but I already suggested the literature uses the AR method to generate the proper spectrum for confidence levels, so this anecdote is another reason to use the AR method – it acts more like a real world auto-correlation and not a math model that emulates it in some fashion.
best regards,
Peter
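A minimal sketch of the time-domain AR(1) generation Peter contrasts with the DFT/iDFT shaping method; phi = 0.9 is just an assumed example coefficient, not a fitted value:

```r
# AR(1) noise generated directly in the time domain.
set.seed(7)
n   <- 5000
phi <- 0.9
x <- as.numeric(arima.sim(model = list(ar = phi), n = n))  # library routine
# or the explicit recursion x[t] = phi * x[t-1] + e[t]:
e <- rnorm(n)
y <- numeric(n)
for (t in 2:n) y[t] <- phi * y[t - 1] + e[t]
```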

November 4, 2016 4:10 pm

Willis,
your calculations are convincing, and if someone is in doubt, it is up to him to follow your calculations and fill in his own values to prove that the periodicity values of the 88, 208, 1000 and 2400 yr cycles are NOT ARTIFACTS of periodicity-analysis deficiencies. It is clear that those periodogram analyses themselves produce those long-year periodicities through their inherent defects. I would appreciate a final word from McKitrick or another high-calibre statistician as a second opinion; this question is too important to leave open. In the meantime, I will give all points to Willis for having brought it up.
By the way, the always-quoted 14C values are INPUT measured on Earth within the troposphere and are NOT measured OUTPUT VARIATIONS of the Sun. All those people who dedicate themselves to the 14C input on the planet declare this Earth INPUT to be a solar OUTPUT and hide that Earth orbital variations are the REAL cause of the 14C variations.
Willis, you should also look into a further matter: cycles with growing amplitudes and periodicities. There is one which grows by 6.95 years per cycle all along the Holocene. For literature, take Part 1 of the Holocene cycle analysis at http://www.knowledgeminer.eu/climate-papers.html, the Climate Pattern Recognition Analysis, Part 1, using Alley, R.B. 2000 GISP2 as the data series.
The growing cycle could be resolved, knowing the exact growth of 6.95 years and its commencement date. Regards JS

Peter Sable
Reply to  Willis Eschenbach
November 4, 2016 9:10 pm

Willis:
You will cause me to switch from Octave (Matlab) to R finally…
What did you get for tau* (the characteristic time scale of the AR1 process) for the 14C data?
I’m surprised it’s as big as the frequency-domain plot seems to show.
best regards,
Peter
* it’s funny, I’ve seen the parameter for AR1 called alpha, beta, and tau in the literature. I bet there are more…
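For reference, one common convention relates the AR(1) lag-one coefficient phi to the characteristic time scale: tau = -dt / ln(phi), for sampling interval dt. A minimal R sketch under that assumption (x here is a synthetic stand-in, not the 14C data):

```r
# Fit an AR(1) model and convert the lag-one coefficient to a characteristic
# (e-folding) time scale tau, in units of the sampling interval.
set.seed(1)
x   <- arima.sim(model = list(ar = 0.8), n = 500)  # stand-in series
fit <- ar(x, order.max = 1, aic = FALSE)           # Yule-Walker AR(1) fit
phi <- fit$ar                                      # estimated lag-one coefficient
dt  <- 1                                           # sampling interval (e.g. years)
tau <- -dt / log(phi)                              # characteristic time scale
tau                                                # roughly 4.5 steps for phi near 0.8
```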

Peter Sable
Reply to  Willis Eschenbach
November 4, 2016 9:28 pm

BTW this reminds me that AR1 modeling only works on stationary data. Did you detrend the data before running the analysis?
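A minimal sketch of the detrend-then-fit step in question (the series and its trend here are synthetic stand-ins, not the 14C data):

```r
# Remove a linear trend so the AR(1) fit sees approximately stationary data.
set.seed(2)
t    <- 1:500
raw  <- 0.01 * t + as.numeric(arima.sim(model = list(ar = 0.8), n = 500))
detr <- residuals(lm(raw ~ t))             # linear detrend
fit  <- ar(detr, order.max = 1, aic = FALSE)
fit$ar                                     # lag-one coefficient after detrending
```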

Reply to  Willis Eschenbach
November 5, 2016 1:53 am

You are approaching Saint Svalgaard status.

Reply to  Steven Mosher
November 5, 2016 5:25 pm

Oh sorry, I’ve often called Lief a Saint for putting up with idiots. Since like 2007.
Anyway.
You of course had a choice to take it as a compliment. Odd that you didn’t. Oh well, water off a duck’s back.
As for mind-altering substances: nope, never touch the stuff.
[you really should learn to spell Leif’s name correctly -mod]

Reply to  Steven Mosher
November 6, 2016 2:10 pm

It’s a running joke since 2007 that I will misspell his name.
But since you are a mod, if it means that much to you (he didn’t care), you always have the option
of editing it. In fact it would take you less time than being pedantic.

Reply to  Willis Eschenbach
November 6, 2016 12:19 am

What is the time period covered by the graph using sunspot data? 315 years? Having only one minimum of the Suess/de Vries cycle, whose minima may be more discernible than its maxima? And I think the Gleissberg cycle is unsteady, but it exists. As I said before, I propose redoing this graph after another period of the Gleissberg cycle; I expect that to show it coming closer to counting as statistically significant.

Editor
November 5, 2016 5:37 am

You say “…neither case are we seeing any kind of true underlying cyclicity” when referring to the ~1,000-year Eddy cycle or the 2100-2700-year Bray (Hallstatt) cycle. Later you amend this with “…while it is possible that the 88-year “Gleissberg cycle” in the sunspots, and the ~1,000 year cycle and the ~ 2400-year cycle in the ∆14C data may be real, solid, and persistent, I find no support for those claims in the data that we have at hand.”
Just looking at 14C data in isolation while looking for paleotemperatures is problematic. 14C paleotemperatures depend on an accurate model of the total atmospheric carbon mass, which varies a lot over time. Using this technique is virtually impossible before 11,000 BP due to the Younger Dryas cooling and warming, the previous ice period, etc.; these were periods of huge changes in the total atmospheric carbon mass. Since 11,000 BP there are some serious changes in a few well-documented colder periods, like the 8.2 kyr event, but the carbon mass is relatively stable. Therefore the earlier data was removed (correctly IMO), which addresses your comment about having “thrown away three-quarters of the data.”
14C temperature estimates are somewhat circular, since what you are measuring (temperature) affects the total carbon you use in your calculation. 14C temperatures should not be used alone for this reason, which I believe is what your Fourier analysis shows. 14C is commonly combined with 10Be because, while they each have data issues, the problems are in complementary areas. They correlate well (R² = 0.8), and analyzed together they each offset weaknesses in the other.
See Roth and Joos, Clim. Past, 2013. For an analysis of 14C error see their Appendix A; the radiocarbon errors are high. Their Figure 1 is very instructive as well:
http://meetingorganizer.copernicus.org/3ICESM/3ICESM-405.pdf
Once we add in the worldwide glacier, paleontological, and ice-raft data, we can conclusively find the Eddy cycle (~1,000 years) and the Hallstatt cycle (2100-2700 years). See Bond et al., 1997 (Science); Debret et al., 2007 (Clim. Past). The ice core and glacial records, plus the ice-raft data, are the most conclusive evidence for most people.
So your conclusion that 14C temperature records by themselves have problems is true; but the cycles you speak of are based on much more than 14C data. Even if you ignored the 14C data, the cycles would still be there. All paleo-temperature proxies fall apart statistically in isolation, which is a shame, but this should not stop us from using them. They are all we have.

Reply to  Andy May
November 5, 2016 6:59 am

I also found this in my notes:
http://iie.fing.edu.uy/simsee/biblioteca/CICLO_SOLAR_PeristykhDamon03-Gleissbergin14C.pdf
and it seems to confirm what you and Javier are saying.
I currently determine the Gleissberg at 86.5 years, but this may have to do with the current planetary configuration. I can also confirm that there is a correlation of the Gleissberg cycle with the positions of Saturn and Uranus.
The positions of the smaller planets apparently also affect the length of the shorter-term solar cycles.
In theory, I think if you put a program to it, you could look at the positions of all the planets of the solar system and, as suggested, also at the position of the Sun itself, and you should be able to predict solar activity.
It works just like a clock.
Amazing.

Editor
Reply to  Willis Eschenbach
November 5, 2016 2:53 pm

Whew! I’ll let my comment speak for itself, but I will address some of your comments that I think are in error or are misinterpretations of what I said. First, it is my opinion (and the opinion of many others) that prior to the end of the Younger Dryas, using 14C data for a paleotemperature estimate is not likely to be accurate. 14C paleo-temperature determination is a work in progress and not very advanced, IMO. I don’t think we know with any precision how much carbon was in the atmosphere during the Younger Dryas cooling. Just my opinion.
I can assure you that Javier does not think 14C data alone is enough to establish the Hallstatt (Bray) cycle and he never said that in any of his posts as far as I can recall. You’ve put those words in his mouth. I can’t speak for Clilverd on the subject, but I think he was using the Hallstatt cycle to show his 14C methodology worked, not the other way round.
Your statement that there is not enough evidence in the 14C data to support the long-term solar cycles is true; I doubt anyone would argue the point with you. But I posted my comment because I didn’t want any of your readers to turn that around and say there isn’t a Hallstatt or Eddy cycle. They do exist; that has been well established for over 40 years. Dr. Nicola Scafetta has a very nice new paper in Earth Science Reviews on the topic. Link: http://ac.els-cdn.com/S0012825216301453/1-s2.0-S0012825216301453-main.pdf?_tid=fb1280ca-a39d-11e6-96ef-00000aacb362&acdnat=1478381139_bc8e306a0a39538609a5b70be9ad2cae
See his Figure 10; both the Eddy cycle and the Hallstatt cycle meet the 95% confidence level according to his analysis. As for Bond’s 1997 paper, on page 1 (magazine page 1257) he identifies a 2,800-year cycle that I take to be the Hallstatt cycle. If you want mathematical precision, you are in the wrong field. Geology, paleontology, and paleo-climate studies all involve using very uncertain indicators. We hopefully get more precise with time, but it’s a struggle.