Guest Post by Willis Eschenbach
There are a number of lovely folks in this world who know how to use a shovel, but who have never sharpened a shovel. I’m not one of them. I like to keep my tools sharp and to understand their oddities. So I periodically think up and run new tests of some of the tools that I use.
Now, a while ago I invented a variant of Fourier analysis that I called the “Slow Fourier Transform”. I found out later I wasn’t the first person to invent it—Tamino pointed out that it was first invented thirty years ago, and that it is actually called the “Date-Compensated Discrete Fourier Transform”, or DCDFT (Ferraz-Mello, S. 1981, Astron. J., 86, 619). Figure 1 below shows an example of the DCDFT method in use, a periodogram of the cycles in the sunspots:
Figure 1. Periodogram, annual sunspots. The horizontal axis shows length of possible cycles from one to 100 years, and the vertical axis shows the strength of those cycles.
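For those who want to experiment, the heart of the method is just a least-squares fit of a sine and a cosine at each trial period. Here is a minimal sketch of that core idea in Python/numpy (the full DCDFT also includes a date-compensation term for unevenly sampled data, omitted here for brevity):

    import numpy as np

    def ls_periodogram(t, y, periods):
        # Least-squares periodogram: at each trial period, fit a sine and
        # a cosine by least squares and report the amplitude of the fitted
        # sinusoid. This is the core of the DCDFT idea; the full method
        # adds a constant term to compensate for uneven sampling.
        y = y - np.mean(y)
        amps = []
        for p in periods:
            w = 2.0 * np.pi / p
            A = np.column_stack([np.cos(w * t), np.sin(w * t)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            amps.append(np.hypot(coef[0], coef[1]))
        return np.array(amps)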
Now, in Figure 1 we can see the familiar 11-year sunspot cycle in the data, along with somewhat weaker sunspot cycles of 10 and 12 years. It also APPEARS that we can see the claimed ~90-year “Gleissberg Cycle”.
However, a deeper examination of the sunspot data shows that the “Gleissberg Cycle” only exists in the first half of the data, and even there it only exists for a couple of cycles. Figure 2 shows a Complete Ensemble Empirical Mode Decomposition of the same sunspot data. The upper graph in Figure 2 shows the underlying empirical modes, and the lower graph shows their frequency:
The ~90-year purported “Gleissberg cycle” is shown in empirical mode C6. In the lower graph in Figure 2, we can see that after the 11-year cycles, C6 has the second-strongest cycle in the data … but in the upper graph, we can see that whatever signal exists, it is actually fairly short-lived, dying out after only a couple of cycles.
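(A note for anyone who wants to reproduce this kind of decomposition: one freely available implementation is the CEEMDAN routine in the Python PyEMD package, installed as “EMD-signal”. This is not necessarily the implementation used for the figures here, just a minimal sketch; the data file name is a placeholder.)

    import numpy as np
    from PyEMD import CEEMDAN         # pip install EMD-signal

    ssn = np.loadtxt("sunspots.txt")  # hypothetical file of annual sunspot numbers
    ceemdan = CEEMDAN(trials=100)     # number of added-noise realizations
    modes = ceemdan(ssn)              # rows are the empirical modes C1, C2, ...
    print(modes.shape)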
And that means that my periodogram shown in Figure 1 was misleading me—the peak at around 90 years was not actually significant. It only lasts a couple of cycles.
So I wanted to sharpen my periodogram tool so it would indicate which cycles are statistically significant. In the past I’ve tested my method by looking at periodograms of square waves, individual sine waves, combinations of sine waves, and the like.
This time I thought “What I want to test next is something totally featureless, something like my imagination of the Cosmic Background Radiation. That would help me distinguish random noise from significant cycles.”
Well, of course I don’t have the CBR to test my periodograms with, so here was my plan for generating some random noise.
I generated a series of sine waves at all periods from one year to thousands of years. They all had the same amplitude. Next, I randomized their phases, meaning that each one started at a random point in its cycle. I figured nothing could be more generic and bland than the sum of a bunch of sine waves of equal strength at all possible periods. Then I added them all together, and plotted the result.
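Here is a minimal sketch of that construction in Python/numpy: one equal-amplitude sine wave per integer period, each with a random phase. (The exact period range behind the figures isn’t specified above, so the range below is illustrative.)

    import numpy as np

    rng = np.random.default_rng()
    n = 12800                  # record length, as in Figure 4
    t = np.arange(n)

    signal = np.zeros(n)
    for p in range(2, n + 1):  # one sine wave per integer period
        phase = rng.uniform(0.0, 2.0 * np.pi)      # random starting point
        signal += np.sin(2.0 * np.pi * t / p + phase)
    # equal amplitudes with equal *period* spacing -- not white noise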
Now, I’m not sure what I expected to find. Something like a hum, something kind of soothing. Or perhaps like on the ocean, when you have small wind-ripples on top of a chop on top of a swell with a bigger swell underneath that. A harmony-of-the-spheres kind of thing is what I thought I’d get, complex but smooth like some mathematical Bee Gees harmony … however, this was not the case at all. Figure 3 below shows a sample of one of the many different results I’ve generated by adding together thousands of sine waves of identical amplitude covering all the periods.
Figure 3. Ten examples of what you get when you add together thousands of sine waves evenly blanketing an entire range of frequencies.
These results were surprising to me for several reasons. The first is their irregular, jagged, spiky nature. I’d figured that because these are the sum of smooth sine waves, the result would be at least smoothish as well … but not so at all.
The next surprise to me was the steepness of the trends. Look at Series 4 at the lower left of Figure 3. Note the size and speed of the rise in the signal. Or check out Series 3. There is a very steep drop in the middle of the record.
The next thing I hadn’t foreseen is the fractal, self-similar nature of the signal. Because it is composed of similar sine waves at all (or at least a wide range of) time scales, the variations at shorter time scales are very similar to variations at larger scales.
I was also not expecting the clear long-term cycles and trends shown in the various random realizations. Regarding the cycles, I had expected that the various sine waves would cancel each other out more than they did, particularly at longer periods.
And regarding the trends, I had thought that because none of the underlying sine waves contained a trend, then as a result the sum of them wouldn’t have much of a trend either. I was wrong on both counts. The signals contain both clear cycles and clear trends.
Another unexpected oddity, although it made sense after I thought about it, is that like a variety of natural climate datasets, these signals all have very high Hurst exponents. The Hurst exponent measures what has been described as the “long-term persistence” of a dataset. Since all of these signals are the sum of unchanging sine waves which assuredly have long-term persistence, it makes perfect sense that these signals also have a high Hurst exponent.
Upon contemplation, I also note that these series are totally deterministic, but with a very long repeat time. For example, the repeat time of a signal containing all possible periods from 2 to 100 years is the least common multiple of those periods, about 6.972038e+40 years.
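(That repeat time is a one-liner to check in Python 3.9+, whose math.lcm accepts multiple arguments:)

    from math import lcm        # Python 3.9+

    print(lcm(*range(2, 101)))  # least common multiple of the periods 2..100
    # a 41-digit number, about 6.972e+40, matching the figure above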
The strangest part of all of this is that the signals look quite lifelike. By that, I mean that they look like a variety of climate-related records. Any one of them could be the El Niño Index, or the temperature of the stratosphere, or any of a number of other datasets.
So after I generated my random datasets composed solely of unvarying sine waves, I used my periodogram function to see what the apparent frequencies of the waves were. Here is a sample of a few of them:
Figure 4. Periodograms covering cycle lengths from one to 3,200, in a dataset of length 12,800.
Now, at the left end of each of the graphs in Figure 4 we can see that the periodograms are accurate, showing all cycles as being the same small size. This is true up to about 100 cycles, or about 1/30 of the length of the dataset. But as we get further and further to the right, where we are looking at longer and longer cycles, we can see that we get larger and larger random peaks in the periodogram. These can be as large as forty or fifty percent of the total peak-to-peak range of the raw signal.
In order to gain a better understanding of what’s going on, I plotted all of the periodograms. Then I calculated the mean and the range of the errors, and developed an equation for how much we can expect in the way of random cycles. Figure 5 shows that result.
I also looked at the same situation at various dataset lengths, down to about 200 data points. Here, for example, is the situation regarding a random dataset of length 316, the same length as the annual sunspot record.
Now, this has allowed me to develop a simple empirical expression for the 95% confidence limit. As you can see, the error increases with increasing length of the period in question.
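For the curious, here is a minimal Monte-Carlo sketch of how such a 95% envelope can be estimated, reusing the ls_periodogram sketch from earlier; fitting a smooth empirical equation to the resulting envelope is a separate step not shown here:

    import numpy as np

    def random_series(n, rng):
        # sum of equal-amplitude, random-phase sine waves,
        # one per integer period from 2 to n (as sketched earlier)
        t = np.arange(n)
        s = np.zeros(n)
        for p in range(2, n + 1):
            s += np.sin(2.0 * np.pi * t / p + rng.uniform(0.0, 2.0 * np.pi))
        return s

    rng = np.random.default_rng(1)
    n, trials = 316, 100       # 316 = length of the annual sunspot record
    t = np.arange(n)
    periods = np.arange(2, n // 2 + 1)
    amps = np.array([ls_periodogram(t, random_series(n, rng), periods)
                     for _ in range(trials)])
    limit95 = np.percentile(amps, 95, axis=0)  # 95% envelope at each period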
And this is the precise sharpening of the tool that I was looking for. Let me start by revisiting the first figure above, the periodogram of the sunspots, and I’ll add the same error measure, the amplitude below which 95% of the random cycles fall:
Figure 7. As in Figure 1, but with the addition of the line showing the extent of 95% of the random errors as described above.
As you can see, this distinguishes the valid signal at 11 years from the two-cycle fluctuation at 88 years. If you compare this to Figure 6, you can see that a cycle at 88 years needs to be quite large in order to be statistically significant.
Now, I mentioned above that the random datasets generated by this method look very similar to natural datasets. As evidence of this, Series 7 in Figure 3 above is not a random dataset like the others. Series 7 is actually the detrended record of the historical variations in ∆14C, which I discussed in my previous post … compare that actual observational record to say Series 2. There’s not a lot of difference.
And this brings me to the reason for this post. I’ll start by quoting from my previous post linked just above, which discussed the results of a gentleman posting as “Javier”, who in turn used the results of Clilverd et al. If you have not read that post, please do so, as it is central to these findings. In that previous post I’d said:
Let me recapitulate the bidding. To get from the inverted 14C record shown in Figure 3 to the record used by Clilverd et al, they have
- thrown away three-quarters of the data,
- removed a purported linear trend of unknown origin from the remainder,
- subtracted a 7000-year cycle of unknown origin, and
- ASSERTED that the remainder represents solar variations with an underlying 2,300 year period …
The series shown as “Series 7” above is the result of the first two of those steps. As you can see, there is claimed to be a 7000-year signal that they say is “possibly caused by changes in the carbon system itself”. However, there is no reason to believe that this is anything other than a random variation, particularly since it does not appear in the three-quarters of the data that they’ve thrown away … but let’s set that aside for the moment and look at the result of subtracting the purported 7,000-year cycle from the ∆14C data. Here is the periodogram of that result:
Note that this seems to indicate a cycle of about 960 years, and another at about 2200 years … but are they statistically significant?
In the comments to my post, Javier replied and said that I was wrong, that there indeed is a ~2400-year cycle in the ∆14C data. I pointed out to him that a CEEMD (Complete Ensemble Empirical Mode Decomposition) shows that in fact what exists is several cycles of about 2100 years in length, and then sort of a cycle of 2700 years length, and then another short cycle. This result is seen in the empirical mode C9 below:
In empirical mode C9 above you can see the situation I described, with short cycles at the start and end and a long cycle in the middle.
Mode C8 is also interesting, as it has a clear regular ~1000-year cycle at the beginning. Strangely, it tapers off over the period of record to, well, almost nothing. Again, I see this as evidence that this is simply a random fluctuation rather than a true underlying cycle.
In my discussion with Javier, I held that in neither case are we seeing any kind of true underlying cyclicity. And my thanks to Javier for his spirited defense, as it was this question that has led me to sharpen my periodogram tool.
And to complete the circle, Figure 10 below shows what my newly honed periodogram tool says about the ∆14C data:
I note that neither the ~1,000-year nor the ~2,400-year cycle exceeds the range of 95% of the random data. It also bears out the CEEMD analysis, in that the ~1,000-year period shows more complete cycles, and more regular cycles, than the ~2,400-year period. As a result, it is closer to significance than the ~2,400-year cycle.
Conclusions? Well, my conclusion is that while the ~88-year “Gleissberg cycle” in the sunspots, and the ~1,000-year and ~2,400-year cycles in the ∆14C data, may possibly be real, solid, and persistent, I find no support for those claims in the data that we have at hand. The CEEMD analysis shows that none of these signals is either regular or sustained … and this conclusion is supported by my analysis of the random data. The fluctuations that we are seeing are not distinguishable from random fluctuations.
Anyhow, that’s what I got when I sharpened my shovel … comments, questions, and refutations welcome.
My best to everyone, and my thanks again to Javier,
w.
As Always: I, like most folks, can defend my own words and claims. However, nobody can defend themselves against a misunderstanding of their own words. So to prevent misunderstanding, please quote the exact words that you disagree with. That way we can all be clear regarding the exact nature of your objection.
In Addition: If you think I’m using the wrong method or the wrong dataset, please link to or explain the right method or the right dataset. Simply claiming that I am doing something the wrong way does not advance the discussion unless you can show us the right way.
More On CEEMD: Noise Assisted Data Analysis
Nicely done. Thanks Willis. You’ve shown that certain types of statistical analysis create signal from noise. Eureka! gives way to “aww, dammit!”.
Yep, data processing creates a signal. CBR is one such example.
Interesting read Willis, good stuff. Learned a thing or two here.
The Cosmic Microwave Background Radiation doesn’t result from signal processing. It is a signal.
The radiation is a signal. Its variation across the sky is noise that is over processed to produce a spurious “signal”.
To what “processing” by Planck do you object?
Yes, I suspect that is exactly what Willis has done here. 😉
The result fitted his expectations and he did not question why his “random” data did not have a flat spectrum. We all get caught out by confirmation bias. However, it could be reworked to produce a useful result. The intent and the principal was sound science. He just made a slip on using equal period samples. See below.
by “see below” I’m referring to this :
https://wattsupwiththat.com/2016/11/03/sharpening-a-cyclical-shovel/comment-page-1/#comment-2333086
Greg November 4, 2016 at 6:54 am
If I had used equal frequency samples, I’d have gotten white noise which LOOKS NOTHING LIKE REAL DATA. So instead I used equal period samples which LOOKS A LOT LIKE REAL DATA.
Your slip is showing …
w.
PS-It’s “principle”, not “principal”.
I have sharpened a spade; but not a shovel. I guess we used to break up the ground first before shoveling it out of the hole.
g
Huh? I thought a spade was a shovel. Was I wrong?
w.
“Was I wrong?”
Yes.
http://gardeningproductsreview.com/shovel-vs-spade-whats-difference/
Sorry but your gardening site fails. While spades are a subset of the class of shovels, the spade is always the one with the pointy end. It looks like the spade in playing cards. The word is derived from Old English spadu and then from the Greek spathē, the blade of a sword.
Just need to call a spade a spade.
Actually, what has been shown is that total lack of analytic grounding leads to gross miscomprehension of what certain exercises reveal. Because the time-series synthesized here are sums of (random-phase) sinusoids uniformly spaced in PERIOD, they cannot be expected to produce effectively flat periodograms, which are uniformly spaced in FREQUENCY. The inherent spectral window (dictated entirely by the record length) winds up integrating the spectral content over a FIXED analysis bandwidth that includes more and more sinusoidal components as frequency decreases, no matter by what algorithm the FT is computed. Thus the total variance shown in each band necessarily rises with decreasing frequency.
By clinging to the primitive notion that period, rather than frequency, is the fundamental variable of spectrum analysis, Willis simply winds up with dull conclusions, rather than sharpened tools. Feynman’s caveat rings richly ironic here,
1sky1, if I had wanted white noise, I’d have followed your suggestions … but I DIDN’T WANT WHITE NOISE! Why not? Because it looks nothing like natural data. So I used equal period spacing, and found a surprising result that LOOKS A LOT LIKE REAL DATA.
So I encourage you to follow the false trail of the white noise just as far as you wish … me, I’m going a different direction entirely.
w.
Nothing I said here is restricted to white noise. It applies equally well to red noise, or even to highly structured random signals with a continuous power density.
If Willis had even a modicum of analytic comprehension, none of the features of his synthesized quasi-red-noise time series would have been surprising to him; not the irregular high frequency variations, neither the steepness of occasional “trends,” nor the apparent persistence of longer-period “cycles.” In fact, such “life-like” features are common in autocorrelated time series. No competent analyst would have come to the impressionable conclusion:
But, by blindly “going a different direction entirely” he misled not only Anthony but many others here into thinking that certain analysis techniques “create signal from noise.” That’s what happens when amateurs who have no analytic understanding of Fourier synthesis or analysis run computer programs that produce “surprising” results.
Looks like you have found another way to generate “Hockey sticks”!
I’m not sure, but it looks like Willis’ analysis reinforces the idea that to show significance, you need a longer series compared to the cycle you’re investigating. That dashed yellow line for 95% of the data might be throwing out valid cycles just because the time series is too short.
What would the Gleissberg Cycle show with a much longer series? (I have no clue.)
Gleissberg is here,
whether anyone likes it or not.
It has a sine wave, with 43 years of cooling and 43 years of warming.
Must say the warming has been over for 2 decades already, but it has been ‘erased’ from the records. It was easy to make it disappear, as it was only minus a few tenths of a degree K. However, the next two decades will be a bit more difficult “to cover up”…..
Wait till the snow stands a meter high at your door when spring is starting….
What I find interesting is how much noise can look like a signal. This complements the challenge someone posted a while back of finding signals out of 1000 data sets, some of which have trends. What’s to say that our climate isn’t just a whole bunch of noise with hundreds of drivers?
The ghosts in the tape-recorder with the blank tape..
I am not disagreeing with anything in the post, but it does seem a fairly elaborate way to identify what white noise with no frequency shaping looks like.
A nice back-biased Zener diode and a spectrum analyser or oscilloscope would show the same.
If this was producing white noise it would be flat.
Since this is an FT, it looks like Figure 5 is showing 1/f; a power spectrum would show 1/f², i.e. red noise.
As Greg said, this is not white noise.
w.
how much noise can look like a signal
============================
I created an Excel spreadsheet with 1000 “time-periods”. Starting at zero, for each time period, I randomly added or subtracted 1 from the previous time period. The results were virtually identical to Figure 3 above.
Climate is a fractal. A random walk down a hallway, with walls on each side, about 10C apart.
http://c3headlines.typepad.com/.a/6a010536b58035970c017c37fa9895970b-pi
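(For anyone wanting to replicate that ±1 walk outside of Excel, a short sketch in Python/numpy:)

    import numpy as np

    rng = np.random.default_rng()
    steps = rng.choice([-1, 1], size=1000)  # add or subtract 1 each “time-period”
    walk = np.cumsum(steps)                 # the running sum, starting from zero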
Strange that we can’t see the huge rise of the “anthropocene” on the graph !
Ferd,
The Mesozoic CO2 looks ‘way too low. It’s not clear from the graph’s accompanying sources where you got the CO2 reconstruction. COPSE or Rothman? GEOCARB results are higher.
Usual figures are c. 1750 ppmv for the Triassic average (vs. your 210 ppm), 1950 for the Jurassic and 1700 for the Cretaceous.
Royer compilation of paleoproxy data?
Ferd, a far better, and standard source for atmospheric CO2 estimates over the Phanerozoic is the Geocarb III plot, which is the (badly) reproduced parent of the CO2 curve in that plot you provide the link for. The plot below shows Geocarb III along with several other, less robust, models.

I don’t know where the temperature estimates were derived from. They approximate a similar plot by JoNova that Skepticalscience was very critical of. Jo was of course pointing out that there is no apparent correlation between CO2 and temperature.
http://static.skepticalscience.com/pics/nova_past_climate1.gif
If you look at graphs of TSI data from satellites (real TSI; not Trenberth faux TSI), like Dr. Svalgaard has posted numerous times, you will find that the typically eleven-year 0.1% amplitude cyclic variation appears to get “noisier” at the TSI minimum part of the cycle.
I believe Leif has also stated several times that this is NOT NOISE; it is simply an increase in short term fluctuations of the real TSI signal around times of the minima.
And I suspect that Leif could give us a good solar physics explanation of the cause of this. Well at least to the extent that solar physicists understand that cause.
It seems (to me) quite implausible that true noisiness of the signal, which is such a tiny fraction of the variable value (TSI), would display such a fluctuation increase at the minima of the cycle (of TSI value). The resolution of true changes in observed TSI would seem to be much better than this fluctuation, so it truly is real signal.
As the saying goes; ” A noisy noise annoys an oyster. ” which was a proposition I once saw proven mathematically, at least in the spoof branch of mathematics. Along with such other mysteries as: “Why is a mouse, when it spins ? ” or ” A rolling stone gathers no moss. ”
Why those particular spoof proofs stick in my memory (nothing much else does), I do not know.
G
Well, the noise that one most often gets to “look at” is on an oscilloscope screen, so it is not a stationary time series.
g
Jarryd,
“What’s to say that our climate isn’t just a whole bunch of noise with hundreds of drivers?”
For one thing, there’s only one driver, which is the Sun. The thing is that the Sun not only drives the system but as part of the system’s response to that forcing, the system may change where this change is often misinterpreted as a forcing (driving) influence because the changes may influence the surface temperature. Consensus climate science doesn’t distinguish between changes to the input forcing (the Sun) and changes to the system (changing CO2 levels and varying clouds) and this contributes to its phenomenal failure.
Willis:
I’ve been creating some tools like this in parallel myself, so I’ll have a lot more comments when it’s not 12:19 am, but my initial stab is that what you are doing is estimating the spectrum of the noise on the single assumption that there’s a linear frequency sum of sine waves. That may not represent the actual noise spectrum of the environment that’s creating the data you are comparing the noise to. For example, it would be grossly unfair to use this comparison for truly Gaussian noise. Real-world signals vary from violet noise to red noise. What you have here looks about like pink-red noise (a 1/f spectrum). I’m not sure I’d agree that the C14 calibration signal is pink noise; it looks more towards grey-pink noise to me. I checked the error bars in the data set and those are Gaussian (white noise).
Based on my analysis of the spectrum I’d say the 1000 year signal is above 95% confidence and the 2000 year signal is close.
The papers I cite below show in more refined detail the technique for finding the 95% confidence interval of the spectrum of the signal. You got most of it correct except estimating the correct spectrum. They show how to do that using AR analysis on the original signal. Whether they are completely correct I can’t be sure. Please enjoy the reading.
best regards,
Peter
http://paos.colorado.edu/research/wavelets/bams_79_01_0061.pdf
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/softlib/redfit/redfit_preprint.pdf
https://en.wikipedia.org/wiki/Colors_of_noise
Thanks, Peter. I’m not surprised that there is actual data about this. It’s 1:01 AM, so I’ll reply when I get a bit closer to noon tomorrow …
w.
So you added thousands of equal-amplitude sine waves. That sounds like a purely mathematical exercise. We know what a sine wave is: a purely fictitious mathematical function, with many interesting properties.
I believe you said that you started each at a “random” phase, which begs the question of what gave you the random origin of each wave. I don’t think you explicitly stated how you got the frequencies; are they just random real numbers?
Well of course in mathematics you can start with whatever rules you want to dictate, and then investigate the consequences.
The part I don’t understand (except your right to do it, of course) is: WHY the exact same amplitude for all ??
As I said, this is your game, you made the rules. But what is the significance of fixing the sinusoid amplitudes ??
Do you know of any physical phenomenon that you think might behave like your model ?
Einstein invented Einstein waves with his mathematical game that he called general relativity. We seem to have found a gravitational system that replicates an Einstein wave; so far at least two of them.
What system would you suppose would emulate your model ??
G
Yes Peter. A lot depends, as always, on what the null hypothesis spectrum should be. I think Willis was intending to produce white noise here, but did equal period intervals, not equal freq intervals.
There will be a reliable random number generator in Willis’ favourite R package. That may be a better (i.e. tested) and faster way to get a random signal to start the analysis.
The 14C data cannot be analysed in this way, because the older data have been attenuated FAR more than the more recent parts of the data due to the natural decay process. It needs a much more complex analysis to try to reconstruct the original signal before any freq analysis can be done.
The spectrum of the detrended calibration curve is spurious in itself as a solar proxy, so there is no point in applying any noise profile for comparison and significance testing.
Not true. The errors on the ∆14C data are an order of magnitude smaller than the swings of the signal.
w.
http://www.dictionary.com/misspelling?term=periodogram
“Did you mean petrogram?”
Khwarizmi, dictionary.com is quite limited. Try a simple Google search for technical terms: https://www.google.com/search?q=periodogram
“A periodogram calculates the significance of different frequencies in time-series data to identify any intrinsic periodic signals. A periodogram is similar to the Fourier Transform, but is optimized for unevenly time-sampled data, and for different shapes in periodic signals.”
Willis – with apologies for hijacking/directing attention to another wuwt thread, would you be able to reply to comments both I and afonzarelli made over here, regarding the presence of an 11 year signal in temperature data?
https://wattsupwiththat.com/2016/10/31/sun-quiet-again-as-colder-than-normal-winter-approaches/
If I’ve understood afonzarelli correctly, Dr Roy Spencer has produced a chart demonstrating a correlation of TSI variance with HadCRUT3 temps. If that is correct, why has that not shown up in your periodogram works?
Dermot, there is no link to Dr. Roy’s site in afonzarelli’s comment. As a result, neither you nor I know what Dr. Roy did … so why are you already believing the Fonz? I mean, he may be right, but he hasn’t even provided a damn link to the post. I’ll wait until he gets it together to comment on some claimed analysis neither you or I have seen. I’ve posted a note over there as well.
w.
Many years ago as a grad student I was helping with the analysis of a new data compression algorithm that is now quite popular. As a test I wanted to throw pseudo-random bit sequences at it, because of course it would not be able to compress them, since there would be no patterns for it to learn and exploit. Well, much to my surprise the new algorithm compressed the crap out of what I was throwing at it. A little further analysis, in the form of running FFTs over the pseudo-random data I thought I was generating, showed clear short cycles of various lengths. How could this be with pseudo-random numbers coming from a well-researched and designed library? Well, it turned out I was using the last bit of the random() number generator to get a pseudo-random sequence of bits. Bad idea. Each bit in the 32-bit pseudo-random numbers generated by this particular library had a distinct and quite short cycle. Same is true of pairs of bits etc., and you only get the claimed performance if you use the full bit range of the number.
Anyway, very interesting post. Not sure if it’s relevant, but just a warning to check how you get your random numbers before you even start any analysis based on them 😉
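(The pitfall is easy to reproduce with a classic linear congruential generator; the library involved isn’t named above, so the sketch below uses the well-known Numerical Recipes LCG constants. With an odd multiplier and an odd increment, the lowest bit simply alternates.)

    def lcg_low_bits(n, seed=1):
        # classic 32-bit LCG; we keep only bit 0 of each output
        a, c, m = 1664525, 1013904223, 2**32
        x, bits = seed, []
        for _ in range(n):
            x = (a * x + c) % m
            bits.append(x & 1)   # the low bit: NOT random
        return bits

    print(lcg_low_bits(16))      # strictly alternates 0,1,0,1,... hardly random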
Well you can’t generate random numbers. The sequence of a set of random numbers is NOT a band limited signal. Two members of a random number set generated sequentially could differ by an amount as small as 10^-43, or as large as 10^43, and anything in between; or pick your own range.
Any constraints you put on the numbers renders them non random.
G
Cheese. If you pick a random number, with P = 1 its absolute value is larger than any given fixed real like 1E43.
But everybody knows 7 is a random number. 🙂
Thanks Willis for a simple and thought-provoking line of thinking on the confidence interval.
“Many years ago as a grad student …”
Makes me wonder if I learned about RNGs before you or because of you.
Hi Willis. The aim of what you are doing is very sensible but I think you have a fundamental error in the way you generated the “random” data samples.
White noise has a flat frequency spectrum; what you have created are data samples which are heavily loaded towards lower frequencies. This is what is reflected in your spectra.
Because what you have done (AFAICT) is to use equal period intervals. You cover three decades of frequency: 1-10, 10-100 and 100-1000. 90% of the energy of the spectrum is in that final decade of frequency.
What it seems you should do is the same thing with equal frequency intervals. Our thought processes are keyed to thinking in terms of time periods, but what you have created is not what would generally be regarded as a random signal. This kind of work is normally done in the frequency domain and sometimes plotted in periods because it is easier for us to think in years the “per years”. Especially when it is human-scale periods like years, decades or 90-year periods.
easier for us to think in years than “per years”
I agree, this is the biggest issue. Generating noise with a spectrum that is suitable for a ‘system’ is not simple if you don’t already understand the process(es) which control the ‘system’. My guess is that partially autocorrelated white noise will generate similar results … and also not necessarily be a good noise model.
Thanks, Greg. While I had contemplated using equal frequency intervals and had tested that, the problem is that the result looks nothing at all like natural datasets.
Nor do the periodograms of such equal frequency intervals at all resemble the hundreds of periodograms I’ve done of observational climate datasets. Periodograms of real data are not flat across the board like you get from white noise.
This is because (as you point out) when you use equal frequency intervals, you get white noise … but natural datasets are about as far from white noise as you can get.
Since the equal period random data looks like natural data, and the periodograms of said data look like periodograms of natural data, I used them instead.
Regards,
w.
We live in the real world in the time domain. Frequency is a fictitious artifact of our mathematical minds. We invented it.
When I worked at Tektronix MYA, “frequency” was a swear word !
G
Time is an invention of our minds too. What’s the difference?
That’s funny, I worked in T&M at Tektronix. Had no problem using the frequency word. It did generate some swearing, but I worked in test and the swearing came from developers when a device didn’t meet spec 🙂
Greg,
Time is a requisite dimension of spacetime in our universe, as presently imagined (so to speak) or observed by physics.
But then, you may be referring to another sort of time, or phenomenon called “time” in English.
Peter, It was when I worked in Engineering at Cedar Hills (I designed the sweep switching circuits for the Type 547.)
But that all changed, when Hewlett Packard designed their Sampling Scope which was a “Through Sampler”, and the sampling head was a machined piece of microwave plumbing; some ersatz “Magic Tee” circuit with three ports and some reason why you could put a signal in port A, and get an output from port B but with NO A signal coming out of port C; and all the verse vicea combinations.
HP was a very savvy microwave and RF environment in those days, but few of us at Tek could figure out how that sampling head worked; which is to say, we could not explain it in the time domain, so we all had egg over our faces.
It was not unlike not being able to give a classical Physics explanation for the photo-electric effect.
G
As in a microwave switch?
In microwave striplines I did some work with switches and attenuators, clever little stuff; we’d use a PIN diode as an adjustable low-Z shunt, the microwave just sees the big change in impedance and it just reflects it. I think they used one diode per switched port.
Peter, I forget what the number for the dual channel vertical plugin for the 545B and 547 scope family was; something like 1A2 I think. It replaced the CA unit and it was a 50 MHz 5 mV per cm plug in.
I designed the entire thing from the electronics point of view. The 5 mV per cm spec was a misteak (screwup) on my part. It was supposed to be a 10 mV per division plug in at 50 MHz; but I forgot about the 2X gain from the single ended input to push pull output, so I ended up with 5 mV per cm instead. But as I said, I designed the entire plug in electronically from its Nuvistor input to the transistor outputs. Also did the sweeps and horizontal amplifier, and calibrator, besides the sweep switching. I think it was Bob Rullman (my boss) who designed the tunnel diode trigger circuits. I did one later for a new 150 MHz project that never flew, because I and a cohort working on it, left the company. I didn’t do the new 10X probes for the 545B / 547 program. That was done by Wim Velsink, who was in the Accessories/Peripherals Group that were probe experts.
Well we had the guts of it working, but they walked us out the door when we gave our notice, so we never got the chance to pass the designs we had going to someone else to take it over. I believe that delayed them at least three years getting to a 150 MHz real time scope.
TWTD !
G
” we could not explain it in the time domain”
“HP was a very savvy microwave and RF environment in those days” … probably because they were not afraid of using the F-word.
If time wasn’t real everything would have happened at once.
G
Real events happen in real time. Frequency has no meaning absent the concept of time, so frequency is a derived term not a fundamental one: as in so many per unit of time.
You can’t even talk of a frequency until you have established time itself.
G
The 547 was the first scope I ever used way back in high school, which was a big deal since they were still very expensive. I like my 3032B a whole lot better, which from a cost-performance perspective is like the difference between an IBM 370 and my laptop. I will say that the insides of the older Tek scopes were like a work of art and solidly built, with lots of gold and silver. I guess they could do this with a price on the order of a mid to high end car.
Willis: “95% of the cycles ” – shouldn’t it be “97%”….? (better change it to improve your chances for getting grant money) 😉
Willis. This is very interesting, thanks. Did you start all the sine functions off at the same point in their cycles, or did you randomly assign the start points? Thanks (apologies if I missed that in the explanation).
As mentioned I set the phases randomly.
w.
Jay,
You missed it. He said he randomized the phase angles of the sine waves.
Willis
Should
Look at Series 4 at the lower left of Figure 1
read
Look at Series 4 at the lower left of Figure 3
or have I misunderstood?
Thanks, Sandy, fixed.
w.
I don’t understand what you have done. You have summed sine waves with random frequency and shown that you get a non-flat spectrum.
There are several comments.
I may have missed it, but have you generated sine/cosine waves with random phases? If not, this would be an expected result.
The other comment is that in the theory of coherent averaging, it is well known that averaging is a filter that acts on the noise with a defined spectrum that is of the form sinc(frequency).
See my comment above, I’ve explained why he did not get a flat spectrum.
See my comment above, I’ve explained why I did not want a flat spectrum …
w.
That was not the impression I got from the article:
that sounds like you were expecting to get a flat spectrum.
BTW you are also confusing averaging and a running average. It is the latter which has a sinc-function frequency response, which is why it is a crappy, distorting filter. Averaging is a means of resampling the data which effectively reduces random variability, but can also produce aliasing in the presence of cyclic signals shorter than the averaging period.
Averaging is like doing a running mean then resampling, but the running mean is too short to act as a correct anti-aliasing filter, and again is a crap filter anyway. A proper anti-aliasing filter should be used if there is any non-random, periodic variability in the data. DP101.
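(To put a number on that: the magnitude response of an N-point running mean is sinc-like, with its first null at a period of N samples and leaky, sign-flipping sidelobes beyond it. A short numpy sketch:)

    import numpy as np

    N = 12                                    # e.g. a 12-point running mean
    f = np.linspace(0.0, 0.5, 1000)           # frequency, cycles per sample
    H = np.abs(np.sinc(N * f) / np.sinc(f))   # |sin(pi*N*f) / (N*sin(pi*f))|
    # first null at f = 1/N; the sidelobes beyond it pass (and invert)
    # signal, which is why a running mean is a poor anti-aliasing filter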
Averaging is adding all of the members of the set that you already know, and dividing by the number of members in the set. That gives you a single number which not surprisingly, we call the average. And you can’t calculate it until ALL members of the set are known.
If you change any of the numbers it’s a different set, and may have a different average; but that is still a single exact number.
G
An interesting approach to signal-to-noise determination. People are inclined to find patterns in almost anything, like animals and faces in clouds, and finding a real, meaningful pattern can be difficult. The problem here is determining what a truly signal-free pattern should look like. Much of the criticism thus far seems to be that the synthetic random pattern generation process is not truly random.
Synthetic random is fit for statistical purposes by and large. And true randomness is not hard. Any thermal noise source will produce it.
What is more an issue is what spectral weighting is applied to it.
Low-pass filter it and it sounds like thunder or traffic rumbling away.
Its just noise.
Oh, this might make a great USB device, a true random number generator; could probably make them pretty cheaply too.
Think there’s a market for a true random generator?
Intel has been incorporating random number generators into their processors for many years now. Starting in 1999, they exploited thermal noise in an analog circuit to do this.
In more recent years, they have done it digitally by blatantly violating standard digital design rules that are intended to ensure deterministic results — they are using the indeterminism of their “bad” circuits to generate the random numbers.
You can find a good description here:
http://spectrum.ieee.org/computing/hardware/behind-intels-new-randomnumber-generator/0
Wow. Obviously one would want to see the additional analysis, but…
There are all kinds of proposed cycles in the climate that should be subjected to this type of analysis. An important one would be the 60 year AMO temperature cycle, the ice age cycles and then the Dark Age, MWP, little ice age cycles if possible.
Once Willis has repeated the process with some properly constructed random samples that would be a good idea. Hopefully he will be back a bit later to comment on the validity of his test samples.
Figure 6. Periodograms of 100 datasets formed by adding together unvarying sine waves covering all periods up to the length of the dataset, in this case 3160. 316?
Fixed, thanks.
w.
Willis,
Speaking of Fig. 6, I’m confused by the dotted line. You state that 95% of the data are supposed to be below the dotted line, and that is obvious for Fig. 5. However, ALL of the short-cycle data are ABOVE the dotted line in Fig. 6. Why are Fig. 5 and 6 different in this respect?
Thanks, Clyde. It’s just the time scales. Look at the data in Figure 4, which show the results below 100 years in length.
My curve is a simple approximation to the actual result, and is not accurate at the left end … but then that’s never where the questions arise.
w.
Instead of using “unvarying sine waves”, how about varying them by modulating the amplitude with the sunspot data? I’d do it, but I ain’t smart enough.
My recommendation would be to read a book like Oran Brigham’s “Fast Fourier Transform”. I read the original. I don’t know what is in the more recent ones. Secondhand ones are quite cheap and the understanding it conveys about Fourier Transforms is enormous: https://www.amazon.com/Fast-Fourier-Transform-Introduction-Application/dp/013307496X/ref=pd_sbs_14_t_0?_encoding=UTF8&psc=1&refRID=HC1ET6784NWBTH1NJY61
When you have mastered the concept of combining waveforms either by addition or by convolution (e.g. filtering) then the results you get from a Fourier Transform will be less surprising.
In other words, you think I’m wrong about something but you don’t have the decency to point out my mistake … not impressed.
w.
No, I think s/he means your stuff is too obvious, but s/he is still lacking the skill to write.
I am trying to help. Sorry. You were surprised at what happened when you added sine waves. If you create a perfect impulse in the time domain it will create components at all frequencies. The converse also works: an impulse in the frequency domain results in a steady level in the time domain. Note: you need to include negative frequencies, as suggested by the Fourier integral.
Truncating time domain signals will result in smearing of the frequency spectrum. Sampling is like multiplying by an infinite series of impulses. So the spectrum of a sampled signal can be determined if you convolve the un-sampled spectrum with a set of impulses. The Fourier Transform of a set of impulses is another set of impulses. Much is known about Fourier Transforms.
Statements like: “Now, I’m not sure what I expected to find.” must pall with some of those who might have an idea – and, in my case, have known for over 30 years.
Generally, I like the articles posted here, but I can see why some experts do not.
graphicconception November 4, 2016 at 3:54 pm
Since neither of those was the method I used, I fear I don’t see the relevance. What are you objecting to?
Again, yes … but so what? Same problem—what are you objecting to? What are you trying to say? What does this have to do with adding equal-amplitude sine waves of random phases?
So your claim is that thirty years ago you actually did the experiment I described above, creating the sum of equal-amplitude sine waves at all periods with randomized phases?
You’ll excuse me if I don’t believe that in the slightest. I say that you are making that up. My best guess, and it is a good one, is that you’ve never done that experiment in your life, and you are just speculating based on theory.
And finally, why is it that far too often when I say I was surprised by something scientific, some fool jumps up and excoriates me for being surprised, and claims that they would never be surprised by things like that, oh, no, no way, they are so far above surprise, they are totally inoculated against surprise of any kind … hey, I am a man who is constantly amazed and surprised by the world, and I feel sorry for those who think that scientific surprise is a sign of anything but joy and awe.
If you had both the scientific ability and the common decency to point out any single error I might have made, you would advance your credibility as a so-called “expert”… as it is, all you have given us are claims of your stupendous brilliance, irrelevant expositions about Fourier, accusations of my foolishness, denigration of my surprise at finding new things, and unsupported boasts that you already knew it all thirty years ago … well, my congratulations on being such a genius.
I bow in deep respect for your true brilliance, and I look forward to any further pearls of pure wisdom that may drop from your lips … meanwhile, I’ve redone my analysis using the “REDFIT” method of Schulz and Mudelsee (which of course you have been totally familiar with for thirty years now) and I GOT THE SAME RESULTS. See below for those results.
What does THAT say about your claims that I don’t know what I’m doing?
w.
PS—Like many people posting anonymously, I fear you don’t understand the consequences of your choice to forego your real name. You seem to think that despite using an alias, your words on certain subjects should have the same extra weight that they likely have in your daily life.
But from our perspective on this side of the screen, you could just as easily be a bored teenager typing away in his mom’s basement … so I fear your words get no extra weight at all. In fact, because you have not taken responsibility for your words, they get even less weight with me. Why should I pay attention to something you are unwilling to sign your name to? You can walk away from your claims at any time, you take no responsibility.
(Note that these are not moral judgements or accusations that using an alias is wrong. There are many perfectly valid reasons for using an alias. I’m just trying to point out some of the COSTS of that choice.)
And you also seem to think you are exempt from a polite request that you quote the exact words you object to … sorry, but to me you’re just another in the line of random anonymous internet popups who like to make ad hominems, but who don’t point out any errors in my work or tell us how to do it correctly.
But it doesn’t have to be that way. I invite you to either stick around, with some effort and application you might either teach something or learn something … or if not, I invite you to find another blog where your true genius can be appreciated and celebrated.
Ok Willis: Imagine I had posted a breathless article about a new discovery I had made about how a hollowed-out log floats on water. I am now trying to harness the wind to add to the effect of the two large serving spoons I am using for propulsion, but the rig keeps toppling over and I can’t seem to steer it very well.
How impressed would you be with my efforts? Would you think I was genuinely making scientific progress on behalf of the world? The article may contain no errors at all.
I am now considering asking for a refund of half my AGU funding contribution. 😉
graphicconception November 5, 2016 at 8:12 am
Graphic, I did a curious thing. I looked into the result of adding up sine waves of equal amplitude and random phase. From that, I developed an equation that I said gave the 95% CI for the errors in the DCDFT periodogram that I use. I used that equation to show that the “Gleissberg cycle” in sunspot data and the 1000 and 2400-year cycles in the ∆14C data were not statistically significant.
And you cannot claim that you knew all along that the 2400-year cycle and the Gleissberg cycles were not statistically significant. I not only improved (“sharpened”) my tool, I used it with real-world data.
SUBSEQUENTLY, I found the “redfit” algorithm of Schulz and Mudelsee, which completely confirmed my results.
Now, this may or may not be of interest to you. But comparing it to “discovering” that a hollow log floats? That’s just petty nastiness with not a scrap of truth in it.
Do what you want, graphic. You’ve cancelled your vote with me.
w.
If you had not randomised the phase, what difference would that have made?
You see, I and many others, already know the answer.
I tried to offer a fellow sceptic some genuine help and, well, we can all see what happened.
It is by far the best book I have ever read on the subject of Fourier Transforms. Ironically, although the book’s subject is FFTs, that is its weakest part.
Bye bye.
graphicconception November 5, 2016 at 4:59 pm
I’m sorry, but that question is far too poorly specified to answer. The first thing you haven’t specified is what you will do if the phases are not set at random … there are lots of kinds of “not set at random”. For example, they could all start at the same point in the cycle … but then you have to decide at what point in the cycle you are going to start. Where in the cycle you start makes a huge difference in the short term. Or each one could start one data point later than the previous one in the list. Assuredly not random; you might think of it as circularly polarized. Like I said, there are all kinds of “not randomized”, and you haven’t said which one of the many you are asking about.
Next, you haven’t specified “what difference would that have made” TO WHAT? Are you asking what difference it would make to the raw sums themselves? To the periodograms? To the histograms of the entire dataset? To the mean step length? To the general shape of the generated data? Again, your question is far too poorly formulated to answer.
I fear you haven’t even figured out how to ask the question … BUT:
If your question is “If you started all of the cycles at zero, what difference would it make to the periodograms”, I do know that. How? Because of course, in the process of writing the head post I tried it …
“Genuine help” is something that advances the discussion. You have offered nothing of the sort, including in your latest message, only your oft-restated opinion that you are so much wiser than I am … and while that might indeed be so, I fear that you boasting about it doesn’t advance the discussion one bit. You weren’t the one who suggested the “redfit” algorithm. You didn’t provide evidence that my assessment of the 95% CI was wrong … and in fact it was right. All you did was claim inside knowledge that unfortunately you never got around to sharing. You may be right, your secret understandings may indeed be pearls of wisdom, but you were so busy telling us how wise you are and how foolish I am that you never got around to telling us those pearls …
Finally, let me point out that I invented what to me was an entirely new method of Fourier analysis, only to find out some months later that it was actually invented in the 1980s. Now, it is true I wasn’t the first man to invent it.
But I invented it totally independently, and it works flawlessly … and despite that, you insist on treating me like I was some kind of Fourier noob. So let me ask:
When’s the last time YOU invented a new method of Fourier analysis, even one that was invented a few decades ago?
Which makes your patronizing tone both totally unwarranted and very unpleasant … so I gotta say, I’m not sad to see you go. “Bye-bye” indeed.
Onwards,
w.
Actually I think this means that one of the above cynical commenters is wrong.
Once you are used to the FT, summing sine waves sounds perfectly normal. The biggest thing people get wrong with FTs is forgetting that FT assumes a periodic signal forever, but you only have a window. You have to deal with the edge effects (usually by applying a window function). I’m pretty sure Willis fixed that issue several years ago.
Yes, I use a Hanning window … and as I recall it was in response to your urging.
w.
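(For reference, applying such a taper before the transform is a one-liner; a sketch using numpy’s built-in Hann window on a toy 11-year sinusoid:)

    import numpy as np

    n = 316
    t = np.arange(n)
    y = np.sin(2.0 * np.pi * t / 11.0)   # toy series with an 11-year cycle
    w = np.hanning(n)                    # the Hann (“Hanning”) taper
    spec = np.abs(np.fft.rfft(y * w))    # tapering the ends reduces edge leakage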
“Yes, I use a Hanning window …”
Strictly, you either use a Hann window or a Hamming window. The two have different properties.
Of course, sometimes using a window function just makes matters worse. If your BoxCar window contains integer numbers of cycles then changing windows just gives rise to spectral components that are not really there.
graphicconception November 5, 2016 at 9:28 am
The usually unreliable source (Wiki) says “The Hann window named after Julius von Hann and also known as the Hanning (for being similar in name and form to the Hamming window), von Hann and the raised cosine window …”
Strictly speaking, you may use a Hann window and I use a Hanning window. The two terms are used interchangeably. For example, MATLAB refers to it as the Hann (Hanning) window. But if you prefer the authentic German form, of course, we should all be using “von Hann window”, because we wouldn’t want to rob a German of his “von”. And “raised cosine window”? Is a “raised cosine window” stricter or less strict than the “von Hann window”?
So we have now determined that strictly speaking, I’m using the Hann Hanning cosine von Hann window. Thanks for pointing that out, it was very useful and advanced the discussion …
w.
I thought you wanted factual error to be pointed out? I would not want you to give your enemies any leverage by using imperfect terminology. I am glad you appreciated it.
I wonder why you keep on stirring this Gleissberg pot??
You would do well just to study the raw data from a few weather stations around you
but concentrate on daily minima and maxima,
and you will easily see that there is a recurring Gleissberg cycle in the data, ca. 86.5 years
do 4 regressions for each station, for 4 different time periods, to get the speed of warming/cooling in K/annum. Now just plot the K/annum found against time and voila
a perfect curve – like somebody is throwing a ball….
quadratic
for the past 43.25 years
if you are lucky you might get a few stations with raw data going further back in time
e.g.
http://oi60.tinypic.com/2d7ja79.jpg
note that I made an assumption of the wavelength being 88 years when it was in fact 86.5 years.
I have noted from the solar magnetic and other data that the switch upwards was made in 2014.
henryp November 4, 2016 at 7:17 am
Henry, I’ve said many times that if you send me a link to the study and another link to the data I’d be glad to see if your claims are real. Instead you’ve given me handwaving …
w.
Willis, I have min/max differences for all sorts of global areas on SourceForge (including 1×1 cells of any cell with a weather station, both annual and daily data)
http://sourceforge.net/projects/gsod-rpts/
And could likely make you anything you wanted.
Willis

this is so simple that a high schooler or first-year student should be able to do it …
you know what linear regression is
here is a link to yearly data, already averaged for you for each year, of a station, going back to 1973
http://www.tutiempo.net/clima/New_York_Kennedy_International_Airport/744860.htm
the second column is maxima, the third column is minima.
I prefer max and min because I find there is less noise in those data.
maxima is a very good proxy for direct heat received through the atmosphere.
so after copying and pasting to Excel you can do a regression from 1973 to 2016, from 1980, from 1990 and from 2000. You are only interested in the derivative of the equation, i.e. the value before the x … that is the average speed of warming/cooling in K/yr over that period.
In this way you get 4 points in time where you know what the average speed of warming/cooling was. Now you could take a few stations with good data in areas around NY, and together you could average the results for the area and set those results out against time – to get acceleration/deceleration, just like the curve of a ball thrown in the air. If you do that you should end up with a good curve like I did here for South Africa:
[this one is for minima]
WITHIN the 4 points you can see the half cycle of the Gleissberg, i.e. the past 46 or 47 years
you get it now?
btw
minima here always seem on the decline….
HenryP November 4, 2016 at 10:28 am
Your dismissive attitude does you no favors. My free advice, worth every penny, would be to lose the ‘tude …
Yep.
Well, I have to say that diagnosing an 88-year signal based on four data points is a new one for me. Also, you’ve taken four trends from various times in the past to the present … but then you’ve posted them up based not on the time in the past but on the length of the time involved. This puts them in the reverse order from the one in which they actually occurred, and locates them not in the middle of the time period but at the length of the time period. I don’t understand that at all.
What I get is that you are using linear regression in ways that make no sense. To start with, there are only 43 data points. Next, you’ve totally ignored the uncertainty of your results. Because of the short dataset, in all cases your results are a joke, with none being statistically significant.
Start   Trend   Uncertainty   p-value
1973     0.02      0.09        0.86
1980    -0.09      0.12        0.43
1990     0.12      0.19        0.54
2000     0.72      0.33        0.06
Where is “here” for you?
w.
no tude, like I said, it is simple 1st year stats, i.e. mostly linear regressions
if you know the velocity of the thrown ball in m/s at 4 points in time from the point that it was thrown you should be able to work out its trajectory.
anyway,
you know the story of trying to bring the horse to the water….
But if you want to give it a go, here is a simple sampling procedure to give you a decent global result either for minima or maxima
https://wattsupwiththat.com/2016/11/01/uah-global-temperature-update-down-slightly-for-october/#comment-2331557
we are globally cooling, looking at the direction of my “crystal” ball…..
Henry, you continually push this graph above, which so far has not completed a single cycle; in fact it hasn’t covered enough ground to even say what the amplitude is. So why do you continue to insist it is cyclic? (rather than a single transient event …)
G
A nice explanation of linear feedback shift registers (LFSRs) – the Galois method is particularly easy to implement in both hardware and software.
http://courses.cse.tamu.edu/csce680/walker/lfsr_table.pdf
It gives the proper taps for every register length up to 786 bits (periods up to 2^786 − 1), and also values for 1024, 2048, and 4096 bits.
I needed one for a project I was doing. I’m using a 31-bit register (period 2^31 − 1) as it is easy to use on 32-bit machines. One word is all I need, and the computation is fast. Basically: pick out the lowest bit (bit 0), shift, and XOR in the taps if that bit was one.
A paper on the use of LFSR in crypto – which is not what I’m doing. This is what I’m doing.
http://classicalvalues.com/2016/10/magic-80-ball/
The crypto paper:
https://eprint.iacr.org/2014/051.pdf
2^31 is a favorite.
For a maximal-length sequence whose period 2^n − 1 is itself prime, n must be prime (giving a Mersenne prime).
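For the curious, here is a minimal sketch in Python of the Galois scheme described above; the tap mask 0x48000000 (polynomial x^31 + x^28 + 1) is a standard maximal-length choice for a 31-bit register, from tables like the one linked:

```python
# One step of a 31-bit Galois LFSR: test bit 0, shift right, and XOR in the
# taps if that bit was set. The period is 2^31 - 1 from any nonzero seed.
def lfsr31(state: int) -> int:
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0x48000000
    return state

state = 1                      # any nonzero 31-bit seed works
for _ in range(5):
    state = lfsr31(state)
    print(f"{state:08x}")
```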
Willis,
Great subject! I agree, garbage in . . . garbage out.
On the other hand, if the samples were infinitely large and truly random, you would find that there is a universal set of frequencies. Ray Tomes did some interesting work in this area. If I understand this correctly, the strength of some natural cycles is compounded by their harmonics, like the natural 144-year cycle { 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 36, 48, 72, 144 }.
Visit Ray at the Cycles Research Institute.
Regarding the sun: Analysis has to be based on physical mechanism.
The Sun is a somewhat oversized random number generator. If we knew the physical mechanism, we’d already have the answers we seek.
“If we knew the physical mechanism, we’d already have the answers we seek.”
Very true, if we’re strictly talking about the 11 year cycle.
The physical mechanism(s) I’m referring to are the:
1) sun’s acceleration/deceleration cycle of ~9.929 years
2) sun’s SSB 360 degree return to the same location on the ecliptic.
Jorge,
Please visit Weathercycles.wordpress
” Fibonacci and climate “
Slightly off topic – you have data points for a period of 1 year, 2 years, etc. Nothing for 1.3 years. The analysis should be run for a continuous spectrum, not for a discrete spectrum with an arbitrary base period (1 year). I don’t believe that the sun is heavily influenced by the orbital period of one minor planet.
“The analysis should be run for a continuous spectrum, . . .”
Thanks CG, you are absolutely correct. Ray’s harmonic theory does not specify years as the term of measure. I will have to rethink how to correctly use constructive resonance to make a point.
Figure 1 shows that the circa-90-year peak, while not as high as the 11-year peak, is notably broader. While less well defined as a pure harmonic cycle, it contains just as much signal as the 11-year peak.
The log scale makes the periodogram equivalent to a frequency spectrum plotted backwards, so it gives a realistic comparison.
“I generated a series of sine waves …” ->
Greg is right.
Moreover, each sine wave is essentially a succession of +1/−1, or −1/+1, depending on the phase. Meaning you made a sort of “random walk” – very classical – which is why you got the spikes this procedure usually shows. Of course randomizing the phase does matter a lot.
Random walk is why W. observed that the results looked “life-like”.
Some things need to be repeated because the first time they are not read or understood.
Javier
https://wattsupwiththat.com/2016/10/17/the-cosmic-problem-with-rays/#comment-2321835
Let’s make it even clearer:
If you were analyzing sunspot data only between 1620 and 1780 your mathematical analysis would wrongly conclude that there is no 11-year sunspot cycle because it is absent from half of the record.
http://i.imgur.com/UqifLOa.png
If you were analyzing sunspot cycle length only between 1850 and 1960 you would wrongly conclude that there is no 11 yr cycle as a 12 yr cycle morphed into a 10 yr cycle.
http://i.imgur.com/yIC8NMt.png
So Willis, why do you believe in an 11-yr solar cycle? Just because, it being a shorter cycle, you have more data points and can afford the luxury of ignoring that solar cycles sometimes disappear, have tremendously irregular amplitude and very irregular duration?
You have not chosen a good subject for your analysis. It is leading you to the wrong conclusions. We already knew that solar cycles are irregular in presence, amplitude and duration. Your analysis contributes nothing. You set the conditions to reject them. Nothing new. I have read papers rejecting the existence of the Dansgaard-Oeschger cycle based on the same arguments.
Agreed, the circa 90y periodicity is about as significant as the circa 11y one. It is broader, either because of poorer data quality or more likely that the underlying causes are perturbed.
Jupiter’s orbital period has an average of 11.86 years but is quite variable because of the influence of the other planets. That will produce broadening in a frequency analysis of J period.
Having said that, random walk or AR1 data can appear to have “variable periods” too, as can be seen in some of the panels of Willis’ figure 3.
Javier: If you were analyzing sunspot data only between 1620 and 1780 your mathematical analysis would wrongly conclude that there is no 11-year sunspot cycle because it is absent from half of the record.
You make a case that there is no persistent process with an 11-year cycle, or in other words, that the time series is not stationary, even in the wide sense.
You have not chosen a good subject for your analysis. It is leading you to the wrong conclusions. We already knew that solar cycles are irregular in presence, amplitude and duration. Your analysis contributes nothing.
I disagree. Willis Eschenbach has shown that even with a series that is stationary by design, the results of the analyses of finite segments are even less reliable than what you or most of us have believed. Claims of cycles with large periods are especially incredible without lots more evidence.
“You make a case that there is no persistent process with an 11-year cycle”
No, there are a bunch of frequencies around 11 y; at some point they will all add up to nothing when out of phase. If an FT shows a period, it is there persistently – it can’t take a sabbatical. When adjacent frequencies are a bit ahead or behind, they swing the apparent peaks a little earlier or later, producing “variable periods”. Both those features can be produced with perfectly constant harmonic functions of constant amplitude.
AM and FM Mixers!!!!
I disagree with your disagreement. What Willis’ procedure does is raise the bar for accepting cycles. By doing that you make it more difficult to get false positives but easier to get false negatives. As solar cycles are very irregular, they are likely to fall into the last category.
Javier: By doing that you make it more difficult to get false positives but easier to get false negatives.
I made the same claim about Willis Eschenbach’s use of Bonferroni corrections, you may recall.
In both cases we are in a kind of limbo, with the data not clearly describable by any particular hypothesis (or equally well describable, statistically, by contradictory hypotheses.) That being the case, should you not be cautious or modest when claiming to have a persistent oscillatory process with an estimable period?
If the phases or periods or both are constantly changing, is the observed process stationary?
Recall the underappreciated related problem with multiple time series: if there are two autocorrelated independent processes (that is, independent of each other), then they are quite likely to have a statistically significant cross-correlation in finite records. As with the presentation in this essay, the phenomenon is demonstrated with simulations, not derivations.
matthewrmarler,
Absolutely if you are only working with that data, as Willis does. However to me a cycle is certain only when you can confirm its existence by a completely independent approach not based on the same data. This has been done for the ~ 2400 yr solar cycle based on its effect on climate since 1968 and was reviewed by me recently here:
https://judithcurry.com/2016/09/20/impact-of-the-2400-yr-solar-cycle-on-climate-and-human-societies/
Greg: No, there are a bunch of frequencies around 11y, at some time they will all add up to nothing when out of phase.
Appearing/disappearing oscillations occur in EEG recordings during sleep, producing “sleep spindles” in the tracing. At unpredictable times a process with a frequency of 5–8 Hz starts with weak amplitude, increases to a peak, and then diminishes to 0. It is “reliable” in the sense that it occurs over and over, but it is not predictable. The appearing/disappearing 11 yr solar cycle has a different appearance, seeming to be there or not there, without a gradual onset and offset. Wouldn’t the process that you describe have a more gradual increase and decrease in amplitude?
Javier November 4, 2016 at 12:54 pm
You are describing the putative effect of a nominal 2,400 year signal, which actually varies from 1800 to 2800 years, on fifty years of temperature data?
Say what? There’s nowhere near enough temperature data to do that with any meaning at all.
During that time (1968 to present), the temperature went down, then up, and then flat, and you think that some part of this is related to a 2400 year cycle? How could that possibly be established with that tiny amount of data, only fifty years? During fifty years the 2400 year cycle will only change by about 4%, and you think you’ll find that? Really?
Not only that, but you are playing fast and loose by using a “best-fit” 2400-year regular unvarying cycle and claiming that that cycle has something to do with the sun. THERE IS NO REGULAR 2400 YEAR CYCLE IN THE ∆14C DATA. Doesn’t exist. There is only a wildly time-varying and weak cycle which is not statistically significant, as I’ve shown now by two separate and totally distinct methods which gave the same answer.
And even if such a cycle did exist, we have no reason to believe that it’s not just a “change within the carbon system itself”, as is claimed by your cited source regarding the 7000-year cycle removed from the ∆14C data.
w.
Clearly I am not saying that. But you don’t bother to read even what you criticize. I already said that you should have read my article before criticizing it. You still haven’t done it and therefore you do not know what I say. Here is the link in case you feel the inclination:
https://judithcurry.com/2016/09/20/impact-of-the-2400-yr-solar-cycle-on-climate-and-human-societies/
The signal varies within 2,450 ± 200 years, not more. In 1968, before the 14C data became available, Roger Bray had already described the cycle based on solar records (sunspots, naked-eye sunspots, aurorae), climatology and biology. In 1971 it was independently described for the first time, based on 14C, by Houtermans.
The existence of the ~ 2400 year cycle is confirmed, as Roger Bray noticed in 1968, by climatological proxies for the last 12,000 years. Read the article.
There is a ~ 2400 year cycle in the ∆14C data. It has been found, since the data became available in 1971, multiple times by independent researchers using a variety of methods. It has been demonstrated by Usoskin et al., 2007 that the distribution of grand solar minima displays a “quasi-periodicity of 2000–2400 years, which is a well-known period in 14C data” that is statistically significant. That you don’t find it doesn’t mean much. As I have said, you have raised the bar for accepting cycles, and therefore irregular solar cycles give false negatives with your method. You have proven nothing. A plethora of climatological data confirms the existence of a climatic ~ 2400 year cycle, and a variety of solar data (∆14C data, solar grand minima distribution, ancient solar records) confirms the existence of a ~ 2400 year solar cycle. That you have found a way of increasing the requirements for accepting cycles, and have focused on just one aspect of the issue, only tells of your capacity for self-deception by ignoring all the rest of the evidence.
I don’t think you realize the implications of what you say. The ~ 2400 yr periodicity is “the strongest feature in the ∆14C record” in the words of Sonett and Damon (1991). It is based on the distribution of solar grand minima, the most salient feature of the ∆14C record, to the point that the main ones have received names, like Homeric, Roman, Spörer, Maunder, etc. If what you say were true, then 14C would not be a proxy for solar activity, and almost everybody believes that is not the case.
Javier: This has been done for the ~ 2400 yr solar cycle based on its effect on climate since 1968 and was reviewed by me recently here:
https://judithcurry.com/2016/09/20/impact-of-the-2400-yr-solar-cycle-on-climate-and-human-societies/
I read your post at Climate Etc. A claim of a 2400 year cycle based on data “since 1968” is hardly credible.
matthewrmarler,
Then you should know that the claim is “since 1968”, while the data is “since 12,000 BP”.
So you have 30 sequential data points in a 300 year time interval. Great; you can plot those on a scatter plot, and use that plot to deduce all kinds of things.
The moment you connect those dots with straight lines, you create a completely bogus graph of a non-band-limited function, and since it is not band limited, it is quite clearly undersampled, and ergo you cannot conclude ANYTHING from your graph. Remove the lines and you have some information. With the lines you have aliased noise, which means you can’t even legitimately average that function. It is total garbage.
G
That graph is from this article:
Solheim, J. E., Stordahl, K., & Humlum, O. (2012). The long sunspot cycle 23 predicts a significant temperature decrease in cycle 24. Journal of Atmospheric and Solar-Terrestrial Physics, 80, 267-284.
Direct your quibbles to the authors. I am only interested in the known variability in the 11-yr solar cycle length.
I don’t care where the graph is from or who created it. The whole problem with the essential gobbledegook that passes as climate-science statistical studies is that the purveyors do not even understand the fundamentals of sampled data systems.
Of necessity we have to use sampling systems to study real time physical systems, for the simple reason that it is almost impossible to capture continuous real variable values in real time. Our measurement instruments simply are not fast enough to keep up with real continuous variables.
So we have to sample. But unless we sample in accordance with the rules regarding information theory and sampled data systems, we cannot legitimately claim to have knowledge of that real system.
If one had happened to bore a core drill hole 18 meters deep about 100 years or more ago in South Africa, one might have concluded that the entire crust of the earth contained a layer of type IIA diamond just 18 meters below the surface. When in actuality one had only stumbled upon the Cullinan Diamond.
Actually it was found sticking out of the wall of a tunnel that had already been dug underground at that 18 meters depth, but it could have turned up in a sampling core drill.
I don’t make the rules. But it sure sticks out like a sore thumb when I see instances of persons ignoring the rules, and believing what they think they have found. If that is your idea of quibbling, so be it. I make a habit of not getting between any person and a cliff they are planning to jump off.
G
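George’s undersampling point can be shown in a few lines of Python (a sketch): a signal above the Nyquist frequency is indistinguishable, at the sample instants, from a low-frequency alias.

```python
# Aliasing in miniature: a 9 Hz sine sampled at 10 Hz is indistinguishable
# from a (phase-flipped) 1 Hz sine at those sample instants.
import numpy as np

t = np.arange(20) / 10.0                 # 2 seconds at 10 Hz sampling
nine_hz = np.sin(2 * np.pi * 9 * t)
one_hz  = -np.sin(2 * np.pi * 1 * t)
assert np.allclose(nine_hz, one_hz, atol=1e-9)
```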
Javier November 4, 2016 at 8:45 am
Javier, thanks as always for your comments. You compare the claimed 2400-year cycle to the known ~11-year sunspot cycle. I don’t think the 2400 year cycle is real and persistent, but not just because it is so irregular. Here are the other reasons it is different from the sunspot data:
1. When the claimed ~2400-year cycle exists in the ∆14C data, it is weak and far from obvious. Statistically, it is not significant nor is it distinguishable from random noise. The opposite is true for the sunspot cycle.
2. It only exists in about a quarter of the ∆14C data, and not in the rest. WHEN YOU HAVE TO THROW AWAY 3/4 OF YOUR DATA TO MAKE YOUR CASE, THAT’S CHERRY PICKING! And yes, I know you claim that the errors are greater in the earlier part … when is that not the case? That’s true of all paleo data, but the uncertainties are by no means large enough to hide or obscure a cycle of the amplitude that you are claiming. And once again, the opposite is true about the sunspot cycle, it exists in all of the observational data. There’s no need to throw away 3/4 of the sunspot data to claim an ~ 11-year cycle …
3. The sunspot cycle is about 11 years, but as you point out sometimes it is plus or minus about 10% of that. Applied to the claimed 2400-year cycle in ∆14C data, that would give a variation of about 2180 to 2640 years. Your purported 2400-year cycle in the ∆14C data, on the other hand, has a huge claimed variation of 1880 to 2810 years … you sure you don’t want to widen the boundaries out even more?
4. NONE of your claimed ~2400 year cycles are actually between 2150 and 2650 years in length. Not one. The nearest is 2110 years, next nearest is 2660 years. On the other hand, MOST of the sunspot cycles are around 11 years, and a number of them are within a few percent of 11 years.
5. We have observational data on about 28 sunspot cycles. We have observational data long enough to show the existence (or non-existence) of about 20 of your 2400-year cycles … but you’ve THROWN AWAY THE DATA ON ALL BUT FOUR OF THEM. You are basing your claim of a 2400-year cycle on FOUR EXAMPLES which range from 1880 to 2810 years in length, none of which are within 200 years of a 2400 year cycle … do you really believe that this is scientifically defensible?
And that is why the 2400-year “cycle” cannot be compared to the ~ 11-year sunspot cycle.
Next, I don’t understand how a claim about some wildly irregular ∆14C cycle helps us even if it is true. I mean, if every time a cycle between 1800 years and 2900 years shows up you claim it is part of some hypothetical 2400-year solar cycle, what have we gained? It is USELESS in predicting future solar changes, so what good would your claim do us even if it were true?
Next, you have not shown any observational connection between the ∆14C data and the sun. You claim that the largest cycle you discuss in the record (7000 years) is NOT from the sun, but instead is from processes internal to the carbon cycle … but if that is the case, surely the smaller cycles (2400 years, 1000 years) could easily also be from the same cause. You have not provided any data to show that the 7000-year cycle is from “changes within the carbon system itself” as you claim, but that the others are from the sun.
Finally, we have very little solid solar data for the “Maunder Minimum”. While some people claim that there were no sunspots during that time, others strongly dispute that claim, e.g. (emphasis mine)
As a result, I fear that basing some of your claims on the Maunder Minimum just guarantees that they are not falsifiable … and thus they are not science.
My best regards to you,
w.
PS—Your uncited graph claiming to show solar cycle length goes back to before 1700, where we have no reliable sunspot records … I was wondering how they did that.
Good reply. Thanks again.
Willis,
Thank you for such an extensive answer.
It’s funny that you would say that, because the cycle was described as soon as the data became available in 1971, and is described by Damon and Sonett as the strongest feature in the ∆14C record:
III. THE ~ 2300 YR PERIOD
Aside from the aforementioned long (secular?) variation, the strongest feature in the ∆14C record is the long period of ~ 2300 yr. This component of the ∆14C spectrum, in addition to the 208 yr period, was first reported by Houtermans (1971). Its source is enigmatic but probably not attributable to the geomagnetic dipole field, for no periodic geomagnetic dipole field change of the required amplitude has been detected.
Damon and Sonnet 1991, pg. 366
My bold. Damon, P. E., & Sonett, C. P. (1991). Solar and terrestrial components of the atmospheric C-14 variation spectrum. In The Sun in Time (Vol. 1, pp. 360-388).
That is a truly remarkable disconnect between what you say and what is published.
Here you fail to distinguish between 14C data and solar activity data. To convert measured 14C data into estimated solar activity data, usually two different box models are required, one for the biosphere and another one for the oceans. As I have already stated, the final error becomes unacceptable when the data are older than 12,000 yr BP. It is panel b in this figure:
http://i.imgur.com/tIadNGH.png
http://www.clim-past.net/9/1879/2013/cp-9-1879-2013.pdf
I have already tried to explain this several times. Solar activity cannot be reconstructed with any degree of reliability for older than 12,000 years BP. Prior to that we can talk about 14C but not solar activity.
That is not correct. The minima in the cycle have been identified to ± 200 years at the following dates:
B1. 0.4 kyr BP. Little Ice Age (LIA)
B2. 2.8 kyr BP. Sub-Boreal/Sub-Atlantic Minimum
B3. 5.2 kyr BP. Mid-Holocene Transition. Ötzi buried in ice. Start of Neoglacial period
B4. 7.7 kyr BP. Boreal/Atlantic transition and precipitation change
B5. 10.3 kyr BP. Early Holocene Boreal Oscillation
B6. 12.8 kyr BP. Younger Dryas cooling onset
The variability is therefore also about ± 10%
Err, again not correct.
B1-B2: 2400 yr
B2-B3: 2400 yr
B3-B4: 2500 yr
B4-B5: 2600 yr
B5-B6: 2500 yr
And yet again this is not correct. All cycles are around 2450 ± 200 years. All of the minima in the cycle coincide with grand solar minima. All of them coincide with significant climate worsening periods of millennial scale as judged by multiple proxies. This is not only scientifically defensible. It has been scientifically defended multiple times and it is widely accepted within the paleoclimatology community.
The cycle is a lot less irregular than you purport, but yes, the next instance of a minimum in the ~ 2400 year solar Bray cycle is scheduled for around 4000 AD, in 2,000 years. In that sense it is useless for the next 30 generations. However it helps us understand the climate of the past, the Little Ice Age, it helps calm fears that a new LIA is in the making, and it helps us understand what rules the climate on the millennial scale. I would say that it is more useful than studying the glacial cycle, for example.
I did.
http://i.imgur.com/kyi7mn0.png
I have not made such claim. I have no idea about that 7000 year periodicity.
For the ~ 2400 yr cycle we are talking about grand solar minima, as the lows in the cycle are the highest 14C production periods of all. If 14C data represent solar activity, the 2400 yr cycle is real. As I said, the robustness of this feature in the data is what made it be discovered from day one.
For the matter discussed it doesn’t matter if the sunspots were depressed or absent. Solar cycles appear to interfere with each other, so there are times when the cycle studied becomes less conspicuous or even undetectable before reappearing later. To you that means they are not cycles. To me it means that we have to study more to understand what causes them and makes them behave in that way. The 11 year cycle is not the only one that does that. The ~ 208 yr de Vries cycle also disappears at regular intervals, and the ~ 1000 yr cycle was also inconspicuous between 5000 and 3000 yr BP
That graph is from NASA, at this page:
http://www.nasa.gov/mission_pages/sunearth/news/solarcycle-primer.html
with this link:
“http://www.nasa.gov/images/content/599325main_ssn_yearly.jpg”
I really cannot attest if NASA is a trusted source or they make up their data.
Best Regards.
Good comment. Thx. But what now with the DO events @ 1470 yr?
If the question is for me, the D-O cycle is clearly not of solar origin, despite several articles that claim so. I am writing an article about that cycle during the last glacial period and will send it to Judith Curry in about a month or so to see if she likes it for her blog. The discussion about the existence of the D-O cycle during the Holocene is left for a future article, but I believe there is some evidence that supports a Holocene continuation of the D-O cycle.
D/O in the Holocene, aka Bond Cycles, show the same frequency but typically only about 1/10 the amplitude (after the 8.2 ka cold snap), due to relative warmth and equanimity of interglacials.
That’s a common misunderstanding, but it is not what the evidence shows. The Bond cycle doesn’t really exist. As Gerard Bond himself showed, the ice-rafted debris is capturing mostly solar variability coming from the ~ 1000 yr Eddy and ~ 2400 yr Bray cycles. This figure is from Bond’s 2001 article with my red lines indicating the Eddy cycle:
http://www.euanmearns.com/wp-content/uploads/2016/05/Figure-3.png
Figure 3. Correspondence of the 12,000 years smoothed and detrended record of 14C with the averaged stacked ice rafted debris in North Atlantic sediments that is a proxy for iceberg discharges. Both series have peaks corresponding to the ~ 1000-year periodicity known as the Eddy cycle, most prominently during the first half of the Holocene, when this periodicity was dominant. Source: Bond et al., 2001.
http://euanmearns.com/periodicities-in-solar-variability-and-climate-change-a-simple-model/
Javier,
Cold and warm events in the Holocene and previous interglacials, IMO, clearly show the same periodicity. Heinrich events are different, perhaps limited to a glacial world, but Bond and D/O cycles look to me the same, ie not requiring vast NH continental ice sheets for their formation, but simply oceanic circulation.
IMO the oceanic cycles have solar cycles behind them.
I could be wrong, of course.
Chimp,
I’ve got news for you. Gerard Bond cheated with the periodicity of the Bond events.
http://i1039.photobucket.com/albums/a475/Knownuthing/Figure%2036_zpsdazvxzwc.png
By naming the first event zero, and skipping one event between 4 and 5 he got the number down to 8, so 12000 yr / 8 = 1500 yr periodicity.
In reality the number of peaks is (at least) 10 as you can clearly see, so the periodicity is 12000 yr / 10 = 1200 yr.
For the first half of the Holocene, Bond events are taking place at ~ 1000 year intervals, as the filter clearly shows. For the second part of the Holocene the picture is more complex; there are double peaks that suggest different factors affecting iceberg discharge. A periodicity of 1500 years can be defended for this part only if we consider double peaks as belonging to the same event. I personally doubt that assumption is prudent. It places the peak of the Bond event in a valley in ice rafting.
The evidence supports that Bond events are picking up every significant cooling during the Holocene, whatever its origin, either solar, oceanic, or other. Therefore it is not a cycle but a cooling record. I do not believe in a Bond cycle, and therefore there cannot be a correspondence with the D-O cycle.
IMO a change in the periodicity of the cycles after the North American and Eurasian ice sheets melted doesn’t mean that there aren’t still cycles, analogous to those of the glacial interval (D/O).
But in any case, I don’t see how anyone can deny the reality of climatic cycles. The Pleistocene glaciations occurred at ~40K year intervals, then ~100K. Ice houses, during which ice ages can occur if the continents are in amenable positions, happen regularly at ~150 million year periods.
Chimp,
So, it doesn’t matter that the nature of the cycle is different (D-O warming, Bond cooling), and the periodicity is different (D-O 1470 years, Bond 1000 years); they are still the same cycle. You are not asking too much from the evidence before identifying a cycle.
Climate cycles are a reality, but our interpretation of the cycles is often incorrect. The 100 kyr cycle does not exist either. We have had 11 interglacials in the past 800,000 years:
https://judithcurry.com/2016/10/24/nature-unbound-i-the-glacial-cycle/
Javier,
Each cycle has a warming and cooling element, a peak and a trough.
Two interglacials in the past 800 ky were double peaks, which really ought to count as one. They weren’t separated by glacials. Interglacials can last from less than 10,000 years to more than 30 ky.
Chimp,
No. They should not be counted as one. They are indistinguishable from interglacials in the Early Pleistocene. Should we count all those as pairs then? As in the case of the Bond periodicity, if you make the data fit your model instead of the opposite way, you run into the problem that Feynman described:
“The first principle is that you must not fool yourself – and you are the easiest person to fool.”
Nope.
To count as different, they need to have a glacial in between them, whether of ~40K or ~100K years. Just a millennial dip of Dryas duration doesn’t count.
Just because you say so. The temperature gets colder the longer the glacial period. Two interglacials that are only separated by 41 kyr do not show a very cold glacial between them. I have already demonstrated that MIS 7c, MIS 15a, and MIS 15c, which you count as half interglacials, have the typical orbital configuration, temperature profile, and duration of any other interglacial.
http://i.imgur.com/eGcSIBV.png
This is how mistakes are made in science. By taking unwarranted assumptions not supported by the data.
Javier,
It’s not my say so. By definition, an interglacial occurs in between glacials, i.e. intervals of ~41K or ~100K years.
So, you’d have to change the definition of “interglacial” to claim more than one interglacial between two glacials.
Chimp,
Then your problem is with the definition of interglacial. Neither by temperature nor by duration can the period between MIS 15a and MIS 15c be rejected as a glacial period without also rejecting most glacial periods prior to the Mid-Pleistocene Transition:
http://i.imgur.com/H3WbQhy.png
So I hope you see the problem. In the middle of the data series you change the criteria for defining a glacial period and based on that you claim a different periodicity for the last part of the data series.
In reality, what the data says is that besides a progressive cooling, the longer the glacial period the colder it gets. Pretty obvious. If you define interglacials by very cold glacial maxima you reject perfectly good interglacials, accept that a huge cooling can take place in the middle of an interglacial, and cherry pick the glacial periods over the Quaternary Ice Age. All that self-deception because you want to match a 100 kyr periodicity that comes out of a frequency analysis with a hypothesis that eccentricity is in charge by adjusting the data to the model, instead of going the opposite way.
The data clearly says that obliquity, and not eccentricity, is in charge, that the 100-kyr periodicity is an artifact, and that we get interglacials usually every 82 kyr in the Late Pleistocene, but sometimes at 41 kyr or 123 kyr.
http://i.imgur.com/sjVlDo8.png
Ignore it at your peril.
Javier,
I’m not wedded to a 100K cycle. I’m OK with 41K and multiples thereof, averaging out to 100K. But that’s not the point.
Cold snaps during interglacials are more common than not. It has to do with the rate of deglaciation. Dryas-type events can occur later in an interglacial than they did in the Holocene. MIS 15 had a single interglacial, with a deeper than usual cold snap between its peaks. Look at the other interglacials. You’ll find some with dips almost as profound.
The cold snaps aren’t glacial intervals unless they last more than 40,000 years and allow the build up of NH ice sheets, so the two peaks aren’t two different interglacials, but the same one.
Chimp,
In the case of MIS 15, we are talking about a cooling that lasted 25,000 years (way longer than the average interglacial) and was a drop of −5°C in EPICA, which is the usual cooling during a glaciation. I think you are mistaking a glaciation for a cold snap.
Again you are making assumptions that bias your results. If the obliquity cycle lasts 41 kyr, you should not expect a glacial period to last much more than 25,000 years if it spans only one cycle. This is the way it has happened for millions of years. Now you decide that a glaciation is not a glaciation unless it lasts over 40,000 years. Don’t you see that this unwarranted assumption makes it impossible for you to find the truth about the glacial cycle?
Here you have a comparison between the average interglacial and MIS 15c, the first of the two interglacials that you defend are only one.
http://i.imgur.com/N8miip5.png
As you can see, MIS 15c is not significantly different in any way from the average interglacial, and the glaciation that comes afterwards cannot be described in any credible way as a cold snap similar to the Younger Dryas. MIS 15a comes about 5000 years after the graph ends, and it is also not significantly different from the average interglacial.
MIS 15c and MIS 15a are two different interglacials separated by a short glacial period. That is the way it used to be in the Early Pleistocene, and we still get some of those in the Late Pleistocene when eccentricity is very high. The 100 kyr cycle is an illusion maintained by making assumptions that are unsustainable in the light of the Early Pleistocene data.
Thank you for the essay. Well done, and focused, as usual.
Actually Willis, an interesting experiment would be to build not a random series, but one from the orbital periods of the magnetic planets, and start generating sunspot records, and see if that’s what you get.
The problem is that there are literally hundreds and hundreds of periods involved in the sun, moon and planets. All of them travel in a complex motion, are tilted on their axes, have eccentricity in their orbits, and interact gravitationally with all of the others.
So just which “orbital period” are you suggesting we use … and how will you pick it in advance? Because if you just start looking at random, the Bonferroni correction will soon eat your results for lunch …
w.
Willis: You can silence the criticisms of your construction of sine waves by simply taking the original sample and shuffling the data points to get random samples. By construction, this approach removes any periodicity in the series – but you will still “detect” periodicity using periodograms. This approach has an added bonus of preserving any of the statistical artifacts that might be created by the underlying sampling distribution. For example, heavy tails can falsely show up as significant short-period cycles.
Great work! Look at Voit, “The Statistical Mechanics of Financial Markets” for even more of these types of tricks!
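To make the shuffling suggestion concrete, here is a minimal sketch in Python, assuming possibly uneven sampling and using scipy’s Lomb-Scargle routine (which expects angular frequencies). The maximum periodogram power of each shuffle builds a null distribution against which observed peaks can be judged:

```python
# Permutation (shuffle) significance test for periodogram peaks. Shuffling
# destroys temporal order, hence any periodicity, by construction, while
# preserving the sample's distributional quirks (e.g., heavy tails).
import numpy as np
from scipy.signal import lombscargle

def shuffle_threshold(t, y, freqs, n_shuffles=1000, quantile=0.95, seed=0):
    """Return the given quantile of max Lomb-Scargle power over shuffles."""
    rng = np.random.default_rng(seed)
    y = y - y.mean()
    maxima = np.empty(n_shuffles)
    for i in range(n_shuffles):
        maxima[i] = lombscargle(t, rng.permutation(y), freqs).max()
    return np.quantile(maxima, quantile)
```

Peaks in `lombscargle(t, y - y.mean(), freqs)` that exceed this threshold are unlikely to arise from the same values in a random order.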
W said he used random phase, so I assume he used a random number fn to do that. If he just used the same fn to get the data he would have properly random data from a tested and verified algo. The danger with homespun methods is that you need to test them before relying on them, as Mickey Mann never ceases to demonstrate.
I don’t understand this comment. The “data” is just the sum of the sine waves with random phases. How could I “use the same fn to get the data”? What does that mean?
w.
Sorry if that was not clear. I meant in generating your random data. You used a random fn for the phase in simulating some random data. Why not just use the random fn to create the random data? It seemed a rather odd way to create “random” data whilst using a random number generator.
It seems that you were expecting white noise and saw the stronger long periods as anomalous “false” cycles that people were often misreading as being real in this kind of data.
Your figure 5 shows your 95% level, which looks a lot like 1/x plotted backwards, i.e. it is “red noise”, not white.
This makes sense with the way you loaded the data by using equal period intervals, as I explained.
I’m sure you have created random walk / AR1 / red noise test data before, so I was saying it would be better to use the tested algo for random numbers and create it directly, rather than the novel method where you apparently got an unexpected result.
Random number generators are usually far from truly random and their output is usually distributed uniformly. By utilizing that output only for the phase of the sinusoids, one takes the first step toward generating GAUSSIAN random data. The next step, alas not taken by Willis here, is to make the periods of the sinusoids incommensurable, thereby avoiding periodic repetitions. There’s a vast literature available on Gaussian time series.
Greg November 4, 2016 at 11:03 am
I used a uniform random number function to generate a value between -pi and +pi to randomize the phase.
However, my results are a million miles from being uniform random numbers … so why would I want to use that function to generate data?
w.
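For reference, the construction Willis describes is only a few lines in Python (a sketch, not his actual code, assuming equal amplitudes at every integer period from 2 to 1000 years):

```python
# Sum of equal-amplitude sines at every integer period, each with a uniform
# random phase in (-pi, pi), evaluated on an annual time grid.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(3100)                              # years
periods = np.arange(2, 1001)
phases = rng.uniform(-np.pi, np.pi, periods.size)
series = np.sum(
    [np.sin(2 * np.pi * t / p + ph) for p, ph in zip(periods, phases)],
    axis=0)
# Equal spacing in period crowds components into the low-frequency end,
# which is why the result behaves like red noise rather than a flat hum.
```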
Why, well originally you said you wanted random numbers, so that would have been the obvious choice. Now it seems that you prefer a red-noise model, though you never explicitly stated that nor why you think it is the best null hypothesis for SSN.
Not saying that is necessarily wrong, but if that is your intention you should say that is your model, and why you chose it to decide what is significant for SSN.
If you want red noise you can just integrate the white noise from the random number generator. I suggested a cumulative integral but you seem to have missed it, which is why you are still asking why I think you could have used that fn to get the test data.
Instead of a thousand sine calculations and a thousand additions for each point it would require one sum!
Since you did not realise that you were making red noise, you would not have realised this; otherwise I’m sure you would have gone straight for it.
You are making some very strident claims about what is/isn’t significant without saying why you have chosen a particular noise distribution model, and without apparently realising which noise distribution you were using.
Now it may be true that these longer periods are not significant compared to a random walk, but so far you do not present any reason for assuming SSN is a random walk, other than remarking that many “other” natural [terrestrial] data look a bit like that too.
There may be some mileage in this but so far you have failed to justify the basis for your significance test.
Best Greg.
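For what it’s worth, Greg’s one-sum shortcut looks like this in Python (a sketch, assuming Gaussian white noise as input):

```python
# Red noise the cheap way: a random walk is cumulatively summed white noise,
# with no sine evaluations at all. Its power spectrum falls off roughly as
# 1/f^2, which is what the equal-period-spacing sine construction also gives.
import numpy as np

rng = np.random.default_rng(42)
white = rng.standard_normal(3100)
red = np.cumsum(white)          # one running sum per point
```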
“trick” is the meaningful word used here.
G
Who is it that you think you are tricking this time, George?
w.
So what. Baffle ’em, razzle-dazzle ’em, with snazzy pictures that answer the wrong questions if there even are related or meaningful questions.
I have no clue what you’re talking about. Let me ask you again to QUOTE WHAT YOU DISAGREE WITH, because at present you are just screaming at the wind, and that is less than useless.
w.
Yes, many natural processes like temperature time series are strongly auto-correlated due to the thermal inertia of the system. So to conclude that your analysis shows that the longer periods are not significant in SSN you need to show that the mechanism producing them has similar properties, or say that IF whatever produces sunspots is a random walk, the longer “periods” may be illusions.
Perhaps a simpler method would be to do the FT on d/dt(SSN) and compare to white noise.
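A minimal sketch of that differencing idea in Python, with `ssn_daily.txt` as a hypothetical stand-in for the daily sunspot file:

```python
# If SSN were a random walk, its first difference would be white, so the
# spectrum of diff(ssn) can be compared against a flat white-noise level.
import numpy as np
from scipy.signal import periodogram

ssn = np.loadtxt("ssn_daily.txt")        # hypothetical daily sunspot counts
freqs, power = periodogram(np.diff(ssn))
# White noise scatters around a constant level; genuine periodicities, such
# as the ~27-day rotation signal discussed below, should stand clear of it.
```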
If you look at d/dt, you will find that the peak solar activity is around 31 days. It does not look all that white either.
beg your pardon, there are big peaks around 13.5 days which is because of our one-eyed observation point of the predominant circa 27d equatorial rotation, though there is a peak at 30.25 days.
Odd indeed that this is reminiscent of the lunar periods. Maybe the same thing is causing both.
power spectrum of daily d/dt(SSN) for continuous daily record since 1849. Period in days.
This analysis leaves me with one over-riding question — how many people sharpen their shovels? I can honestly say that I have never sharpened a shovel, nor have I ever considered it something that needs to be done. I must not do enough shoveling.
This has raised many questions for me. Do all shovels need to be sharpened? What about snow shovels? I live in Florida now, but my snow shovels used to be plastic. If you have a painted shovel head, do you need to repaint it after you sharpen it?
How do you tell that your shovel needs to be sharpened? Do you measure the radius of the blade or do you worry more about the number of nicks? Do you use a grinding stone or a file?
And, of course, the obvious question — How much is your shovel blade wear affected by climate change?
I know what you’re thinking — this is the internet. If I wanted to find all of these answers, I could (except for the climate change question. I couldn’t find that one.)
https://www.sharpeningsupplies.com/Sharpening-a-Shovel-or-Spade-W90.aspx
You typically sharpen shovels when you want to use them as a weapon 😉
lorcanbonda November 4, 2016 at 10:13 am
Good questions all. As to how many people sharpen their shovels, I’d include those who earn a wage using a shovel … including myself in my younger years. Ten hours a day, pick and shovel, five days a week … I got a raise that year, to the princely rate of fifty cents an hour. But then I was fifteen, and overjoyed to have a job …
There’s shoveling, and there’s shoveling.
Shovels get dull like any other tool. But it depends on your usage. If you are using a “materials handling” shovel to pick loose sand up off of a concrete floor, the action will keep the flat edge of the shovel sharp by constantly whetting it against the concrete.
But if you do a lot of digging where there are roots in rocky soil, pretty soon you’ll find that your shovel won’t cut the roots …
These are a type of materials handling shovel, and if you are shoveling a concrete driveway, it will keep itself sharp … but snow shovels don’t generally need to be much sharper than an ice cream spoon unless you’re using it to chisel ice off the walkway.
If you need to sharpen it, it is likely being used in an environment where the paint will come off fairly rapidly and you’ll be back where you started. The marine environment in Florida is death on tools, though, so you might want to paint it. However, a bit of rust won’t bother most shovels. You might consider just wiping the edge with an oily rag before putting it away if it is an issue.
Look at the edge. You’re not looking for a razor edge, typically more like a 45° bevel on each side. If there are bent-over sections of the edge, I flatten them out using a smooth-faced hammer against an anvil … or a hammer against a rock.
I just beat out any bent sections, grind or file the front and back edge nicks and all, and go back to shoveling. Nicks that are sharpened cut just fine.
Back in the day I always used a file. These days I use my Black and Decker battery-powered angle grinder …
My best to you,
w.
Regarding the upper graph set of Figure 2, the CEEMD of sunspots 1700-2015: I see noticeable correlation between adjacent components, generally occurring intermittently but enough to cause correlation throughout the period. So, I think the CEEMD resolved the sunspot record into an excessive number of components by failing to give consideration (or sufficient consideration) to C3 (the ~11-year cycle) being modulated by longer period ones.
I think a more accurate representation would be:
* Removing C2, and distributing its content to add to C1 and C3, as appropriate for frequency
* Removing C4, and distributing its content to C3, C5 and C6, as appropriate for frequency
* After that, consolidating C5 and C6, and possibly also shifting some of the lower frequency content of C6 to C7
This changes the number of components from 7 to 4, and I think those 4 components would show a longer-period one standing out better as an identifiable cycle than the 7 shown. The four components would then be:
* Short term noise,
* The ~11-year cycle whose amplitude and frequency is (as already shown) not constant,
* A ~50-90 year cycle with similarly non-constant amplitude and frequency no more unsteady than shown in C4-C7 in the upper graph set of Figure 2, and standing out better than C4-C6 in that graph set, which would be the Gleissberg cycle, and
* a longer term variation of questionable statistical significance if the duration of 1700-2015 is considered, but likely correlating well with the Dalton, Maunder, Spörer and Wolf minima and the minimum before the Oort Minimum, with the Oort minimum being a couple decades late but otherwise correlating well. That would be the Suess cycle, with period averaging about 200 years.
One more thing: The Moeberg paleoclimate record shows a not-quite-constant bounciness with a period that is somewhat unsteady but mostly around 50-80 years.
It appears to me that the Dalton minimum and the ~1910 minimum were minima of the Gleissberg cycle, while the minimum of the Suess cycle was between them. The upcoming solar minimum appears to me as the Gleissberg and Suess cycles bottoming out nearly simultaneously – it could reach a Maunder-like depth, but for a much shorter amount of time than the Maunder Minimum. The Maunder Minimum appears to me as the bottom of a ~1,000 year cycle that is not sinusoidal, but distorted towards a sawtooth wave by taking less time to rise and more time to fall than a sinewave – which is also the case with the ~11-year cycle.
A test that I propose for my hypothesis: Adding the next 7 decades to the 1700-onward sunspot data, and looking for increased support of my proposed 4 components including the Gleissberg cycle.
There is quite a lot of overlap, since the bandpass filters never have a clean cut-off at either end. Some intermediate frequencies will be split between bands. That could give the impression they are not significant: for example, a periodicity which sits near a border is attenuated because only half appears on each side. Like all tools, it needs some appreciation of what it is doing and how to read it.
“The Moeberg paleoclimate record shows a not-quite-constant bounciness with a period that is somewhat unsteady but mostly around 50-80 years.”–Don Klipstein
Moe Berg’s records show a lifetime average of .243, though he batted .311 with Reading. He knew 12 languages and was once sent to assassinate a German nuclear scientist at a physics conference. After hearing the man speak, Berg slipped away without drawing his gun, realizing the man’s approach wouldn’t soon result in a nuclear bomb.
But you’re probably talking about Moberg. He can hit, but he can’t run.
http://sabr.org/bioproj/person/e1e65b3b
Donald L. Klipstein November 4, 2016 at 10:14 am
Donald, thanks for your comment, but you seem to misunderstand the “empirical” part of the CEEMD analysis. The bands were not chosen by you, me, or anyone else. They are determined empirically, based on the dataset itself. I’m sorry you don’t like what they say, but there it is, and you don’t get to rearrange it to fit your fancy.
Next, while it is fine to wave your hands at purported cycles like a “Suess Cycle”, and a “Gleissberg Cycle”, and a “Hale cycle”, and a “deVries Cycle”, and a “Hallstatt cycle”, my point is that the evidence at hand does not support such constructs.
My best to you,
w.
The 1700-2015 data alone do not support the Suess cycle to an extent of statistical significance, and I think the reason is that 1700-2015 covers less than two periods of it.

– which also shows the Suess cycle. I also note here that the Figure 10 periodogram has a spike at a period close to the second harmonic (~500 years) of the ~1,000 year cycle that shows up well there. Other spikes show up as possibly the 3rd, 4th and 5th harmonics. However, the existence of these harmonics at slightly-off frequencies (slower for the 2nd, 3rd and 4th harmonics) is suspicious, though possibly explainable if they are stronger when they are running slower and weaker when running faster. Notably also, the ~11-year cycle has its amplitude unsteadiness and its period unsteadiness correlating with each other (slower when weakening).
Meanwhile, I think the Figure 9 CEEMD graph set supports the existence of the Suess cycle (C6), and the Gleissberg cycle (C5, although something of that frequency briefly shows up in C4). The amplitude of these cycles is unsteady – like that of the ~11-year cycle. The frequency of these is non-constant, but the ~11-year cycle also does that a little. The ~1,000 year cycle shows well, but in C8 before the MWP and in C7 during and after. C7 may consist mostly of lower harmonics (especially the second harmonic) of the ~1,000-year cycle – which is non-sinusoidal, having a faster rise and slower fall than a sinewave (like the ~11-year cycle), according to
I suspect CEEMD could be improved upon by comparing the phasing of one cycle with that of the next lower frequency or two, to detect harmonics and assign them to the lower-frequency cycles that generated them. And notably, I think the ~11-year cycle has a very unsteady amplitude – so I think longer-period solar cycles can do the same and still exist. As I said before, I think CEEMD resolves more components than actually exist by failing to detect relationships between one resolved component and another. (When I said that previously, I mentioned a component being modulated by a longer period one – which I still think is true – although now I want to add harmonic relationships to this.)
As for the deVries cycle: that is another name for the Suess cycle. As for the Hallstatt cycle: I did not claim it exists.
As for the Hale cycle: I did not claim before now in this thread that it exists, although I believe it may have some physical effects on/over some regions of Earth. The Hale cycle is the sun’s global magnetic field reversing polarity once per full cycle of the ~11-year cycle so a full cycle of the sun’s global magnetic field polarity is ~22 years, and a ~22-year cycle of physical effects on Earth would be from interaction of the sun’s periodically reversing magnetic field with Earth’s (comparatively) more-constant magnetic field.
Sharpening shovels probably isn’t all that important. Unless you have an axe to grind.
If you think sharpening shovels is not important, it’s clear you haven’t done enough shoveling …
w.
Willis: These results were surprising to me for several reasons. The first is their irregular, jagged, spiky nature. I’d figured that because these are the sum of smooth sine waves, the result would be at least smoothish as well … but not so at all.
You may be familiar with the Weierstrass Function (http://mathworld.wolfram.com/WeierstrassFunction.html) that is continuous everywhere and differentiable nowhere. Historically, this was an important moment in mathematical understanding because it showed that continuity in no way implies differentiability, not even in a weak sense.
I hear patterns when the rain drops into the barrel — regardless of whether or not they are really there. At the Cedar River Watershed Education Center (east of Seattle) drums have been placed under a dozen drip points, so “the patterns” are fast and fantastic. Doing a DCDFT here would not be appreciated.
~~~~~~~~~~~~~~~
At 83 comments and counting, I hope you don’t mind if I comment on:
“There are a number of lovely folks in this world who know how to use a shovel, but who have never sharpened a shovel.
We use USFS** sharpened shovels/scrapers as volunteers — building and fixing trails in the Cascades. [**Not a garden-store type shovel.] And, yes, we keep them sharp.
John, a sharp shovel is a wonderful tool …
w.
Russian SpetsNaz offensive shovel:
http://www.incredible-adventures.com/graphics/russian_throw_shovel.jpg
I wonder what you would show if you only used “new” sunspot count. Historically each time a spot rotated into view, it was counted as a “new” sunspot. I guess it would be difficult in the old days to know if the spot was “new” or was just long lasting.
And I do have to sharpen shovels because our soil is sand, stones and volcanic ash.
So the Maunder Minimum and Dalton Minimum are either long lived urban legends or one-off events.
Willis,
Very nice work and presentation. I am wondering if you have seen https://www.youtube.com/watch?v=l-E5y9piHNU – “All Climate Change is Natural” – Professor Carl-Otto Weiss. Given your interest and analysis, I think you might find his work interesting and complementary to your own. – Barbara
Thanks for the link, Barbara, I’ll take a look.
w.
I wonder if the smarter dinosaurs concluded this about major impactors during the late Cretaceous.
Thanks, Willis, very interesting and perception-changing. Regarding the conversation about ∆14C: All analyses are based on time. I would like to see an analysis based on sunspot cycle (SSC) instead of time. By this, I mean that the peak say of cycle 1 would be 1, of cycle 2 would be 2, etc, the ‘middle’ of the trough between cycles 1 and 2 would be 1.5, etc. Of course there would be some arbitrary decisions and approximations to make, but the end result would be (I think) to make much clearer the relationship (or non-relationship) between ∆14C and SSC. The same base used for other values – temperature for example – would also give a clearer picture of the relationship (or non-relationship) to SSC. In particular, a Fourier analysis of SSC on this base would show a much stronger cycle at frequency 1 than the time-based Fourier analysis shows at frequency ~11 (and that’s the point). I suspect that analysing various measures using this base might also throw up some unexpected ‘cycles’ which might turn out to be interesting/useful.
[Yes, I could do it myself and not just ask you to do it, but I have less skill, fewer tools, less data, and, being in the middle of moving house, less time. The value that you would add would be very high.]
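Mike’s re-basing is straightforward to sketch in Python; the anchor dates below are placeholders, not real cycle peaks and troughs:

```python
# Map calendar time onto a sunspot-cycle coordinate: cycle peaks at integers,
# intervening troughs at half-integers, linear interpolation in between.
import numpy as np

anchor_times = np.array([1755.2, 1761.5, 1766.5, 1769.7])   # placeholders
anchor_cycle = np.array([1.0,    1.5,    2.0,    2.5])      # peak, trough, ...

def to_cycle_coordinate(t):
    return np.interp(t, anchor_times, anchor_cycle)
```

Any series resampled onto this coordinate has the ~11-year cycle mapped to exactly one unit, so its Fourier peak lands at frequency 1 by construction – which is the point of the exercise.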
@Javier
perhaps you can help me
where did the 2400 yr cycle come from?
last I checked it was clear which of the longer solar cycles [longer than the 11-22 yr sc] were relevant …
http://www.nonlin-processes-geophys.net/17/585/2010/npg-17-585-2010.html
i.e. do you agree that the D-O cycle of 1470 years is not relevant?
HenryP,
I don’t understand your question.
I don’t believe the evidence supports that the D-O cycle is of solar origin. But of course the D-O cycle is relevant. It is the most drastic, abundant, abrupt climate change in the geological record.
A little-appreciated relation in trigonometry is the sum/difference relation:
sin x ± sin y = 2 sin((x ± y)/2) · cos((x ∓ y)/2)
(as well as I can render it in plain text)
Implication: adding two sine waves of different frequencies produces a carrier wave at the mean of the two frequencies, modulated by an envelope at half the difference of the frequencies.
So, for example, a wave with a 10-year period added to a wave with an 11-year period produces an apparent wave with a period of about 10.5 years, whose amplitude beats with a period of 110 years.
When you create canonical ‘white noise’ as you have, every adjacent pair of components (periods one year apart, running from A years up to N years) contributes such a carrier-plus-envelope pair, with beat periods ranging from about A(A+1) years up to (N−1)N years – far longer than any component you put in.
The relative phases will determine whether the addition is constructive or destructive.
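A quick numerical check of that identity and the 10/11-year example (a sketch in Python):

```python
# Verify sin A + sin B = 2 sin((A+B)/2) cos((A-B)/2) for 10- and 11-year
# periods: a ~10.5-year carrier whose |cos| envelope beats every 110 years.
import numpy as np

t = np.linspace(0, 220, 10000)
f1, f2 = 1 / 10, 1 / 11
direct  = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
product = 2 * np.sin(2 * np.pi * (f1 + f2) / 2 * t) \
            * np.cos(2 * np.pi * (f1 - f2) / 2 * t)
assert np.allclose(direct, product)   # identical to rounding error
```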
Reminds me of the search for meaning in brain waves, a notoriously random thing. Evoked signals, which are often below the amplitude of the random brainwave background, can be calculated through a process in which the random peaks and troughs of the electrical potentials picked up on the surface of the head (after scratching it up a bit, buttering it with jelly, then squishing an electrode cup into the jelly) mathematically cancel out. Over time you get a straightish line out of which the evoked signals (have the subject listen to something, like a series of white-noise or frequency-centered pings) rise, all done in real time.
The 2402 year cycle is known as the Charvatova cycle (Ivanka Charvatova). She found that there was disorder in the SSB orbit of the sun. The sun carves out a three-leaf-clover shape every ~59.5779 years (tri-synodic). This trefoil configuration gets disordered in ~2402 year cycles.
Also:
The sun returns to the same position on the ecliptic every ~2649.63 years.
This is a 360 degree rotation of the sun’s outwardly directed acceleration.
The earth is caught up between a large moon and an accelerating sun.
So … we get a large tug from the sun every ~2649.63 years.
Our axial precession cycle is a simple beat created by these two cycles.
2649.63 x 2402.616 / ( 2649.63 – 2402.616 ) = 25772 years
Please visit Weathercycles.wordpress, “Fibonacci and climate”.
Conclusions? Well, my conclusion is that while it is possible that the ~88-year “Gleissberg cycle” in the sunspots, and the ~1,000-year cycle and the ~2,400-year cycle in the ∆14C data, may be real, solid, and persistent, I find no support for those claims in the data that we have at hand.
So Willis says, and for once I tend to agree with him.
I believe you are referring to this: http://www.informath.org/Contest1000.htm
An amusing anecdote about this challenge, and a warning about methods:
I tried to solve this challenge, and one of the things I did was look for a weakness in how the challenger generated his artificial data, by attempting to recreate his method of creating it.
I used the standard off-the-shelf Octave red/pink noise generators, which involve generating Gaussian noise, performing a DFT (via FFT), shaping the noise with a settable beta, and then doing the inverse DFT back to the time domain. I did this to see if there was some pattern I could discern that is somehow different from naturally generated signals.
It turns out that with this method, the phase-versus-frequency graph looks completely different from that of a natural signal. In nature the phase looks contiguous and forms a spiral when you graph imaginary versus real from the DFT. With the above method, imaginary versus real looks pretty random.
I got all excited that I’d solved the challenge because it was 3am and I’d forgotten I was working with my artificial data, not the challenger’s. Doh! I checked and the challenger’s data had the nice spiral of imaginary versus real.
It turns out if I generate the random sequences in the time domain using the standard AR equation I get the nice spiral shape, just like nature. I suspect that’s what the challenge author did.
So IMHO the DFT/iDFT method of producing noise, and possibly Willis’ method, will have subtle differences in the phase relationships of the signal. I don’t know how meaningful that is to Willis’ Monte Carlo method described above, but I have already suggested that the literature uses the AR method to generate the proper spectrum for confidence levels, so this anecdote is another reason to use the AR method: it behaves like real-world auto-correlation rather than a math model that emulates it in some fashion.
best regards,
Peter
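For anyone who wants to try the comparison Peter describes, here is a rough sketch in R rather than his Octave; every detail below (the AR coefficient, the crude beta = 1 spectral shaping) is assumed rather than taken from his actual code:

```r
set.seed(1)
n <- 1024

# Method 1: AR(1) "red" noise generated directly in the time domain.
ar1 <- as.numeric(arima.sim(model = list(ar = 0.9), n = n))

# Method 2: Gaussian white noise shaped in the frequency domain
# (amplitude ~ 1/sqrt(f), i.e. beta = 1), then inverse-transformed.
wn   <- rnorm(n)
spec <- fft(wn)
f    <- c(1, seq_len(n %/% 2), rev(seq_len(n %/% 2 - 1)))  # symmetric bins
pink <- Re(fft(spec / sqrt(f), inverse = TRUE)) / n

# Plotting a complex vector in R draws real versus imaginary parts,
# which is the phase-structure view the anecdote is about.
par(mfrow = c(1, 2))
plot(fft(ar1),  main = "AR(1), time domain",  asp = 1)
plot(fft(pink), main = "FFT-shaped noise",    asp = 1)
```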
Willis,
your calculations are convincing … and if someone is in doubt, it’s up to him to follow your calculations and fill in his own values, to prove that the periodicities of the 88-, 208-, 1000- and 2400-year cycles are NOT ARTIFACTS of deficiencies in the periodicity analysis … it is clear that those periodogram analyses themselves produce those LONG-PERIOD cycles through their inherent defects … I would appreciate a final word from McKitrick or another high-calibre statistician as a second opinion … this question is too important to leave open … in the meantime, I will give all points to Willis for having brought it up …
… By the way, the always-quoted 14C values are INPUT, measured on Earth within the troposphere; they are NOT measured OUTPUT variations of the Sun … all those people who dedicate themselves to the 14C input on the planet declare this Earth INPUT to be a solar OUTPUT, and hide that Earth orbital variations are the REAL cause of the 14C variations …
… Willis, you should look into a further matter: cycles with growing amplitudes and periodicities. There is one whose period grows by 6.95 years all along the Holocene. For the literature, take Part 1 of the Holocene cycle analysis at http://www.knowledgeminer.eu/climate-papers.html, the Climate Pattern Recognition Analysis, Part 1 … take Alley, R.B. 2000 GISP2 as the data series …
The growing cycle could be resolved, knowing the exact growth of 6.95 years and its commencement date. Regards, JS
For those of you who have taken delight in rubbishing my methods, advising me to use white noise and telling me that I’m doing it wrong, wrong, wrong, let me point out that someone actually followed my request.
Peter Sable, to his credit, suggested the “redfit” algorithm.
He pointed me to a Fortran program … and while I can still get by in Fortran, I went to see what I could find in R. I found a package called “dplR”, which implements the “redfit” algorithm in R. It returns the 99%, 95%, and 90% confidence levels. Here are those results:
[Figure: redfit periodogram of the ∆14C data showing the 90%, 95%, and 99% confidence levels]
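For readers who want to reproduce this, a minimal sketch of the call follows, assuming the dplR package’s redfit() (an R port of Schulz and Mudelsee’s REDFIT program) and a hypothetical data frame d14c with columns age and d14c; the argument and result names follow the dplR documentation as best I can tell, so check ?redfit before relying on them:

```r
library(dplR)

# 'd14c' is a hypothetical stand-in for the detrended Delta-14C series:
# column 'age' in years, column 'd14c' with the values.
res <- redfit(x = d14c$d14c, t = d14c$age, tType = "age",
              nsim = 1000,    # Monte Carlo AR(1) simulations
              mctest = TRUE)  # compute the Monte Carlo confidence levels

# Bias-corrected spectrum plotted against the red-noise confidence levels.
plot(res$freq, res$gxxc, type = "l", log = "y",
     xlab = "Frequency (1/yr)", ylab = "Spectral power")
lines(res$freq, res$ci90, col = "green")   # 90% level
lines(res$freq, res$ci95, col = "orange")  # 95% level
lines(res$freq, res$ci99, col = "red")     # 99% level

# The fitted AR(1) parameters (see Peter's question about tau below):
res$rho                                    # lag-1 autocorrelation
-abs(mean(diff(d14c$age))) / log(res$rho)  # tau = -dt / ln(rho)
```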
Please note that THESE ARE THE SAME AS MY RESULTS. Both sets of results show that the ~1000 year cycle is right at the 95% CI level, and also that the 2400-year cycle is far from significance …
From there we can go to the sunspot data … here is that result:
[Figure: redfit periodogram of the sunspot data with the same confidence levels]
Again, this gives exactly the same result as my analysis—the 11-year cycle is significant, the so-called “Gleissberg Cycle” is not.
So I hold by my claims that the “Gleissberg” solar cycle, as well as the 1,000-year and the 2400-year cycles in the ∆14C data, are entirely unremarkable and are not distinguishable from random fluctuations.
And for all of you good folks who have told me over and over in this thread that I’m just an uneducated fool who is doing it wrong and should have used equal frequency intervals …
Sorry … but this time, you are the ones who are wrong.
w.
Willis:
You will finally cause me to switch from Octave (Matlab) to R …
What did you get for tau* (the characteristic time scale of the AR1 process) for the 14C data?
I’m surprised it’s as big as the frequency-domain plot suggests it is.
best regards,
Peter
* it’s funny, I’ve seen the parameter for AR1 called alpha, beta, and tau in the literature. I bet there are more…
BTW this reminds me that AR1 modeling only works on stationary data. Did you detrend the data before running the analysis?
Yes.
w.
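The post doesn’t say how the detrending was done; a minimal sketch, assuming a simple linear detrend of the same hypothetical d14c frame used above:

```r
trend <- lm(d14c ~ age, data = d14c)  # straight-line fit against age
d14c$resid <- residuals(trend)        # stationary residuals for the analysis
```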
You are approaching Saint Svalgaard status.
Steven Mosher November 5, 2016 at 1:53 am
And you are approaching incoherency. I can’t even tell whether your comment is aimed at me or at Peter, and I can’t tell whether your comment is a compliment or an attack.
You really should cut back on the drunkblogging, Steven, it’s not doing your rep any good.
w.
Oh, sorry. I’ve often called Lief a Saint for putting up with idiots. Since like 2007.
Anyway.
You of course had the choice to take it as a compliment. Odd that you didn’t. Oh well, water off a duck’s back.
As for mind-altering substances: nope, never touch the stuff.
[you really should learn to spell Leif’s name correctly -mod]
It’s a running joke since 2007 that I misspell his name.
But since you are a mod, if it means that much to you (he didn’t care), you always have the option of editing it. In fact, it would take you less time than being pedantic.
Steven Mosher November 5, 2016 at 5:25 pm
Steven, as I said, it was totally unclear. In my words, “I can’t tell whether your comment is a compliment or an attack.” So I didn’t take it as a compliment because I was unable to take it as anything at all.
My bad, won’t happen again.
w.
What is the time period covered by the graph using the sunspot data? 315 years? Having only one minimum of the Suess/de Vries cycle, whose minima may be more discernible than its maxima? And I think the Gleissberg cycle is unsteady, but it exists. As I said before, I propose that this graph be redone after another period of the Gleissberg cycle; I expect that to show the cycle coming closer to counting as statistically significant.
Given that we haven’t seen the “Gleissberg cycle” since about 1850, that is to say for about 160 years, I fear that whatever it may be, it doesn’t qualify as a “cycle” …
w.
You say “…neither case are we seeing any kind of true underlying cyclicity” when referring to the ~1,000-year Eddy cycle and the 2100-2700-year Bray (Hallstatt) cycle. Later you amend this with “…while it is possible that the 88-year ‘Gleissberg cycle’ in the sunspots, and the ~1,000-year cycle and the ~2400-year cycle in the ∆14C data, may be real, solid, and persistent, I find no support for those claims in the data that we have at hand.”
Just looking at 14C data in isolation while looking for paleotemperatures is problematic. 14C paleotemperatures depend on an accurate model of the total atmospheric carbon mass, which varies a lot over time. Using this technique is virtually impossible before 11000 BP due to the Younger Dryas cooling and warming, the previous ice period, etc.; these were periods of huge changes in the total atmospheric carbon mass. Since 11000 BP there are some serious changes in a few well-documented colder periods, like the 8.2 kyr event, but the mass is relatively stable. That is why the earlier data was removed (correctly, IMO), which is what your comment about having “thrown away three-quarters of the data” refers to.
14C temperature estimates are somewhat circular, since what you are measuring (temperature) affects the total carbon you use in your calculation. 14C temperatures should not be used alone for this reason, which I believe is what your Fourier analysis shows. 14C is commonly combined with 10Be because, while each has data issues, the problems are in complementary areas. They correlate well (R² = 0.8), and analyzed together each offsets weaknesses in the other.
See Roth and Joos, Clim. Past, 2013. For an analysis of 14C error see their Appendix A; the radiocarbon errors are high. Their Figure 1 is very instructive as well:
http://meetingorganizer.copernicus.org/3ICESM/3ICESM-405.pdf
Once we add in the worldwide glacier, paleontological, and ice-raft data, we can conclusively find the Eddy cycle (~1,000 years) and the Hallstatt cycle (2100-2700 years). See Bond et al., 1997 (Science); Debret et al., 2007 (Clim. Past). The ice-core and glacial records, plus the ice-raft data, are the most conclusive evidence for most people.
So, your conclusion that 14C temperature records, by themselves, have problems is true; but the cycles you speak of are based on much more than 14C data. Even if you ignored the 14C data, the cycles would still be there. All paleotemperature proxies fall apart statistically in isolation, which is a shame. But this should not stop us from using them; they are all we have.
I found this in my notes as well:
http://iie.fing.edu.uy/simsee/biblioteca/CICLO_SOLAR_PeristykhDamon03-Gleissbergin14C.pdf
and it seems to confirm what you and Javier are saying.
I currently put the Gleissberg cycle at 86.5 years, but this may have to do with the present planetary configuration. I can also confirm that there is a correlation of the Gleissberg cycle with the positions of Saturn and Uranus.
The positions of the smaller planets apparently also affect the length of the shorter-term solar cycles.
In theory, I think if you put a program to it, you could look at the positions of all the planets of the solar system and, as suggested, also at the position of the sun itself, and you should be able to predict solar activity.
It works just like a clock.
Amazing.
Andy May November 5, 2016 at 5:37 am
Despite your claim that we have to stop our analysis at 11000 BP, the link you refer me to (Roth and Joos) goes back to 21,000 BC … so it appears you’ve bought into the cherry picking. I’m sorry, Andy, but your own citation shows that the claim is meaningless.
Not only that, but your own citation shows that the increase in the errors doesn’t even START until 13,000 years ago, so AT A MINIMUM Javier and his predecessors have thrown away two thousand years of perfectly valid data, data that your own citation says are as good as the more recent data.
So yes, Andy … you are cherry picking at a rate of knots.
You say that we have to “add in” a bunch of other data. Here we’ve been discussing the work of Javier and Clilverd, neither of whom says we have to “add in” anything. They claim that the ∆14C data is enough ON ITS OWN to establish the existence of the cycles. I say no.
In any case, I went to check out “Bond et al., 1997”, with the “et al.” sadly not including James Bond, but instead even more sadly including that noted scientist and harridan, Heidi Cullen … hardly inspires confidence. The link is here. It turns out to say NOTHING about 1000-year and 2400-year cycles, as you have claimed. Instead, he identifies a 1470-year cycle which appears NOWHERE in the ∆14C data. So your claim that we can just somehow “add in” a 1470-year cycle to the ∆14C records is a joke. Bond et al. themselves say:
Sez it all …
So I fear your claim about having to “add in” Bond is … well … unwise. It does not support your claim; it diametrically opposes it.
ALL paleotemperature proxies fall apart in isolation? Andy, are you sure we’re talking about the same thing? And if what you say is true and all paleo proxies fall apart, how likely is it that their sum will be any better?
And I am NOT saying the cycles don’t exist. What I said was simple—there is not enough evidence at hand in the ∆14C data to support the existence of such cycles.
I’m sorry, Andy, but you are a long, long ways from establishing your claim. However, I do appreciate the fact that you seem to agree that the ∆14C data does not support the existence of the claimed cycles.
w.
Whew! I’ll let my comment speak for itself, but I will address some of your comments that I think are in error or are misinterpretations of what I said. First, it is my opinion (and the opinion of many others) that prior to the end of the Younger Dryas, using 14C data for a paleotemperature estimate is not likely to be accurate. 14C paleotemperature determination is a work in progress and not very advanced, IMO. I don’t think that we know with any precision how much carbon was in the atmosphere during the Younger Dryas cooling. Just my opinion.
I can assure you that Javier does not think 14C data alone is enough to establish the Hallstatt (Bray) cycle and he never said that in any of his posts as far as I can recall. You’ve put those words in his mouth. I can’t speak for Clilverd on the subject, but I think he was using the Hallstatt cycle to show his 14C methodology worked, not the other way round.
Your statement that there is not enough evidence in the 14C data to support the long-term solar cycles is true; I doubt anyone would argue the point with you. But I posted my comment because I didn’t want any of your readers to turn that around and say there isn’t a Hallstatt or Eddy cycle. They do exist; that is well established, and has been for over 40 years. Dr. Nicola Scafetta has a very nice new paper in Earth Science Reviews on the topic. Link: http://ac.els-cdn.com/S0012825216301453/1-s2.0-S0012825216301453-main.pdf?_tid=fb1280ca-a39d-11e6-96ef-00000aacb362&acdnat=1478381139_bc8e306a0a39538609a5b70be9ad2cae
See his Figure 10: both the Eddy cycle and the Hallstatt cycle meet the 95% confidence level according to his analysis. As for Bond’s 1997 paper, on page 1 (magazine page 1257) he identifies a 2800-year cycle that I take to be the Hallstatt cycle. If you want mathematical precision, you are in the wrong field. Geology, paleontology and paleoclimate studies all involve using very uncertain indicators. We hopefully get more precise with time, but it’s a struggle.
Andy May November 5, 2016 at 2:53 pm
Andy, thanks for your reply. First, when someone says that his opinion is backed by “many others” but he does not specify who they might be, I fear my suspicion index goes up rather than down.
Second, YOUR OWN CITATION used the ∆14C data back 22,000 years so I guess he’s not one of the “many others” you are relying on.
Third, the uncertainty in the ∆14C data from YOUR OWN CITATION does not even begin to increase until 13,000 years BP, so as I pointed out before, Javier and Clilverd are cherry-picking a minimum of two millennia of data.
Fourth, neither you nor the others have shown the effect of your cherry-picking. If you want to use less than the full dataset, you can’t just wave your hands and say “I don’t like the old stuff”. You have to show the effect of your deletion of data and fully justify it, and justify the exact choice of your cutoff date based on objective criteria … in short, you need to justify it with something more than “my opinion” backed by unknown “others”.
Fifth, if there is that much uncertainty in the ∆14C records, why can we use it as far back as 50,000 years with good accuracy?
Sixth, the ∆14C data comes with the error estimate of the original authors. While it increases with increasing age, the same is true of e.g. the paleo records you are using to claim the existence of the Hallstatt cycle.
Seventh, I’m not at all clear about the very basis of your entire objection. I am completely mystified as to why you are referring to inaccuracies in “using 14C data for a paleotemperature estimate”. You might be right about that, but I’m not doing anything like that. I’m just trying to find short-term (~2400-year) cycles in the 14C data, not using the 14C data to estimate the temperature in the year 35245 BC. Why is the early data not valid for a simple cyclical analysis? Yes, there are large secular changes, but IF the Hallstatt cycle is real, we should still see it on top of those secular changes.
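That last point can be checked with a toy simulation: plant a 2,400-year cycle on top of a large secular drift plus noise, detrend, and look at the spectrum. All the numbers below are synthetic and chosen purely for illustration:

```r
set.seed(42)
step <- 20                                   # years between samples
t <- seq(0, 20000, by = step)                # 20 kyr of synthetic record
x <- 0.002 * t +                             # secular drift
     50 * sin(2 * pi * t / 7000) +           # slow background swing
     10 * sin(2 * pi * t / 2400) +           # the "Hallstatt-like" cycle
     rnorm(length(t), sd = 5)                # noise

# Detrend, then take the raw periodogram; the 2400-yr peak survives.
sp <- spectrum(residuals(lm(x ~ t)), plot = FALSE)
plot(step / sp$freq, sp$spec, type = "l", log = "x",
     xlab = "Period (years)", ylab = "Power")
```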
And I can assure you that he does NOT accept that the “evidence” for it in the ∆14C data is nothing but a far-from-statistically-significant blip … nor, from all appearances, have you. You say that the “14C data alone” doesn’t establish the Bray cycle, as though the 14C data were actually significant. It can’t do it either alone or in concert with others, unless they happen to have the same bizarre combination of ~2000-year cycles interspersed with the odd 2800-year cycle …
But even AFTER:
1. throwing away 3/4 of the data, and
2. arbitrarily subtracting out an 11,000-year perfectly straight linear trend of unknown origin, and
3. arbitrarily removing a 7,000-year cycle of unknown origin, which is NOT even the proper size (the biggest cycle in that chunk of data is 6,600 years),
the claimed cycle is STILL not statistically significant. It’s not that “14C data alone” is not enough … it’s that we have no evidence that the blip in the 14C data is more than just a random fluctuation.
Finally, Javier says that if we best-fit a perfectly regular 2,300-year sine cycle to the ∆14C data, we can use it to forecast the future evolution of the sun and, through that, the future evolution of the temperature … so he clearly thinks the cycle in the 14C data is both real and significant, or he wouldn’t fit his cycle to it.
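For reference, fitting a sine of fixed period is just a linear regression on sine and cosine terms. A sketch with the period pinned at 2,300 years, using the same hypothetical d14c frame as above (not Javier’s actual procedure):

```r
P <- 2300                                   # period held fixed, in years
fit <- lm(d14c ~ sin(2 * pi * age / P) + cos(2 * pi * age / P), data = d14c)
amp <- sqrt(sum(coef(fit)[2:3]^2))          # fitted amplitude of the cycle
```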
So you and Javier say the Hallstatt cycle is real in part BECAUSE of the ∆14C data, and he says his ∆14C method is valid BECAUSE of the Hallstatt cycle? I’m not touching that one.
Javier and others have been strenuously stating and restating THAT EXACT POINT. To all appearances they think the blip in the Fourier analysis is statistically significant. It is not; it is far from it.
The so-called “Gleissberg cycle” has been “well established” for much longer than that, and I haven’t found any evidence for it either. Am I the only one who has looked critically at Gleissberg’s original work? Go take a look; his statistics are hilarious. I discuss the issues here and here.
Is there a significant “Hallstatt cycle” in the 14C data? NO, it is far from significant. Does this mean the cycle doesn’t exist? NO. It may well exist.