Guest Post by Willis Eschenbach
Loehle and Scafetta recently posted a piece on decomposing the HadCRUT3 temperature record into a couple of component cycles plus a trend. I disagreed with their analysis on a variety of grounds. In the process, I was reminded of work I had done a few years ago using what is called “Periodicity Analysis” (PDF).
A couple of centuries ago, a gentleman named Fourier showed that any signal could be uniquely decomposed into a number of sine waves with different periods. Fourier analysis has been a mainstay analytical tool since that time. It allows us to detect any underlying regular sinusoidal cycles in a chaotic signal.
Figure 1. Joseph Fourier, looking like the world’s happiest mathematician
While Fourier analysis is very useful, it has a few shortcomings. First, it can only extract sinusoidal signals. Second, although it has good resolution at short timescales, it has poor resolution at longer timescales. For many kinds of cyclical analysis, I prefer periodicity analysis.
So how does periodicity analysis work? The citation above gives a very technical description of the process, and it’s where I learned how to do periodicity analysis. Let me attempt to give a simpler description, although I recommend the citation for mathematicians.
Periodicity analysis breaks down a signal into cycles, but not sinusoidal cycles. It does so by directly averaging the data itself, so that it shows the actual cycles rather than theoretical cycles.
For example, suppose that we want to find the actual cycle of length two in a given dataset. We can do it by numbering the data points in order, and then dividing them into odd- and even-numbered data points. If we average all of the odd data points, and we average all of the even data, it will give us the average cycle of length two in the data. Here is what we get when we apply that procedure to the HadCRUT3 dataset:
Figure 2. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 2. The cycle has been extended to be as long as the original dataset.
As you might imagine for a cycle of length 2, it is a simple zigzag. The amplitude is quite small, only plus or minus a hundredth of a degree. So we can conclude that there is only a tiny cycle of length two in the HadCRUT3 data.
Next, here is the same analysis, but with a cycle length of four. To do the analysis, we number the dataset in order with a cycle of four, i.e. “1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4 …”
Then we average all the “ones” together, and all of the twos and the threes and the fours. When we plot these out, we see the following pattern:
Figure 3. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 4. The cycle has been extended to be as long as the original dataset.
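For readers who want to experiment, here is a minimal R sketch of that numbering-and-averaging procedure (the name “temps” is my own, assumed to hold the annual temperature anomalies):
cycle_mean = function(x, p) { # x = data, p = cycle length
  idx = ((seq_along(x) - 1) %% p) + 1 # number the points 1, 2, ..., p, 1, 2, ...
  rep(tapply(x, idx, mean), length.out=length(x)) # average each group, then extend to full length
}
# e.g. cycle_mean(temps, 4) gives the average cycle of length four, as in Figure 3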
As I mentioned above, we are not reducing the dataset to sinusoidal (sine-wave-shaped) cycles. Instead, we are determining the actual cycles in the dataset. This becomes more evident when we look at, say, the twenty-year cycle:
Figure 4. Periodicity in the HadCRUT3 dataset, with a cycle length of 20. The cycle has been extended to be as long as the original dataset.
Note that the actual 20 year cycle is not sinusoidal. Instead, it rises quite sharply, and then decays slowly.
Now, as you can see from the three examples above, the amplitudes of the cycles of various lengths are quite different. If we set the mean (average) of the original data to zero, we can measure the power in the underlying cyclical signals as the sum of the absolute values of the signal data. It is useful to compare this power value to the total power in the original signal. If we do this at all possible cycle lengths, we get a graph of the strength of each of the underlying cycles.
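In code, that comparison is just a ratio of summed absolute values; a sketch using the cycle_mean helper above:
power_frac = function(x, p) { # fraction of total power in the cycle of length p
  x = x - mean(x) # set the mean of the data to zero
  sum(abs(cycle_mean(x, p)))/sum(abs(x))
}
# e.g. sapply(2:79, function(p) power_frac(temps, p)) gives the strength of every cycle up to half the dataset length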
For example, suppose we are looking at a simple sine wave with a period of 24 years. Figure 5 shows the sine wave, along with periodicity analysis in blue showing the power in each of the various length cycles:
Figure 5. A sine wave, along with the periodicity analysis of all cycles up to half the length of the dataset.
Looking at Figure 5, we can see one clear difference between Fourier analysis and periodicity analysis — the periodicity analysis shows peaks at 24, 48, and 72 years, while a Fourier analysis of the same data would only show the 24-year cycle. Of course, the apparent 48 and 72 year peaks are merely a result of the 24 year cycle. Note also that the shortest length peak (24 years) is sharper than the longest length (72-year) peak. This is because there are fewer data points to measure and average when we are dealing with longer time spans, so the sharp peaks tend to broaden with increasing cycle length.
To move to a more interesting example relevant to the Loehle/Scafetta paper, consider the barycentric cycle of the sun. The sun rotates around the center of mass of the solar system. As it rotates, it speeds up and slows down because of the varying pull of the planets. What are the underlying cycles?
We can use periodicity analysis to find the cycles that have the most effect on the barycentric velocity. Figure 6 shows the process, step by step:
Figure 6. Periodicity analysis of the annual barycentric velocity data.
The top row shows the barycentric data on the left, along with the amount of power in cycles of various lengths on the right in blue. The periodicity diagram at the top right shows that the overwhelming majority of the power in the barycentric data comes from a ~20 year cycle. It also demonstrates what we saw above, the spreading of the peaks of the signal at longer time periods because of the decreasing amount of data.
The second row left panel shows the signal that is left once we subtract out the 20-year cycle from the barycentric data. The periodicity diagram on the second row right shows that after we remove the 20-year cycle, the maximum amount of power is in the 83 year cycle. So as before, we remove that 83-year cycle.
Once that is done, the third row right panel shows that there is a clear 19-year cycle (visible as peaks at 19, 38, 57, and 76 years); this cycle may be a result of the fact that the “20-year cycle” is actually slightly less than 20 years. When that 19-year cycle is removed, there is a 13-year cycle visible at 13, 26, 39 years, etc. And once that 13-year cycle is removed … well, there’s not much left at all.
The bottom left panel shows the original barycentric data in black, and the reconstruction made by adding just these four cycles of different lengths is shown in blue. As you can see, these four cycles are sufficient to reconstruct the barycentric data quite closely. This shows that we’ve done a valid deconstruction of the original data.
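Here is a hedged sketch of that step-by-step extraction, using the helpers above plus the detrend function from the appendix (“bary” is my assumed name for the annual barycentric velocity series):
residual = detrend(bary)
recon = numeric(length(residual))
for (pass in 1:4) { # four passes: the ~20-, 83-, 19-, and 13-year cycles
  powers = sapply(2:79, function(p) power_frac(residual, p))
  best = (2:79)[which.max(powers)] # strongest remaining cycle length
  cyc = cycle_mean(residual, best) # the actual (non-sinusoidal) cycle shape
  recon = recon + cyc # add it to the reconstruction ...
  residual = residual - cyc # ... and remove it from the data
}
# after four passes, recon should closely match the detrended barycentric data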
Now, what does all of this have to do with the Loehle/Scafetta paper? Well, two things. First, in the discussion on that thread I had said that I thought the 60-year cycle that Loehle/Scafetta said was in the barycentric data was very weak. Loehle/Scafetta claimed that there were ~20-year and ~60-year cycles in both the solar barycentric data and the surface temperature data. As the analysis above shows, the barycentric data does contain a strong ~20-year cycle, but I find no such 60-year cycle in it.
However, that’s not what I set out to investigate. I started all of this because I thought that the analysis of random red-noise datasets might show spurious cycles. So I made up some random red-noise datasets the same length as the HadCRUT3 annual temperature records (158 years), and I checked to see if they contained what look like cycles.
A “red-noise” dataset is one which is “auto-correlated”. In a temperature dataset, auto-correlation means that today’s temperature depends in part on yesterday’s temperature. One kind of red-noise data is created by what are called “ARMA” processes. “AR” stands for “auto-regressive”, and “MA” stands for “moving average”. This kind of random noise is very similar to observational datasets such as the HadCRUT3 dataset.
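For reference, the AR and MA values can be estimated directly from the temperature record with R’s built-in arima() function; a minimal sketch (“temps” again assumed, with detrend from the appendix):
fit = arima(detrend(temps), order=c(1,0,1)) # fit an ARMA(1,1) model to the detrended data
coef(fit) # the ar1 and ma1 coefficients, of the kind used below to generate the pseudo-temperatures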
So, I made up a couple dozen random ARMA “pseudo-temperature” datasets using the AR and MA values calculated from the HadCRUT3 dataset, and I ran a periodicity analysis on each of the pseudo-temperature datasets to see what kinds of cycles they contained. Figure 7 shows eight of the two dozen random pseudo-temperature datasets in black, along with the corresponding periodicity analysis of the power in various cycles in blue to the right of the graph of the dataset:
Figure 7. Pseudo-temperature datasets (black lines) and their associated periodicity (blue circles). All pseudo-temperature datasets have been detrended.
Note that all of these pseudo-temperature datasets have some kind of apparent underlying cycles, as shown by the peaks in the periodicity analyses in blue on the right. But because they are purely random data, these are only pseudo-cycles, not real underlying cycles. Despite being clearly visible in the data and in the periodicity analyses, the cycles are an artifact of the auto-correlation of the datasets.
So for example random set 1 shows a strong cycle of about 42 years. Random set 6 shows two strong cycles, of about 38 and 65 years. Random set 17 shows a strong ~ 45-year cycle, and a weaker cycle around 20 years or so. We see this same pattern in all eight of the pseudo-temperature datasets, with random set 20 having cycles at 22 and 44 years, and random set 21 having a 60-year cycle and weak smaller cycles.
That is the main problem with the Loehle/Scafetta paper. While they do in fact find cycles in the HadCRUT3 data, the cycles are neither stronger nor more apparent than the cycles in the random datasets above. In other words, there is no indication at all that the HadCRUT3 dataset has any kind of significant multi-decadal cycles.
How do I know that?
Well, one of the datasets shown in Figure 7 above is actually not a random dataset. It is the HadCRUT3 surface temperature dataset itself … and it is indistinguishable from the truly random datasets in terms of its underlying cycles. All of them have visible cycles, it’s true, in some cases strong cycles … but they don’t mean anything.
w.
APPENDIX:
I did the work in the R computer language. Here’s the code, including the “periods” function which does the periodicity calculations. I’m not that fluent in R; it’s about the eighth computer language I’ve learned, so it might be kinda klutzy.
#FUNCTIONS
PI=4*atan(1) # value of pi
dsin=function(x) sin(PI*x/180) # sine function for degrees
regb =function(x) {lm(x~c(1:length(x)))[[1]][[1]]} #gives the intercept of the trend line
regm =function(x) {lm(x~c(1:length(x)))[[1]][[2]]} #gives the slope of the trend line
detrend = function(x){ #detrends a line
  x-(regm(x)*c(1:length(x))+regb(x))
}
meanbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle means
  rep(tapply(x,modline,mean),length.out=length(x))
}
countbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle number of datapoints N
  rep(tapply(x,modline,length),length.out=length(x))
}
sdbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle standard deviations
  rep(tapply(x,modline,sd),length.out=length(x))
}
normmatrix=function(x) sum(abs(x)) #returns the norm of the dataset, which is proportional to the power in the signal
# Function "periods" (below) is the main function that calculates the percentage of power in each of the cycles. It takes as input the data being analyzed (inputx). It displays the strength of each cycle. It returns a list of the power of the cycles (vals), along with the means (means), number of datapoints N (count), and standard deviations (sds).
# There's probably an easier way to do this; I've used a brute force method. It's slow on big datasets.
periods=function(inputx,detrendit=TRUE,doplot=TRUE,val_lim=1/2) {
  x=inputx
  if (detrendit) x=detrend(as.vector(inputx))
  xlen=length(x)
  modmatrix=matrix(NA,xlen,xlen)
  modmatrix=(col(modmatrix)-1)%%row(modmatrix) # %% is R's modulo operator; row i holds the cycle numbering for period i
  countmatrix=aperm(apply(modmatrix,1,countbyrow,x))
  meanmatrix=aperm(apply(modmatrix,1,meanbyrow,x))
  sdmatrix=aperm(apply(modmatrix,1,sdbyrow,x))
  xpower=normmatrix(x)
  powerlist=apply(meanmatrix,1,normmatrix)/xpower
  plotlist=powerlist[1:floor(length(powerlist)*val_lim)]
  if (doplot) plot(plotlist,ylim=c(0,1),ylab="% of total power",xlab="Cycle Length (yrs)",col="blue")
  invisible(list(vals=powerlist,means=meanmatrix,count=countmatrix,sds=sdmatrix))
}
# /////////////////////////// END OF FUNCTIONS
# TEST
# each row in the values returned represents a different period length.
myreturn=periods(c(1,2,1,4,1,2,1,8,1,2,2,4,1,2,1,8,6,5))
myreturn$vals
myreturn$means
myreturn$sds
myreturn$count
#ARIMA pseudotemps
# note that they are standardized to a mean of zero and a standard deviation of 0.2546, which is the standard deviation of the HadCRUT3 dataset.
# each row is a pseudotemperature record
instances=24 # number of records
instlength=158 # length of each record
rand1=matrix(arima.sim(list(order=c(1,0,1), ar=.9673,ma=-.4591),
n=instances*instlength),instlength,instances) #create pseudotemps
pseudotemps =(rand1-mean(rand1))*.2546/sd(rand1)
# Periodicity analysis of simple sine wave
par(mfrow=c(1,2),mai=c(.8,.8,.2,.2)*.8,mgp=c(2,1,0)) # split window
sintest=dsin((0:157)*15) # sine function with a 24-year period
plotx=sintest
plot(detrend(plotx)~c(1850:2007),type="l",ylab="24 year sine wave",xlab="Year")
myperiod=periods(plotx)
So what you’re saying is possibly, if you’re a hammer, everything looks like a nail.
Fair enough.
“Spectral leakage” of Fourier transforms in real-life data analysis can and should be reduced by using a good window function, for instance a Hamming window. Just cutting a slice out of a time series implicitly uses a rectangular window function, leading to lots of spurious frequencies in the transform.
http://en.wikipedia.org/wiki/Window_function
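As a minimal R illustration of the commenter’s point (my own sketch, assuming “temps” holds the 158-year temperature series):
n = length(temps)
hamming = 0.54 - 0.46*cos(2*pi*(0:(n-1))/(n-1)) # Hamming window
spec_raw = Mod(fft(temps - mean(temps)))^2 # implicit rectangular window: more leakage
spec_win = Mod(fft((temps - mean(temps))*hamming))^2 # windowed: reduced spectral leakage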
Periodicity analysis reminds me a little of the Hough transform.
http://en.wikipedia.org/wiki/Hough_transform
For those who believe that a square wave is actually the sum of the odd-numbered harmonics of the square wave frequency, I would like to point out that Fourier was analyzing waveforms. They can be represented as the sum of the frequencies; however, the assumption of infinite cycles is what makes the whole thing problematic when single pulses are involved. The Gibbs Phenomenon is another clue that this is a tool (a map), not the territory.
I agree with several posters, that wavelet transforms will probably bring interesting results.
Thanks a lot Willis, I always learn, and sometimes have to rethink what I thought I knew, when you write.
Willis Eschenbach linked to the following article:
Sethares, W.A. & Staley, T.W. (1999). Periodicity Transforms. IEEE Transactions on Signal Processing 47(11), 2953-2964.
From its abstract:
“The algorithm finds its own set of nonorthogonal basis elements (based on the data), rather than assuming a fixed predetermined basis as in the Fourier, Gabor, and wavelet transforms.”
The authors have a very narrow view (or at least had a very narrow view in 1999) of what can be done with wavelet methods. Wavelet methods are incredibly flexible and ABSOLUTELY DO NOT demand assumption of a predetermined dyadic basis. I NEVER assume a predetermined basis when applying exploratory wavelet methods. Untenable assumptions are NOT the course to enlightenment.
Regards.
There are very good reasons why Fourier analysis doesn’t always pick up certain cycles that are apparent through other methods. The quasi 60 year cycle is a second-level harmonic that is not present in the first level, i.e. the quasi 60 year cycle is a modulation of the quasi 20 year cycle. This modulation of the velocity curve (highs and lows) at the higher level is a direct result of Uranus and Neptune.
Also, the 172 year cycle in the temperature or solar proxy record is not supremely evident because the cycle has multiple prongs. It travels as a cluster (usually of 3) of components that occur every 172 years. Think of it as a hand on a clock that ends in a trident: every time it goes past midnight the number of prongs varies; sometimes the last prong is missing, or the first prong could be missing, or all three are present. Add to that a variable “strength” for each prong and you see why a regular pattern cannot be teased out, but the underlying force is still there. This is how grand minima work, another example of Uranus and Neptune at work.
If we only relied on Fourier analysis the world would be a poorer place. Nature does not always conform.
While Fourier analysis is very useful, it has a few shortcomings. First, it can only extract sinusoidal signals.
A physicist says:
July 30, 2011 at 6:13 am
John Day says:
July 30, 2011 at 6:24 am
Good answers to the issue mentioned above.
However, let’s turn the argument around. What you need to perform a Fourier-type analysis is a set of orthonormal functions. They don’t have to be sines.
http://75.24.127.133/Courses/tutorials/Generalized_Fourier_Series.htm
Maybe this makes sense: Any function can be described by a linear combination of sine functions. That’s the standard Fourier approach most people use. Then a linear combination of functions composed of linear combinations of sines can also serve in Fourier analysis.
For example, square waves can be constructed from sines. Square pulses can be used instead of sines in a generalized Fourier analysis.
http://www.icrepq.com/icrepq-08/365-iwaszkiewicz.pdf
Sines are just convenient.
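As a rough sketch of that idea (my own arbitrary basis choice, assuming “temps” holds the 158-year series), square waves can be fit by ordinary least squares just as sines can:
t = 1:158
sq = function(p, ph) sign(sin(2*pi*(t - ph)/p)) # square wave with period p and phase ph
X = cbind(sq(24,0), sq(24,6), sq(12,0), sq(12,3)) # a small, arbitrary square-wave basis
fit = lm(temps ~ X) # least-squares projection of the data onto the square-wave basis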
The observation of patterns or associations is the first step towards knowledge. After that comes hypothesis as to causation. The hypothesis is rarely unique, but the observation is or should be.
Regardless of the Fourier vs periodicity analysis argument, is it true that a 60/120 year cycle of some regularity can be found in the global temperature record of the last 200 years? As a non-mathematician I do not understand whether this disagreement on analysis style says the “observation” is an artefact or a reality.
Amen brother. The naive application of Fourier Analysis can produce results that are somewhere between being valid and being complete garbage. The difference usually depends on luck. 😉
Here is the barycentre velocity graph from Willis with the quasi 60 year cycle annotated.
http://tinyurl.com/2dg9u22/images/willis.png
Mike Jonas says:
July 30, 2011 at 3:25 am
Sorry, nothing for that. Doesn’t mean it doesn’t exist, however.
w.
I have added the parable of the blind men and the elephant to my poster graphic about myopic lack of practical intelligence on both sides of the silly climate war:
http://i.minus.com/ijfwX2.jpg
A single underwater volcano can suddenly become active for a century or two in a critical location where an ocean current initiates a thousand-mile-wide twisty path to the surface, tickling the whole system away from its current state, enough to alter all but the very longest cycles, which themselves are unpredictable due to three-body-problem orbital chaos. On both century and millennial time scales, simple fluid-dynamic chaos of ocean currents is reasonably expected to dominate sea surface and atmospheric temperature due to the massive heat content of the oceans, this despite any minor changes in external forcings and feedbacks that involve the sun and atmosphere. You also have non-volcanic shifts in crust thickness of the ocean floor, such as seems to be occurring around the Antarctic Peninsula, and a sudden 20th-century loosening of the location of the magnetic poles due to chaotic shifts in magma currents. A tiny angular shift in the North Pole extended out into space represents a shift of hundreds of miles of magnetic influence on cosmic ray shielding over Greenland.
“No, so holp me Petault, it is not a miseffectual whyancinthinous riot of blots and blurs and bars and balls and hoops and wriggles and juxtaposed jottings linked by spurts of speed: it only looks as like is as damn it; and, sure, we ought really to rest thankful that at this deleteful hour of dungflies dawning we have even a written on with dried ink scrap of paper at all to show for ourselves, tare it or leaf it, (and we are lufted to ourselves as the soulfisher when he led the cat out of the bout) after all that we lost and plundered of it even to the hidmost coignings of the earth and all it has gone through and by all means, after a good ground kiss to Terracussa and for wars luck our lefftoff’s flung over our home homeplate, cling to it as with drowning hands, hoping against all hope all the while that, by the light of philosophy, (and may she never folsage us!) things will begain to clear up a bit one way or another within the next quarrel of an hour and be hanged to them as ten to one they will too, please the pigs, as they ought to categorically, as, strickly between ourselves, there is a limit to all things so this will never do.” -James Joyce (Finnegans Wake 1939)
Stephen Wilde says:
July 30, 2011 at 3:53 am
Well, if like Loehle and Scafetta you think that you can find out fundamental climate truths by analyzing the “cycles”, yes, the fact that they are artifacts does indeed matter.
w.
Henry says:
July 30, 2011 at 4:18 am
Thanks, Henry, nice. As I said, I’m not that well versed in R.
w.
Leif Svalgaard says:
July 30, 2011 at 8:52 am
Ninderthana says:
July 30, 2011 at 5:57 am
This is caused by the fact that the Sun completes one loop around the barycentre roughly once every 20 years. However, the axis of alignment of the orbital loop (about the barycentre) advances by 120 degrees with each completed orbit. This means that the Sun must complete three twenty-year orbits for it to return to roughly the same position with respect to the distant stars.
If you assume that the cause is astrological [‘distant stars’ – e.g. whether the Sun is in Leo or some other sign] you may have a point, but if you assume that the actual, local, configuration of the planets [via physical cause, such as tides] is the cause, then the orientation of the axis of the loop with respect to the distant stars doesn’t matter. Which is it?
It’s possible that there is more than one type of physical cause. So there could be a tidal effect and an electromagnetic effect of planetary alignments. Rather than astrology “Sun is in Leo”, it may be significant for the apparent 60 year signal that every third Jupiter-Saturn conjunction takes place between the Sun and the centre of our galaxy, and/or that every third conjunction takes place towards the bowshock of the heliosphere. We don’t know yet, but it seems reasonable to me to investigate these possibilities.
I commend Willis for doing the analysis, it adds to our knowledge. I don’t think it provides a basis for dismissing Loehle and Scafetta’s paper, or other efforts to discover the linkage between solar system dynamics and climate, but it should galvanise efforts to improve on their ‘first foray’ into this area.
Whilst this type of study proves that apparent cycles in climate might just be random walk chance, it doesn’t rule out the possibility that they might not be. The apparent 60 year signal in climate may be the result of the terrestrial amplification of a small astronomical signal because the planets also directly affect the Earth Moon system, as well as indirectly affecting Earth’s climate via an effect on solar activity levels. I’m not saying this is necessarily so, but it’s a possibility.
John Day (July 30, 2011 at 6:24 am) wrote:
“Whereas in […] Wavelet […] decomposition the subspaces are orthogonal, and so the transforms produce unique pattern features.”
Not necessarily, nor necessarily even desirable. Wavelet methods are far more flexible & adaptable than many seem to conceive. Some of the misunderstandings might be arising out of an assumption that the goal is to model the signal (&/or perform statistical inference). For those of us who stick to data exploration, this is not the goal. Some of the communication divides frequently arising in these discussions are fundamentally paradigmatic. We need to get past untenable assumptions that have taken DEEP cultural root (some might say rot).
Best Regards.
Nick Stokes says:
July 30, 2011 at 4:26 am
Thanks, Nick. I’m not sure what you mean when you say you can’t “partition the power between them”. I’m not doing that. I’m directly calculating the power each individual cycle has. That’s why they all together sum up to much more than 100%.
I have partitioned the barycentric data into 20, 83, 19, and 13 year cycles. I have then combined the cycles to closely reconstitute the original data.
So what is the problem? I mean, the method finds the cycles. It doesn’t find a significant 60-year cycle in the barycentric data, mostly just a large 20-year cycle. Nor does it find a large 20-year cycle in the temperature data.
As a result, I believe I’ve shown that the Loehle/Scafetta analysis is fatally flawed. Do you disagree with that conclusion?
w.
James Reid says:
July 30, 2011 at 4:33 am
Excellent question, James. I’ll take a look when I find the time.
w.
Willis and Anthony,
Thank you. Anthony, I love that you will host a blog post by the authors of a newly published peer reviewed paper and within days host a second blog post criticizing the paper. This is the way science should work.
Willis, thank you for your writing. Very interesting, as always (almost always – your blog post criticizing mine was slightly less interesting). The way I see it, you have clearly expressed why you are not persuaded by the Loehle and Scafetta paper, but you have not refuted the paper. It will be interesting to see the response by Loehle and Scafetta.
Ninderthana says:
July 30, 2011 at 5:57 am
This may be all correct … but where is the signal? Take another look at my analysis of the barycentric signal in Figure 6. If there were a 60 year cycle, we’d see evidence of it once we remove the 20 year cycle. But it’s not there.
So you may be right, but if so, such a 60 year cycle is quite small.
w.
Willis said:
“Well, if like Loehle and Scafetta you think that you can find out fundamental climate truths by analyzing the “cycles”, yes, the fact that they are artifacts does indeed matter”
There is the nub. Loehle and Scafetta (and me) are noting fundamental climate truths from observations of solar and oceanic variability and their recorded effects on climate and seeking to interpret their nature and significance.
Whether those real natural fundamental climate truths fail to come through mathematical analysis after applying various sorts of filtering and processing which obscure the difference between artifacts and real cycles is rather beside the point.
So I think what you have shown us is simply the wrong way around. By all means use those methods to tease out cycles or other forms of relationship that are not otherwise apparent but it’s not useful to use those techniques to support an assertion that cycles that exist out in the real world (as evidenced by lots of other sources of data) are mere artifacts just because they don’t survive the processing of the limited data that you use.
“But because they are purely random data, these are only pseudo-cycles, not real underlying cycles.”
You can only say that because you know the means by which the data were generated. In the case of observations of real-world data, we don’t know, so we have to analyze the data and try to find out whether any of the apparent cycles are real.
One technique I’d like to try is to look at smaller chunks of the time series, analyze each of them separately, and see if there are any signals that appear strongly across them all. Unfortunately, with 150 years of data, if we split it into three parts, we wouldn’t even have a full 60-year cycle to look at. It would be impossible to give an estimate of the confidence level on a 60-year component when our data only span about 2.5 such cycles. Maybe if we had six centuries of solid data, we could find a statistically significant signal at the 60-year level.
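A quick sketch of that chunking idea, using the “periods” function from the appendix above (“temps” again assumed to hold the annual anomalies):
grp = cut(seq_along(temps), 3, labels=FALSE) # three roughly equal chunks
chunks = split(temps, grp)
lapply(chunks, function(ch) periods(ch, doplot=FALSE)$vals) # periodicity analysis of each chunk
# a real cycle should peak at roughly the same length in every chunk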
Another obvious problem is the arbitrary assignment of cycles in full years. Since the known cycles for Jupiter and Saturn don’t work out to an exact number of years, the 19.85-year signal is mis-analyzed as two separate 19- and 20-year cycles.
A physicist says:
July 30, 2011 at 6:13 am
First, I wrote the piece, not Anthony, please don’t blame him for my errors.
Second, let me quote for you from the Sethares document cited above:
If we have a dataset of 158 datapoints, the Fourier transform returns values at cycle lengths of 158 years, 158/2 years, 158/3 years, 158/4 years, etc. This means we get answers at 158 years, 79 years, 52.67 years, 39.5 years, etc. … but not in between.
That is what I meant by reduced resolution at longer timescales. Sorry for my lack of clarity.
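To see that grid of cycle lengths directly (a one-line sketch):
n = 158
n/(1:(n %/% 2)) # 158, 79, 52.67, 39.5, ... the spacing between resolvable periods widens at the long end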
w.
Leif Svalgaard,
One circuit with respect to the stars means that whatever mechanism is involved, it is aligned/synchronized with the seasons here on the Earth.
It is neither astrological nor is it planetary tidal forces directly acting on the atmosphere, since, as you and I both know, those are either totally insignificant or delusional.
The only reason/hypothesis that I can [logically] come up with is a long-term synchronization between factors that are known to influence the Earth’s atmosphere (i.e. lunar tides and/or the level of solar activity) and the planetary configuration. I agree with you that this has still yet to be proven. However, I disagree with your contention that it is not worth looking for a possible connection.
The hypothesis that I have presented here would be possible if the rates of precession of the lunar line-of-apse and line-of-nodes were set by periodic resonances between the lunar orbit and weak gravitational perturbations of Venus and Jupiter over the last few billion years.
None of these ideas are so far-fetched as to be beyond the borders of reasonable scientific research.
Sometimes the answer is right in front of our nose, but very often we are either too silly or too blind to see it.