Guest Post by Willis Eschenbach
Loehle and Scafetta recently posted a piece on decomposing the HadCRUT3 temperature record into a couple of component cycles plus a trend. I disagreed with their analysis on a variety of grounds. In the process, I was reminded of work I had done a few years ago using what is called “Periodicity Analysis” (PDF).
A couple of centuries ago, a gentleman named Fourier showed that any signal could be uniquely decomposed into a number of sine waves with different periods. Fourier analysis has been a mainstay analytical tool since that time. It allows us to detect any underlying regular sinusoidal cycles in a chaotic signal.
Figure 1. Joseph Fourier, looking like the world’s happiest mathematician
While Fourier analysis is very useful, it has a few shortcomings. First, it can only extract sinusoidal signals. Second, although it has good resolution at short timescales, it has poor resolution at longer timescales. For many kinds of cyclical analysis, I prefer periodicity analysis.
So how does periodicity analysis work? The citation above gives a very technical description of the process, and it’s where I learned how to do periodicity analysis. Let me attempt to give a simpler description, although I recommend the citation for mathematicians.
Periodicity analysis breaks down a signal into cycles, but not sinusoidal cycles. It does so by directly averaging the data itself, so that it shows the actual cycles rather than theoretical cycles.
For example, suppose that we want to find the actual cycle of length two in a given dataset. We can do it by numbering the data points in order, and then dividing them into odd- and even-numbered data points. If we average all of the odd data points, and then average all of the even data points, that gives us the average cycle of length two in the data. Here is what we get when we apply that procedure to the HadCRUT3 dataset:
Figure 2. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 2. The cycle has been extended to be as long as the original dataset.
As you might imagine for a cycle of length 2, it is a simple zigzag. The amplitude is quite small, only plus or minus a hundredth of a degree. So we can conclude that there is only a tiny cycle of length two in the HadCRUT3 data.
Next, here is the same analysis, but with a cycle length of four. To do the analysis, we number the dataset in order with a cycle of four, i.e. “1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4 …”
Then we average all the “ones” together, and all of the twos and the threes and the fours. When we plot these out, we see the following pattern:
Figure 3. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 4. The cycle has been extended to be as long as the original dataset.
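For readers who want to try this at home, the core of the averaging step is just a grouped mean. Here is a minimal R sketch of the procedure just described; mean_cycle is an illustrative name of my own, and the "periods" function in the appendix does the same job for every cycle length at once:

# average cycle of length p: label each point by its phase within the cycle,
# average all points that share a phase, then tile the result out to the
# full length of the series
mean_cycle = function(x, p) {
  phase = (seq_along(x) - 1) %% p
  rep(tapply(x, phase, mean), length.out = length(x))
}
# example: the length-4 cycle of a toy series
mean_cycle(c(1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 2, 4), 4)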
As I mentioned above, we are not reducing the dataset to sinusoidal (sine-wave-shaped) cycles. Instead, we are determining the actual cycles in the dataset. This becomes more evident when we look at, say, the twenty-year cycle:
Figure 4. Periodicity in the HadCRUT3 dataset, with a cycle length of 20. The cycle has been extended to be as long as the original dataset.
Note that the actual 20 year cycle is not sinusoidal. Instead, it rises quite sharply, and then decays slowly.
Now, as you can see from the three examples above, the amplitudes of the various length cycles are quite different. If we set the mean (average) of the original data to zero, we can measure the power in the cyclical underlying signals as the sum of the absolute values of the signal data. It is useful to compare this power value to the total power in the original signal. If we do this at all possible frequencies, we get a graph of the strength of each of the underlying cycles.
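In code, that comparison is a one-liner on top of the earlier sketch; power_fraction is again an illustrative name of my own, and the appendix "periods" function computes the same ratio for all cycle lengths at once:

# fraction of total power carried by the mean cycle of length p,
# using mean_cycle() from the earlier sketch; assumes x is detrended
power_fraction = function(x, p) {
  sum(abs(mean_cycle(x, p))) / sum(abs(x))
}
# strength of every cycle length up to half of a 158-point record
x = rnorm(158) # stand-in for a detrended series
strengths = sapply(2:79, function(p) power_fraction(x, p))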
For example, suppose we are looking at a simple sine wave with a period of 24 years. Figure 5 shows the sine wave, along with periodicity analysis in blue showing the power in each of the various length cycles:
Figure 5. A sine wave, along with the periodicity analysis of all cycles up to half the length of the dataset.
Looking at Figure 5, we can see one clear difference between Fourier analysis and periodicity analysis — the periodicity analysis shows peaks at 24, 48, and 72 years, while a Fourier analysis of the same data would only show the 24-year cycle. Of course, the apparent 48 and 72 year peaks are merely a result of the 24 year cycle. Note also that the shortest length peak (24 years) is sharper than the longest length (72-year) peak. This is because there are fewer data points to measure and average when we are dealing with longer time spans, so the sharp peaks tend to broaden with increasing cycle length.
To move to a more interesting example relevant to the Loehle/Scafetta paper, consider the barycentric cycle of the sun. The sun revolves around the center of mass (barycenter) of the solar system. As it does so, it speeds up and slows down because of the varying pull of the planets. What are the underlying cycles?
We can use periodicity analysis to find the cycles that have the most effect on the barycentric velocity. Figure 6 shows the process, step by step:
Figure 6. Periodicity analysis of the annual barycentric velocity data.
The top row shows the barycentric data on the left, along with the amount of power in cycles of various lengths on the right in blue. The periodicity diagram at the top right shows that the overwhelming majority of the power in the barycentric data comes from a ~20 year cycle. It also demonstrates what we saw above: the spreading of the peaks of the signal at longer time periods because of the decreasing amount of data.
The second row left panel shows the signal that is left once we subtract out the 20-year cycle from the barycentric data. The periodicity diagram on the second row right shows that after we remove the 20-year cycle, the maximum amount of power is in the 83 year cycle. So as before, we remove that 83-year cycle.
Once that is done, the third row right panel shows that there is a clear 19-year cycle (visible as peaks at 19, 38, 57, and 76 years; this cycle may be a result of the fact that the “20-year cycle” is actually slightly less than 20 years). When that 19-year cycle is removed, there is a 13-year cycle visible at 13, 26, 39 years, and so on. And once that 13-year cycle is removed … well, there’s not much left at all.
The bottom left panel shows the original barycentric data in black, and the reconstruction made by adding just these four cycles of different lengths is shown in blue. As you can see, these four cycles are sufficient to reconstruct the barycentric data quite closely. This shows that we’ve done a valid deconstruction of the original data.
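The step-by-step extraction shown in Figure 6 can be sketched in the same style: find the strongest cycle, subtract it out, and repeat. This is my own minimal rendering built on the sketches above, not the appendix code, and the default of four passes simply mirrors the figure:

# peel off the n strongest cycles one at a time, accumulating a reconstruction
extract_cycles = function(x, n = 4) {
  residual = x
  recon = numeric(length(x))
  found = integer(n)
  for (i in 1:n) {
    lens = 2:(length(x) %/% 2)
    strengths = sapply(lens, function(p) power_fraction(residual, p))
    p_best = lens[which.max(strengths)] # strongest remaining cycle length
    cyc = mean_cycle(residual, p_best)
    recon = recon + cyc # add the cycle to the reconstruction
    residual = residual - cyc # remove it and look again
    found[i] = p_best
  }
  list(lengths = found, reconstruction = recon, residual = residual)
}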
Now, what does all of this have to do with the Loehle/Scafetta paper? Well, two things. First, in the discussion on that thread I had said that I thought that the 60 year cycle that Loehle/Scafetta said was in the barycentric data was very weak. As the analysis above shows, the barycentric data does not have any kind of strong 60-year underlying cycle. Loehle/Scafetta claimed that there were ~ 20-year and ~ 60-year cycles in both the solar barycentric data and the surface temperature data. I find no such 60-year cycle in the barycentric data.
However, that’s not what I set out to investigate. I started all of this because I thought that the analysis of random red-noise datasets might show spurious cycles. So I made up some random red-noise datasets the same length as the HadCRUT3 annual temperature records (158 years), and I checked to see if they contained what look like cycles.
A “red-noise” dataset is one which is “auto-correlated”. In a temperature dataset, auto-correlation means that today’s temperature depends in part on yesterday’s temperature. One kind of red-noise data is created by what are called “ARMA” processes. “AR” stands for “auto-regressive”, and “MA” stands for “moving average”. This kind of random noise is very similar to observational datasets such as the HadCRUT3 dataset.
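For the curious, generating one such pseudo-temperature series takes only a few lines of R. The ARMA(1,1) coefficients below are the ones used in the appendix code; the commented line shows how such coefficients could be estimated, where hadcrut is a placeholder name for the annual HadCRUT3 anomaly series:

# fit = arima(hadcrut, order = c(1, 0, 1)) # would estimate the AR and MA terms
set.seed(1) # illustrative seed, for reproducibility only
fake = arima.sim(list(order = c(1, 0, 1), ar = .9673, ma = -.4591), n = 158)
fake = (fake - mean(fake)) * .2546 / sd(fake) # zero mean, HadCRUT3-like sd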
So, I made up a couple dozen random ARMA “pseudo-temperature” datasets using the AR and MA values calculated from the HadCRUT3 dataset, and I ran a periodicity analysis on each of the pseudo-temperature datasets to see what kinds of cycles they contained. Figure 7 shows eight of the two dozen random pseudo-temperature datasets in black, along with the corresponding periodicity analysis of the power in various cycles in blue to the right of the graph of each dataset:
Figure 7. Pseudo-temperature datasets (black lines) and their associated periodicity (blue circles). All pseudo-temperature datasets have been detrended.
Note that all of these pseudo-temperature datasets have some kind of apparent underlying cycles, as shown by the peaks in the periodicity analyses in blue on the right. But because they are purely random data, these are only pseudo-cycles, not real underlying cycles. Despite being clearly visible in the data and in the periodicity analyses, the cycles are an artifact of the auto-correlation of the datasets.
So for example random set 1 shows a strong cycle of about 42 years. Random set 6 shows two strong cycles, of about 38 and 65 years. Random set 17 shows a strong ~ 45-year cycle, and a weaker cycle around 20 years or so. We see this same pattern in all eight of the pseudo-temperature datasets, with random set 20 having cycles at 22 and 44 years, and random set 21 having a 60-year cycle and weak smaller cycles.
That is the main problem with the Loehle/Scafetta paper. While they do in fact find cycles in the HadCRUT3 data, the cycles are neither stronger nor more apparent than the cycles in the random datasets above. In other words, there is no indication at all that the HadCRUT3 dataset has any kind of significant multi-decadal cycles.
How do I know that?
Well, one of the datasets shown in Figure 7 above is actually not a random dataset. It is the HadCRUT3 surface temperature dataset itself … and it is indistinguishable from the truly random datasets in terms of its underlying cycles. All of them have visible cycles, it’s true, in some cases strong cycles … but they don’t mean anything.
w.
APPENDIX:
I did the work in the R computer language. Here’s the code, including the “periods” function which does the periodicity calculations. I’m not that fluent in R, it’s about the eighth computer language I’ve learned, so it might be kinda klutzy.
#FUNCTIONS
PI=4*atan(1) # value of pi
dsin=function(x) sin(PI*x/180) # sine function for degrees
regb =function(x) {lm(x~c(1:length(x)))[[1]][[1]]} #gives the intercept of the trend line
regm =function(x) {lm(x~c(1:length(x)))[[1]][[2]]} #gives the slope of the trend line
detrend = function(x){ #detrends a line
x-(regm(x)*c(1:length(x))+regb(x))
}
meanbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle means
rep(tapply(x,modline,mean),length.out=length(x))
}
countbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle number of datapoints N
rep(tapply(x,modline,length),length.out=length(x))
}
sdbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle standard deviations
rep(tapply(x,modline,sd),length.out=length(x))
}
normmatrix=function(x) sum(abs(x)) #returns the norm of the dataset, which is proportional to the power in the signal
# Function "periods" (below) is the main function that calculates the percentage of power in each of the cycles. It takes as input the data being analyzed (inputx). It displays the strength of each cycle. It returns a list of the power of the cycles (vals), along with the means (means), number of datapoints N (count), and standard deviations (sds).
# There's probably an easier way to do this; I've used a brute-force method. It's slow on big datasets.
periods=function(inputx,detrendit=TRUE,doplot=TRUE,val_lim=1/2) {
x=inputx
if (detrendit==TRUE) x=detrend(as.vector(inputx))
xlen=length(x)
modmatrix=matrix(NA, xlen,xlen)
modmatrix=(col(modmatrix)-1)%%row(modmatrix) # row number = cycle length; base R uses %% for modulo (there is no mod() function)
countmatrix=aperm(apply(modmatrix,1,countbyrow,x))
meanmatrix=aperm(apply(modmatrix,1,meanbyrow,x))
sdmatrix=aperm(apply(modmatrix,1,sdbyrow,x))
xpower=normmatrix(x)
powerlist=apply(meanmatrix,1,normmatrix)/xpower
plotlist=powerlist[1:(length(powerlist)*val_lim)]
if (doplot) plot(plotlist,ylim=c(0,1),ylab="% of total power",xlab="Cycle Length (yrs)",col="blue")
invisible(list(vals=powerlist,means=meanmatrix,count=countmatrix,sds=sdmatrix))
}
# /////////////////////////// END OF FUNCTIONS
# TEST
# each row in the values returned represents a different period length.
myreturn=periods(c(1,2,1,4,1,2,1,8,1,2,2,4,1,2,1,8,6,5))
myreturn$vals
myreturn$means
myreturn$sds
myreturn$count
#ARIMA pseudotemps
# note that they are standardized to a mean of zero and a standard deviation of 0.2546, which is the standard deviation of the HadCRUT3 dataset.
# each row is a pseudotemperature record
instances=24 # number of records
instlength=158 # length of each record
rand1=matrix(arima.sim(list(order=c(1,0,1), ar=.9673,ma=-.4591),
n=instances*instlength),instlength,instances) #create pseudotemps
pseudotemps =(rand1-mean(rand1))*.2546/sd(rand1)
# Periodicity analysis of simple sine wave
par(mfrow=c(1,2),mai=c(.8,.8,.2,.2)*.8,mgp=c(2,1,0)) # split window
sintest=dsin((0:157)*15)# sine function
plotx=sintest
plot(detrend(plotx)~c(1850:2007),type="l",ylab="24 year sine wave",xlab="Year")
myperiod=periods(plotx)
I disagreed with the L & S 60 year analysis, for the reason that I could not see it in the 350-year-long CET record.
The only place where there is a sort of longish quasi-60-year cycle is in the secular variation of the geomagnetic field at the Hudson Bay magnetic pole. Even so it is only ‘around 60ish’, detected as a negative forcing (falling GMF) – troughs at ~1750, ~1810, ~1870s, ~1930s and the late 1990s
I think it is due to the Earth’s passage through so-called ‘magnetic clouds’ (CMEs) connecting the sun to the two largest magnetospheres (Jupiter & Saturn). More (speculative) details at
http://www.vukcevic.talktalk.net/LFC5.htm
also see Fig. 8 in the ‘Earth bound effects’ chapter, not all graphs are numbered.
Robert of Ottawa wrote:
The human mind looks for patterns where there are not necessarily any – both a gift and a curse.
PiperPaul says:
Great observation. And people that don’t understand confirmation bias are doomed to continue misinterpreting things due to mental myopia.
Both comments are core to the understanding of both paranoia (not involved here!) and “wiggle-matching.” Humans (and others) detect potential threats by isolating a dim image from surrounding noise, as, for example, discerning a tiger amongst foliage. This capability is “designed” (ha-ha, just kidding!) to err on the conservative side, giving a TIGER! EEK! signal when there is no tiger a lot more often than vice versa. It’s safer to see imaginary tigers and reach for your spear 100 times than to NOT see a real tiger once and get eaten. Cost-to-risk ratio very reasonable.
So L&S’s “tiger” may be imaginary. But, if so, they are not the only ones seeing tigers in the data, and at least L&S are not demanding we reach for our wallets to drive phantom beasts away.
Leif Svalgaard says:
July 30, 2011 at 1:04 pm
http://www.leif.org/research/FFT-Periods-with-Trends.png
Great post.
A couple of years ago I posted a comment, using Fourier Analysis, or Spectral Analysis, showing a recent leveling off of the NOAA global temperature. It raised a hornet’s nest of comments at RC, including from one “Tamino” who dismissed it as bungled, although in the meantime he seems to have discovered a variation called “wavelets”. The graph I posted back then is referenced below:
http://www.4shared.com/photo/uv-Q2YoJ/NOAA_yr_001.html
I also compared Fourier and Empirical Mode Decomposition (EMD) analyses of the HadCRUT global temperature results, and would have posted them here. However, the site with my graphs seems to have been removed, and it will take a little while to get them back on line.
The IEEE paper you referenced is very interesting and will merit some study. An additional note: mentioning the IEEE at RC seemed to be like referencing a cultist belief in their minds, while Dr. Norbert Wiener was some unknown who knew nothing of mathematical prediction methods. Which said a lot.
The search for wave signatures in the climate record identifying direct astrophysical forcings of climate, makes a major assumption that has not been spelled out here; namely, that the climate system responds in a passive, linear manner. The climate record is noisy, and Willis’ study here is an important critical shot across the bows for such an approach, demonstrating the prevalence of spurious underlying wave signals in noisy wavetrains, arising from autocorrelation.
However climate exhibits nonlinear, chaotic characteristics, as has been explained by several here on WUWT, notably Willis himself. A nonlinear-chaotic climate may not respond passively and linearly to forcings, astrophysical or otherwise. In a periodically forced nonlinear oscillator the forcing can either be strong – in which case the periodicity of the forcer is clearly evident in the forced system, or it can be weak. In a weakly periodically forced nonlinear oscillator, the relationship between the forcing and responsive frequencies can be very complex, such as to defeat analytical attempts to find simple underlying forcing wave frequencies.
In this case one has to look for a very different type of diagnostic pattern to analyse the system as a nonlinear oscillator, such as log-log and fractal character.
Geoff & Willis
Graph shown at http://www.landscheidt.info/images/willis.png
has peaks very close to the Hudson Bay magnetic pole’s secular variation troughs (forcing) at
~1750, ~1810, ~ 1870s, ~1930s and late 1990s
as calculated from the ETH Zurich data.
http://www.vukcevic.talktalk.net/LFC5.htm
graph at Fig. 8.
Willis Eschenbach says:
July 30, 2011 at 10:42 am
tallbloke says:
July 30, 2011 at 9:58 am
I commend Willis for doing the analysis, it adds to our knowledge. I don’t think it provides a basis for dismissing Loehle and Scafetta’s paper, or other efforts to discover the linkage between solar system dynamics and climate, but it should galvanise efforts to improve on their ‘first foray’ into this area.
For the Loehle/Scafetta paper to be valid, the cycles need to be valid. The actual 20 year cycle in the temperature data is extremely weak. Their claimed 60 year cycle in the temperature data cannot be determined to be real. The relative sizes are reversed between the barycentric cycles and the temperature cycles. I’ve shown that we can reconstruct the data (using their method) with a 40 and 60 year cycle.
If that together doesn’t provide “a basis for dismissing Loehle and Scafetta’s paper”, what more do you want?
Hi Willis,
It’s great that you can also get a good fit to the temperature data with 40 and 60 year cycles (I think you’d find it works even better with 60 and 37.6 years – your old pal Ted L liked 37.6; he found it was the best natural subdivision of the 179 year cycle of the outer planets). There are lots of cycles buried in the temperature data. But this doesn’t mean L & S are wrong to use 60 and 20 years; it just means there are several ways of going about doing a study similar to theirs which will produce similarly good-looking results.
The question is, how good is the predictive power? It would take a long time to test if we sat and waited 90 years to see if they’re right, so you have used stats techniques to determine the matter. However, while you show that 60 years is not strong in the solar speed relative to barycentre data, you haven’t done a velocity study, as Ninderthana points out. It may be that the direction of the vector is unimportant, but given the historical evidence for 60 year cycles, it may well matter.
Also, the Gnevyshev and Ohl rule tells us that odd-numbered solar cycles are usually stronger than even-numbered cycles. In a 60 year period, you get ~30 years with two odds and one even, followed by another ~30 year period where you get two evens and one odd. This rule gets violated in a way we have found is predictable, and the upcoming solar cycle 25 is one of those times. This means the L&S temperature projection is likely to get a fairly early falsification in phenomenological terms, regardless of the stats tests.
You probably think I’m rambling by now, but what I’m trying to point out is that we are actually making some good progress with the planetary theory in terms of predicting solar activity. I successfully predicted a solar cycle 24 amplitude of 35-50 SSN in 2008 on CA using Ted L’s methods. L&S have tried to go a step further and predict temperatures and deduce co2 contribution. I personally think they’ve overstretched, but it’s great to see the pioneering spirit alive and well. Scafetta agreed with me that this is a first foray in to the modern climate literature for the planetary theory, and will need to be followed up with more and better developed studies.
Scafetta has complained that you haven’t analysed his preparatory work in Scafetta 2010, so take a look at that, and see if it makes a difference to your view. It would be a shame to see the development of our understanding of the relationships between solar system dynamics and climate killed off before it has a chance to get into a stronger stride.
Best to you.
tb
tallbloke says:
July 30, 2011 at 1:53 pm
I successfully predicted a solar cycle 24 amplitude of 35-50 SSN in 2008 on CA using Ted L’s methods.
First, SC24 has not peaked yet, so your ‘prediction’ cannot be said to be ‘successful’. Second, other people predict a smallish cycle as well, so the fact that you also do does not show that your ‘prediction’ stands out as decisive.
Tad says:
July 30, 2011 at 11:56 am
20- and 60-year cycles that were already known to exist? Known from where? Cycles in what? L/S claimed that they were present in the barycentric velocity data. And they are … but the 60 year cycle is tiny compared to other cycles.
And why choose those two cycle lengths, rather than say the 13 year cycle that is also “already known to exist”?
Tad, there’s an infinity of cycles that “are already known to exist.” However, the fact that there is a tiny sixty year cycle in the barycentric velocity data hardly justifies using it as the largest cycle in the temperature data.
My point is simple. I have shown that the appearance of the 60-year cycle in the temperature data is extremely likely to be an artifact of the length of the record and its autocorrelated nature. In addition, there is no strong 20-year cycle in the temperature data either.
w.
The sunspot FFT power spectrum shows a large trough at 20 (19.8) years and also a smaller one at 60 years.
See http://www.vukcevic.talktalk.net/LFC5.htm Fig.9. However, the GMF secular change shows a negative imprint of a relatively strong 60ish-year cycle.
p.s. there is a nice sunspot line-up at the moment, 4-5 groups all in a straight line (SDO 30/07/2011) http://sdo.gsfc.nasa.gov/assets/img/latest/latest_1024_4500.jpg
This caught my attention:
That is the period of time required for the Earth’s core to advance one full rotation relative to the Earth’s surface. The core has a day length that is about 2/3 of a second shorter than the surface’s. The magnetic field is tied to the core, and the magnetic poles are not symmetrical, which suggests the core is not the smooth ball of iron shown in the classroom science books. The core and the surface also do not share a common equator.
So if the core has a ragged irregular shape and it is churning in the soup between the core and the surface, there should also be some kind of bow wave at any significant irregularity on the core. That should be revealed in regional sea level and in undulations of the Earth’s surface over the 400 year cycle. Perhaps even in the global earthquake and volcanism record.
But if there is a direct climate impact that is driven by cosmic rays whose path is influenced by the variations in the position of the magnetic pole (among other things), that should stand out from the noise with a period of 400 years.
Willis:
The L&S paper attempted to estimate climate sensitivity (to atmospheric CO2 concentration), but the focus here – and on the other thread – seems to be about astronomical effects on climate.
The important point is whether or not the assumptions that L&S used to derive climate sensitivity are a valid method to determine climate sensitivity. And the astronomical debate is irrelevant to that consideration.
The L&S estimate assumed there are only two cycles (of 20-year and 60-year lengths) which are significant in the climate data, and it assumed those cycles are constant over the analysis and prediction period.
Those assumptions are problematic for several reasons, only one of which you are discussing here. As Mike Jonas and I discussed on the earlier thread, the issues are:
Are there ‘real’ (not merely apparent) cycles in climate data?
If there are real cycles, then do they have a cause other than being an indication of resonant frequencies in the climate system?
Are all cycles significant or only some?
Do individual cycles vary in amplitude and frequency?
In my opinion, the significant point of your analysis in this thread is stated by you when (at July 30, 2011 at 2:21 pm ) you say;
“My point is simple. I have shown that the appearance of the 60-year cycle in the temperature data is extremely likely to be an artifact of the length of the record and its autocorrelated nature. In addition, there is no strong 20-year cycle in the temperature data either.”
But that is only a part of the problem. I spelled out the entire issue in my post (July 30, 2011 at 1:02 am ) in the other thread where I wrote:
“[snip]
The issue is that apparent cycles vary but the L&S method assumes they don’t.
The amplitude of cycles varies, and not all cycles continue without interruption. The L&S method assesses only two cycles, of 20-year and 60-year length. Either or both could have increased or reduced in amplitude. And there are other cycles that could have varied, too.
For example, there is another cycle of ~900-year duration that provides the Roman, Medieval and Present warm periods separated by the cool periods of the Dark Age and Little Ice Age. This ~900-year cycle is certainly not sinusoidal (warming from the LIA has been approximately linear), and if it continues then it will soon enter (or has started to enter) a cooling phase. The slope of this cycle may have increased or reduced as part of its transition to cooling.
So,
variations in natural cycles could be entirely responsible for the difference between adjacent cycles which the L&S method ascribes to ‘climate sensitivity’.
or, alternatively,
variations in natural cycles may have masked almost all the difference between adjacent cycles which the L&S method ascribes to ‘climate sensitivity’.
It is not possible to determine which of these alternative possibilities is true because
(a) we lack detailed knowledge of the cycles and their causes
and
(b) there would be no possibility of deconvoluting the cycles even if we had detailed knowledge of the cycles and their causes.
[snip]
In this case, it is not possible to demonstrate the L&S determination of climate sensitivity is ‘good’ because we lack detailed knowledge of the cycles and their causes.
[snip]”
In summation, the climate sensitivity indicated by the L&S method is not justified by the method. The value of climate sensitivity obtained by the L&S method may be near its ‘true’ value (and I think it is) but – if so – then that could be a mere coincidence. In the absence of other information, the possible errors of the method are so great that – according to the L&S method – the ‘true’ climate sensitivity could be larger than the largest used by the IPCC or negative, or anything in between.
Richard
Leif Svalgaard says:
July 30, 2011 at 11:28 am
Geoff Sharp says:
July 30, 2011 at 9:21 am
Also, when looking at the temperature or solar proxy record, the 172 year cycle is not supremely evident because the cycle has multiple prongs.[…]
If we only relied on Fourier analysis the world would be a poorer place.
——————————
Sometimes just looking at the data works too [although Fourier analysis would also pick up any cycles, even if the period is not strictly constant]. There is no correlation between your U/N 172 stuff and solar activity. Here is a direct comparison for the past 6000 years.
Wow, Leif contemplating the Wolff & Patrone paper and now analyzing JPL barycentric data, the world is a changing place. While the JPL distance values are of some use, the velocity figures are quite different; you still probably won’t see the 60 year signal for the good reasons outlined, but you should be working with the right dataset.
But back to your comment… You must have missed my earlier post regarding how the 172 year quasi-cycle is hidden from Fourier-type analysis, so I will repeat it; it also applies to the JPL AM data.
———————-
“Also, when looking at the temperature or solar proxy record, the 172 year cycle is not supremely evident because the cycle has multiple prongs. It travels in a cluster (usually 3) of multiple components that occur every 172 years. Think of it as a hand on a clock that ends in a trident: every time it goes past midnight the number of prongs varies; sometimes the last prong is missing, or the first prong could be missing, or all three are present. Add to that a variable “strength” to each prong and you see why a regular pattern cannot be teased out, but the underlying force is still there. This is how grand minima work, another example of Uranus and Neptune at work.
If we only relied on Fourier analysis the world would be a poorer place. Nature does not always conform.”
I am trying to make it as simple as possible; if you can see any problem with my reasoning I will step through it slowly if required.
In recent related threads (at WUWT & Climate Etc.) someone linked to this:
Vincze, M.; & Janosi, I.M. (2011). Is the Atlantic Multidecadal Oscillation (AMO) a statistical phantom? Nonlinear Processes in Geophysics 18, 469-475. doi:10.5194/npg-18-469-2011.
http://www.nonlin-processes-geophys.net/18/469/2011/npg-18-469-2011.pdf
Found time to look it over. 3 comments:
1. Erroneously conflates AMO with AMOC.
2. Uses 10-year-smoothed AMO definition without even mentioning more-commonly-encountered AMO definitions.
3. Based on a garbage “Gaussian IID” assumption. (For a good laugh, see figure 7. Hopelessly blinded by patently untenable abstraction.)
HOWEVER, they got one very important thing right:
There’s no stationary 60 year cycle.
Regards.
Confounding once again very wisely pointed out by Ninderthana (July 30, 2011 at 10:18 am) [accepting refinement/clarification by Leif Svalgaard (July 30, 2011 at 10:30 am)] sounds sensible. Piers Corbyn appears to be on the same page and I thank him for very efficiently straightening me out (via one very concise e-mail) where I had either misunderstood &/or been misdirected by past comments of Leif Svalgaard. Regards.
Geoff Sharp says:
July 30, 2011 at 4:28 pm
Wow, Leif contemplating the Wolff & Patrone paper and now analyzing JPL barycentric data, the world is a changing place.
In stark contrast to you, I actually examine other possibilities, instead of being stuck on one worldview.
While the JPL distance values are of some use, the velocity figures are quite different; you still probably won’t see the 60 year signal for the good reasons outlined, but you should be working with the right dataset.
It makes no difference which dataset you are working with, as they are all just variations on the same reality.
But back to your comment… You must have missed my earlier post regarding how the 172 year quasi-cycle is hidden from Fourier-type analysis, so I will repeat it; it also applies to the JPL AM data.
I showed the actual data in an easy to compare format. No Fourier analysis needed.
I am trying to make it as simple as possible; if you can see any problem with my reasoning I will step through it slowly if required.
You can begin by
1) showing that the speed or AM data are different from the distance data.
2) annotating [with circles or red dots] how the U/N perturbations align with the grand minima on the plots I provided.
Paul Vaughan says:
July 30, 2011 at 5:08 pm
HOWEVER, they got one very important thing right:
There’s no stationary 60 year cycle.
Agree, Geoff?
Regards.
Paul Vaughan says:
July 30, 2011 at 5:37 pm
Confounding once again very wisely pointed out by Ninderthana (July 30, 2011 at 10:18 am) [accepting refinement/clarification by Leif Svalgaard (July 30, 2011 at 10:30 am)] sounds sensible.
Obscure as always. If you have something to say, say it in plain English.
@John Day
So you can’t reconstruct the original signal by simply adding the spectral components, at least not without a lot of complicated bookkeeping to account for interactions between components.
@Willis
Well … since I reconstructed the signal in Figure 6 without any “complicated bookkeeping”, I’m not sure what you mean by this objection. I just calculated the individual cycles and added them together … what am I missing?
Sorry, I didn’t make myself clear. What I meant was (if the projection space axes are not orthogonal) that you get a different decomposition depending on the order in which you pick the projections to subtract for calculating residuals. The ‘bookkeeping’ would be the calculation of the interactions between the non-orthogonal components, to reconcile the variance caused by ordering.
Quoting Sethares and Staley from their original paper on periodicity analysis:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.8853&rep=rep1&type=pdf
“… a projection onto one sinusoidal basis function (in the Fourier transform) is independent of the projections onto others, and the Fourier decomposition can proceed by projecting onto one subspace, subtracting out the projection, and repeating. Orthogonality guarantees that the order of projection is irrelevant. This is not true for projection onto nonorthogonal subspaces such as the periodic subspaces [for Periodicity Analysis] … ”
Consider the simple example of a 2-D orthogonal system, e.g. X-Y axes for plotting 2-D points. If I change the X coordinate of a point, it will move in the X direction, but the Y value of the point will remain the same. Now what happens if the X-Y axes are not orthogonal, i.e. at some angle other than ninety degrees (say 45 degrees)? Now the Y values are dependent on the X values. If I change the X value and replot, the new point will have a different Y value, dependent on the amount of non-orthogonality.
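To make the order dependence concrete, here is a tiny R sketch; the point and axes are made up purely for illustration. Projecting onto two non-orthogonal directions in the two possible orders gives two different decompositions:

proj = function(v, u) sum(v * u) / sum(u * u) * u # projection of v onto u
v = c(3, 2) # an arbitrary point
u1 = c(1, 0) # the x axis
u2 = c(1, 1) / sqrt(2) # a 45-degree axis, not orthogonal to u1
a1 = proj(v, u1); b1 = proj(v - a1, u2) # project onto u1 first
b2 = proj(v, u2); a2 = proj(v - b2, u1) # project onto u2 first
rbind(order1 = c(a1, b1), order2 = c(a2, b2)) # the decompositions differ

With orthogonal axes the two orders would give identical components; at 45 degrees they do not, which is exactly the bookkeeping problem described above.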
The authors go on to say that this is not always bad. It creates a kind of non-unique redundancy that they found useful. But I think it’s bad if you’re depending on the spectral analysis to create unique signatures which are invariant to spatial shifting (which Fourier (and Gabor) components offer).
There is a similar concept in wavelet analysis (which Paul Vaughan mentioned) called ‘wavelet frames’: wavelet bases which have a certain amount of redundant non-orthogonality, with a frame parameter that allows you to adjust this redundancy from ‘loose’ (highly redundant) to ‘tight’ (almost orthogonal): http://inst.eecs.berkeley.edu/~ee225b/sp11/handouts/wavelet-vetterli-1991.pdf [see page 23]
I’m still not clear on the HadCRUT3 spectra above. Could you provide a pointer to the data you analyzed so I can do my own spectral analysis?
Thanks.
Willis
Does not the global mean temperature anomaly show a 60-year cycle, as shown below?
http://bit.ly/ePQnJj
Leif Svalgaard says:
July 30, 2011 at 5:52 pm
You can begin by
1) showing that the speed or AM data are different from the distance data.
2) annotating [with circles or red dots] how the U/N perturbations align with the grand minima on the plots I provided.
1. You are not keeping up, Leif. I produced this graph in May 2009; it is on my website, in my paper, and has been produced here at least 5 times in the past few days.
http://tinyurl.com/2dg9u22/images/vel_am.jpg
Please take the time to study it properly. While AM and solar velocity are related, you will notice there are times when there is some divergence in the peaks and troughs. This happens mainly near the yellow dots, which mark the U/N conjunction. The AM data is more closely related to the distance data, but still not the same.
2. It is far better to plot the AM when looking for U/N perturbations; once again, the detail you require has been published since 2008. My paper has the annotations back to 1200BC. These perturbations relate to grand minima triggers; they have nothing to do with the 60 year cycle.
http://arxiv.org/ftp/arxiv/papers/1005/1005.5303.pdf
Both you and Willis have refused to look at the data via other methods, and rely solely on a one-pass method of analysis. I have shown why the cycles are not apparent in the Fourier analysis, but to no avail. Are you guys talking past me?
When it comes to solar velocity and the solar powerwave, the background trend is what is important. I can’t see why you fail to see this. Once again, look at the powerwave diagram: see the AM wave that modulates solar cycle output; it is a 172 year modulation of the solar dynamo, a second-level background trend that will never show via your methods.
http://tinyurl.com/2dg9u22/images/Powerwave.png
A guide to understanding the powerwave below…read it.
http://tinyurl.com/2dg9u22/?q=node/218
Geoff Sharp says:
July 30, 2011 at 8:59 pm
Please take the time to study it properly. While AM and solar velocity are related, you will notice there are times when there is some divergence in the peaks and troughs. This happens mainly near the yellow dots, which mark the U/N conjunction. The AM data is more closely related to the distance data, but still not the same.
Granted that the solar velocity is a crappy fit to AM. However the distance shows the influence of U/N even more clearly, while otherwise agreeing closely with AM:
http://www.leif.org/research/Comparison-AM-Barycenter-Distance.png
Because 1) the Steinhilber solar activity curve is the best we have for the moment [based both on 10Be and 14C], and 2) we need to see the result on a bigger scale, please annotate with red dots the graphs I provided:
http://www.leif.org/research/Solar-Activity-vs.Barycenter-Distance-BC.png
http://www.leif.org/research/Solar-Activity-vs.Barycenter-Distance-AD.png
Failure to do so is a clear indication that the curves don’t correlate, right?
Are you guys talking past me?
No, I’m showing you the raw data, no FFT, so please annotate the graphs and show us.
When it comes to solar velocity and the solar powerwave, the background trend is what is important. I can’t see why you fail to see this.
If the comparison fails or you fail to make it, there is no need to look at the rest.
Geoff Sharp says:
July 30, 2011 at 8:59 pm
Once again, look at the powerwave diagram: see the AM wave that modulates solar cycle output
It is clear by inspection that what you label ‘Angular Momentum’ and call ‘the AM Wave’ on
http://www.landscheidt.info/images/Powerwave.png
is not what you show as labelled ‘Angular Momentum’ on this graph
http://tinyurl.com/2dg9u22/images/vel_am.jpg
Perhaps you could explain that. It seems that on the first graph the AM wave has 11-yr cycles to match[?] the solar cycles, but on the second graph you show the real AM with its 20-yr period.
Geoff Sharp says:
July 30, 2011 at 8:59 pm
When it comes to solar velocity and the solar powerwave, the background trend is what is important. I can’t see why you fail to see this.
If the comparison fails or you fail to make it, there is no need to look at the rest.
But it is also clear that what you label ‘Angular Momentum’ on the ‘powerwave plot’
http://tinyurl.com/2dg9u22/images/Powerwave.png [seems to have ~11-yr period]
is not what you [correctly] label ‘Angular Momentum’ on this plot:
http://tinyurl.com/2dg9u22/images/vel_am.jpg [seems to have ~20-yr period, as it should]
This is perhaps one reason that we don’t see it…
Leif Svalgaard says:
July 30, 2011 at 10:02 pm
Perhaps you could explain that. It seems that on the first graph the AM wave has 11-yr cycles to match[?] the solar cycles, but on the second graph you show the real AM with its 20-yr period.
There is an explanation on my website and in my paper, but I am happy to repeat it here. The AM graph is a sine wave, but the high and low values have equal value. Think of linear acceleration/deceleration where each is just as important. Another method would be to invert every second sunspot cycle, but I chose to invert the bottom half of the AM graph to show the wave. The objective is to show the power function that might be difficult to visualize using other methods.
It is important to compare the solar AM values with the Holocene record. The U/N perturbations come in different forms that need to be understood; using a different format will cause confusion. It is a big job to produce the AM charts; 3000 years should be enough data (back to 1200BC) to do an initial check. If need be I can go back further at a later date, or you could try yourself using Carl’s data; the important issue is to understand how to quantify the U/N perturbation, as there are at least 2 types that vary in intensity. There are other factors that need to be known before comparison, and I suggest you read my paper thoroughly. You will also see I have already done the comparison you are attempting in my paper, using the Solanki data.
I will await your questions on perturbation quantification.
Geoff Sharp says:
July 30, 2011 at 11:11 pm
The AM graph is a sine wave, but the high and low values have equal value.
If they have equal value they cannot be high and low. In any event, the graph is misleadingly mislabeled. Both the text and the left-hand scale.
Think of linear acceleration/deceleration where each is just as important.
This does not make sense. What you are doing is to invert half of the down slope and half of the up slope. There is no justification for this as the deceleration covers the whole of the down slope and the acceleration covers the whole of the up slope, not just half.
The objective is to show the power function that might be difficult to visualize using other methods.
It seems to me that the objective is to cope with the fact that the AM shows a 20-yr cycle and not a 10-yr cycle, and thus is just a misleading sleight of hand. Not science.
It is important to compare the solar AM values with the Holocene record.
This is what I do, 5000 years of it.
It is a big job to produce the AM charts
It took me all of 10 minutes.
I will await your questions on perturbation quantification.
Naw, not needed, as the correlation you claim has already been shown to be non-existent. I have carefully prepared corresponding AM and solar activity records for you to put dots on. To regain a little credibility you should annotate the graphs I [with minimal effort] have prepared for you. This is like dividing a cake: one person does the slicing, the other person chooses which slice he wants. In this way, a fair division is assured. So, annotate!
Leif Svalgaard says:
July 30, 2011 at 11:32 pm
Geoff Sharp says:
July 30, 2011 at 11:11 pm
The AM graph is a sine wave, but the high and low values have equal value.
If they have equal value they cannot be high and low. In any event, the graph is misleadingly mislabeled. Both the text and the left-hand scale.
The left hand scale is a calculated value for all values under 2E+47, so maybe a bit confusing.
Think of linear acceleration/deceleration where each is just as important.
This does not make sense. What you are doing is to invert half of the down slope and half of the up slope. There is no justification for this as the deceleration covers the whole of the down slope and the acceleration covers the whole of the up slope, not just half.
Yes, you’re right, a bad analogy; the second half of the cycle would be more appropriate. Observations show that solar cycles are higher the further the AM chart gets away from the centre line. When you get into perturbation quantification this will become clear. Your analysis could also be useful.
The objective is to show the power function that might be difficult to visualize using other methods.
It seems to me that the objective is to cope with the fact that the AM shows a 20-yr cycle and not a 10-yr cycle, and thus is just a misleading sleight of hand. Not science.
Not at all. I can tell you have not read the paper, otherwise you would realize I am NOT trying to line up the AM cycles with the solar cycles. The AM curve is purely a background engine and is not responsible for cycle length; it doesn’t matter how long the cycle is. This is the mistake that Ed Fix is making. To make it clearer, each AM cycle represents the outer and inner loop of the solar path, which is around 10 years each. A large outer loop usually corresponds with a high solar cycle; a tight inner loop close to the SSB corresponds with a higher solar cycle. This is the modulating force, but there are times when the disruptive force (grand minima) overrides. The timing of the U/N perturbation should also start to become clear. This diagram will help (you should have studied it already):
http://tinyurl.com/2dg9u22/images/carsten.jpg
It is important to compare the solar AM values with the Holocene record.
——————-
This is what I do, 5000 years of it.
It is a big job to produce the AM charts
It took me all of 10 minutes.
This exercise cannot be done with the solar distance values; you need to either plot Carl’s AM values or pull out all the JPL vector data and apply a formula.
I will await your questions on perturbation quantification.
Naw, not needed, as the correlation you claim has already been shown to be non-existent. I have carefully prepared corresponding AM and solar activity records for you to put dots on. To regain a little credibility you should annotate the graphs I [with minimal effort] have prepared for you. This is like dividing a cake: one person does the slicing, the other person chooses which slice he wants. In this way, a fair division is assured. So, annotate!
No, you have prepared solar distance graphs. I have already annotated the AM graphs back to 1200BC. If they are not sufficient we will have to start again. You are not in a position yet to determine correlation.