Riding a Pseudocycle

Guest Post by Willis Eschenbach

Loehle and Scafetta recently posted a piece on decomposing the HadCRUT3 temperature record into a couple of component cycles plus a trend. I disagreed with their analysis on a variety of grounds. In the process, I was reminded of work I had done a few years ago using what is called “Periodicity Analysis” (PDF).

A couple of centuries ago, a gentleman named Fourier showed that any signal could be uniquely decomposed into a number of sine waves with different periods. Fourier analysis has been a mainstay analytical tool since that time. It allows us to detect any underlying regular sinusoidal cycles in a chaotic signal.

Figure 1. Joseph Fourier, looking like the world’s happiest mathematician
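For readers who want to experiment, R's built-in fft function will find such sinusoidal cycles. Here is a minimal sketch of my own (the test signal and noise level are invented for illustration, not taken from the post):

n <- 158 # same length as the HadCRUT3 annual record
x <- sin(2 * pi * (1:n) / 24) + rnorm(n, sd = 0.3) # a 24-year sine wave plus noise
amp <- Mod(fft(x))[2:(n %/% 2)] # amplitude spectrum, skipping the mean term
k <- seq_along(amp) # frequency index: k whole cycles per record
n / k[which.max(amp)] # dominant period in years; close to 24, though the frequency bins are coarse at long periods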

While Fourier analysis is very useful, it has a few shortcomings. First, it can only extract sinusoidal signals. Second, although it has good resolution at short timescales, it has poor resolution at the longer timescales. For many kinds of cyclical analysis, I prefer periodicity analysis.

So how does periodicity analysis work? The citation above gives a very technical description of the process, and it’s where I learned how to do periodicity analysis. Let me attempt to give a simpler description, although I recommend the citation for mathematicians.

Periodicity analysis breaks down a signal into cycles, but not sinusoidal cycles. It does so by directly averaging the data itself, so that it shows the actual cycles rather than theoretical cycles.

For example, suppose that we want to find the actual cycle of length two in a given dataset. We can do it by numbering the data points in order and then dividing them into odd- and even-numbered data points. If we average all of the odd data points, and separately average all of the even data points, we get the average cycle of length two in the data. Here is what we get when we apply that procedure to the HadCRUT3 dataset:

Figure 2. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 2. The cycle has been extended to be as long as the original dataset.

As you might imagine for a cycle of length 2, it is a simple zigzag. The amplitude is quite small, only plus or minus a hundredth of a degree, so we can conclude that there is only a tiny cycle of length two in the HadCRUT3 data.
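This folding-and-averaging is easy to write down in code. Here is a minimal sketch in R (my own illustration; the appendix code below does the same job for all cycle lengths at once using a matrix of phase labels). The temps vector is hypothetical stand-in data, not the real HadCRUT3 record:

set.seed(1)
temps <- rnorm(158) # hypothetical stand-in for the HadCRUT3 annual anomalies

cycle_of_length <- function(x, N) { # average cycle of length N
    phase <- ((seq_along(x) - 1) %% N) + 1 # label the points 1, 2, ..., N, 1, 2, ...
    rep(tapply(x, phase, mean), length.out = length(x)) # average by label, extend to full length
}

two_cycle <- cycle_of_length(temps, 2) # the zigzag of Figure 2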

Next, here is the same analysis, but with a cycle length of four. To do the analysis, we number the dataset in order with a cycle of four, i.e. “1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4 …”

Then we average all the “ones” together, and all of the twos and the threes and the fours. When we plot these out, we see the following pattern:

Figure 3. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 4. The cycle has been extended to be as long as the original dataset.

As I mentioned above, we are not reducing the dataset to sinusoidal (sine-wave-shaped) cycles. Instead, we are determining the actual cycles in the dataset. This becomes more evident when we look at, say, the twenty-year cycle:

Figure 4. Periodicity in the HadCRUT3 dataset, with a cycle length of 20. The cycle has been extended to be as long as the original dataset.

Note that the actual 20-year cycle is not sinusoidal. Instead, it rises quite sharply and then decays slowly.

Now, as you can see from the three examples above, the amplitudes of the various length cycles are quite different. If we set the mean (average) of the original data to zero, we can measure the power in the underlying cyclical signals as the sum of the absolute values of the signal data. It is useful to compare this power value to the total power in the original signal. If we do this at all possible cycle lengths, we get a graph of the strength of each of the underlying cycles.
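In code, that power comparison sits directly on top of the cycle extraction sketched earlier. Again this is my own sketch, using the hypothetical temps data; the appendix's periods function computes the same thing for every cycle length in one pass:

power_fraction <- function(x, N) { # fraction of total power in the cycle of length N
    x <- x - mean(x) # set the mean to zero
    sum(abs(cycle_of_length(x, N))) / sum(abs(x)) # cycle power over total power
}

lens <- 2:(length(temps) %/% 2) # all cycle lengths up to half the record
fractions <- sapply(lens, function(N) power_fraction(temps, N))
plot(lens, fractions, type = "h", xlab = "Cycle length (yrs)", ylab = "Fraction of total power")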

For example, suppose we are looking at a simple sine wave with a period of 24 years. Figure 5 shows the sine wave, along with periodicity analysis in blue showing the power in each of the various length cycles:

Figure 5. A sine wave, along with the periodicity analysis of all cycles up to half the length of the dataset.

Looking at Figure 5, we can see one clear difference between Fourier analysis and periodicity analysis — the periodicity analysis shows peaks at 24, 48, and 72 years, while a Fourier analysis of the same data would only show the 24-year cycle. Of course, the apparent 48- and 72-year peaks are merely a result of the 24-year cycle. Note also that the shortest peak (24 years) is sharper than the longest (72-year) peak. This is because there are fewer data points to measure and average when we are dealing with longer time spans, so the sharp peaks tend to broaden with increasing cycle length.

To move to a more interesting example relevant to the Loehle/Scafetta paper, consider the barycentric cycle of the sun. The sun moves around the center of mass (barycenter) of the solar system. As it does so, it speeds up and slows down because of the varying pull of the planets. What are the underlying cycles?

We can use periodicity analysis to find the cycles that have the most effect on the barycentric velocity. Figure 6 shows the process, step by step:

Figure 6. Periodicity analysis of the annual barycentric velocity data. 

The top row shows the barycentric data on the left, along with the amount of power in cycles of various lengths on the right in blue. The periodicity diagram at the top right shows that the overwhelming majority of the power in the barycentric data comes from a ~20-year cycle. It also demonstrates what we saw above: the spreading of the peaks of the signal at longer time periods because of the decreasing amount of data.

The second row left panel shows the signal that is left once we subtract out the 20-year cycle from the barycentric data. The periodicity diagram on the second row right shows that after we remove the 20-year cycle, the maximum amount of power is in the 83 year cycle. So as before, we remove that 83-year cycle.

Once that is done, the third row right panel shows that there is a clear 19-year cycle (visible as peaks at 19, 38, 57, and 76 years); this cycle may be a result of the fact that the “20-year cycle” is actually slightly less than 20 years. When that 19-year cycle is removed, there is a 13-year cycle visible at 13, 26, 39 years and so on. And once that 13-year cycle is removed … well, there’s not much left at all.

The bottom left panel shows the original barycentric data in black, and the reconstruction made by adding just these four cycles of different lengths is shown in blue. As you can see, these four cycles are sufficient to reconstruct the barycentric data quite closely. This shows that we’ve done a valid deconstruction of the original data.
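The step-by-step decomposition in Figure 6 amounts to a simple loop: find the strongest cycle, subtract its full-length average pattern, and repeat. Here is a sketch using the hypothetical helpers above; note that the original analysis picked its peaks (20, 83, 19, and 13 years) by inspection, whereas this loop simply takes the maximum at each step:

residual <- temps - mean(temps)
extracted <- list()
for (step in 1:4) { # four cycles sufficed above
    lens <- 2:(length(residual) %/% 2)
    pf <- sapply(lens, function(N) power_fraction(residual, N))
    best <- lens[which.max(pf)] # strongest remaining cycle length
    extracted[[step]] <- cycle_of_length(residual, best) # its full-length average pattern
    residual <- residual - extracted[[step]] # remove it and repeat
}
reconstruction <- Reduce(`+`, extracted) # sum of the four extracted cycles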

Now, what does all of this have to do with the Loehle/Scafetta paper? Well, two things. First, in the discussion on that thread I had said that I thought that the 60-year cycle that Loehle/Scafetta said was in the barycentric data was very weak. Loehle/Scafetta claimed that there were ~20-year and ~60-year cycles in both the solar barycentric data and the surface temperature data. As the analysis above shows, the barycentric data does not have any kind of strong underlying 60-year cycle; I find no such cycle at all.

However, that’s not what I set out to investigate. I started all of this because I thought that the analysis of random red-noise datasets might show spurious cycles. So I made up some random red-noise datasets the same length as the HadCRUT3 annual temperature record (158 years), and I checked to see if they contained what look like cycles.

A “red-noise” dataset is one which is “auto-correlated”. In a temperature dataset, auto-correlated means that today’s temperature depends in part on yesterday’s temperature. One kind of red-noise data is created by what are called “ARMA” processes. “AR” stands for “auto-regressive”, and “MA” stands for “moving average”. This kind of random noise is very similar to observational datasets such as the HadCRUT3 dataset.
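In R, red noise of this kind is generated with arima.sim, and the AR and MA values can be estimated from the real record with arima. A minimal sketch (mine, not the post's; temps is still the hypothetical stand-in from earlier, and fitting an ARMA(1,1) is my assumption, though it is consistent with the single ar and ma coefficients used in the appendix):

fit <- arima(temps, order = c(1, 0, 1)) # fit an ARMA(1,1) model
ar1 <- coef(fit)["ar1"] # auto-regressive coefficient
ma1 <- coef(fit)["ma1"] # moving-average coefficient

pseudo <- arima.sim(list(ar = ar1, ma = ma1), n = length(temps)) # random data with the same noise structure
pseudo <- (pseudo - mean(pseudo)) * sd(temps) / sd(pseudo) # zero mean, matching standard deviation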

So, I made up a couple dozen random ARMA “pseudo-temperature” datasets using the AR and MA values calculated from the HadCRUT3 dataset, and I ran a periodicity analysis on each of the pseudo-temperature datasets to see what kinds of cycles they contained. Figure 7 shows eight of the two dozen random pseudo-temperature datasets in black, along with the corresponding periodicity analysis of the power in various cycles in blue to the right of each dataset:

Figure 7. Pseudo-temperature datasets (black lines) and their associated periodicity (blue circles). All pseudo-temperature datasets have been detrended.

Note that all of these pseudo-temperature datasets have some kind of apparent underlying cycles, as shown by the peaks in the periodicity analyses in blue on the right. But because they are purely random data, these are only pseudo-cycles, not real underlying cycles. Despite being clearly visible in the data and in the periodicity analyses, the cycles are an artifact of the auto-correlation of the datasets.

So for example, random set 1 shows a strong cycle of about 42 years. Random set 6 shows two strong cycles, of about 38 and 65 years. Random set 17 shows a strong ~45-year cycle and a weaker cycle around 20 years. We see this same pattern in all eight of the pseudo-temperature datasets, with random set 20 having cycles at 22 and 44 years, and random set 21 having a 60-year cycle and weak smaller cycles.

That is the main problem with the Loehle/Scafetta paper. While they do in fact find cycles in the HadCRUT3 data, the cycles are neither stronger nor more apparent than the cycles in the random datasets above. In other words, there is no indication at all that the HadCRUT3 dataset has any kind of significant multi-decadal cycles.

How do I know that?

Well, one of the datasets shown in Figure 7 above is actually not a random dataset. It is the HadCRUT3 surface temperature dataset itself … and it is indistinguishable from the truly random datasets in terms of its underlying cycles. All of them have visible cycles, it’s true, in some cases strong cycles … but they don’t mean anything.

w.

APPENDIX:

I did the work in the R computer language. Here’s the code, giving the “periods” function which does the periodicity calculations. I’m not that fluent in R; it’s about the eighth computer language I’ve learned, so the code might be kinda klutzy.

#FUNCTIONS

PI = 4 * atan(1) # value of pi

dsin = function(x) sin(PI * x / 180) # sine function for degrees

regb = function(x) {lm(x ~ c(1:length(x)))[[1]][[1]]} # gives the intercept of the trend line

regm = function(x) {lm(x ~ c(1:length(x)))[[1]][[2]]} # gives the slope of the trend line

detrend = function(x) { # detrends a line
    x - (regm(x) * c(1:length(x)) + regb(x))
}

meanbyrow = function(modline, x) { # returns a full-length repetition of the underlying cycle means
    rep(tapply(x, modline, mean), length.out = length(x))
}

countbyrow = function(modline, x) { # returns a full-length repetition of the underlying cycle number of datapoints N
    rep(tapply(x, modline, length), length.out = length(x))
}

sdbyrow = function(modline, x) { # returns a full-length repetition of the underlying cycle standard deviations
    rep(tapply(x, modline, sd), length.out = length(x))
}

normmatrix = function(x) sum(abs(x)) # returns the norm of the dataset, which is proportional to the power in the signal

# Function "periods" (below) is the main function that calculates the percentage of power in each of the cycles. It takes as input the data being analyzed (inputx). It displays the strength of each cycle. It returns a list of the power of the cycles (vals), along with the means (means), number of datapoints N (count), and standard deviations (sds).

# There's probably an easier way to do this; I've used a brute-force method. It's slow on big datasets.

periods = function(inputx, detrendit = TRUE, doplot = TRUE, val_lim = 1/2) {
    x = inputx
    if (detrendit == TRUE) x = detrend(as.vector(inputx))
    xlen = length(x)
    modmatrix = matrix(NA, xlen, xlen)
    # row i holds the phase labels 0..(i-1) for a cycle of length i ("%%" is R's modulo operator)
    modmatrix = matrix((col(modmatrix) - 1) %% row(modmatrix), xlen, xlen)
    countmatrix = aperm(apply(modmatrix, 1, countbyrow, x))
    meanmatrix = aperm(apply(modmatrix, 1, meanbyrow, x))
    sdmatrix = aperm(apply(modmatrix, 1, sdbyrow, x))
    xpower = normmatrix(x)
    powerlist = apply(meanmatrix, 1, normmatrix) / xpower
    plotlist = powerlist[1:(length(powerlist) * val_lim)]
    if (doplot) plot(plotlist, ylim = c(0, 1), ylab = "% of total power", xlab = "Cycle Length (yrs)", col = "blue")
    invisible(list(vals = powerlist, means = meanmatrix, count = countmatrix, sds = sdmatrix))
}

# /////////////////////////// END OF FUNCTIONS

# TEST

# Each row in the values returned represents a different period length.

myreturn = periods(c(1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 2, 4, 1, 2, 1, 8, 6, 5))
myreturn$vals
myreturn$means
myreturn$sds
myreturn$count

# ARMA pseudotemps

# Note that they are standardized to a mean of zero and a standard deviation of 0.2546, which is the standard deviation of the HadCRUT3 dataset.

# Each column of the matrix is one pseudotemperature record.

instances = 24 # number of records
instlength = 158 # length of each record
rand1 = matrix(arima.sim(list(order = c(1, 0, 1), ar = .9673, ma = -.4591),
    n = instances * instlength), instlength, instances) # create pseudotemps
pseudotemps = (rand1 - mean(rand1)) * .2546 / sd(rand1)

# Periodicity analysis of a simple sine wave

par(mfrow = c(1, 2), mai = c(.8, .8, .2, .2) * .8, mgp = c(2, 1, 0)) # split window
sintest = dsin((0:157) * 15) # sine wave with a 24-year period (360/15 = 24)
plotx = sintest
plot(detrend(plotx) ~ c(1850:2007), type = "l", ylab = "24 year sine wave", xlab = "Year")
myperiod = periods(plotx)

416 Comments
tallbloke
August 4, 2011 1:41 am

Geoff, keep going, I can see what you’re driving at.
future.png gets a 404
On your other chart, would a poly fit be more useful than a flat ‘bar’? It might bring out the effective-peak trough shifts more clearly. Not strictly following the barycentric data, but I doubt the proxy is a correct representation anyway.

Richard S Courtney
August 4, 2011 2:51 am

Leif Svalgaard:
At August 3, 2011 at 9:04 pm you reply to sky who said (at August 3, 2011 at 7:45 pm)
“you might ponder the legitimacy of periodically extending the average waveform computed by the algorithm when the underlying data is aperiodic.”
by saying;
“This is what L&S did:
http://wattsupwiththat.files.wordpress.com/2011/07/loehle-scafetta_fig3.png?w=640&h=494”
Yes!
And that is a major part of the real problem with the L&S analysis; please see my comment in this thread at July 30, 2011 at 4:06 pm.
All this debate about planets, the Sun and the barycenter may be important and may interest you guys, but it is mere fluff in the context of the L&S analysis.
Richard

Geoff Sharp
August 4, 2011 4:09 am

tallbloke says:
August 4, 2011 at 1:41 am
Geoff, keep going, I can see what you’re driving at.
future.png gets a 404
On your other chart, would a poly fit be more useful than a flat ‘bar’? It might bring out the effective-peak trough shifts more clearly. Not strictly following the barycentric data, but I doubt the proxy is a correct representation anyway.

I am not sure about the poly fit; the record is leveled to allow for geomagnetic variance. The grand minima line by Usoskin is derived to maintain the illusion that the Sun only enters grand minima occasionally and not on a regular basis, so the crap-shoot mechanism of the Babcock theory is maintained. I am happy to raise the bar to the Dalton Minimum level for one sound reason. Grand minima in my view are about “phase catastrophe”. If the Hale cycle breaks down, as we might witness for the first time in history, it is worthy of attaining the Grand Minimum tag. For a second cycle to be killed because the preceding one is violated is substantive.
I also have some doubts about the proxy records once we get back far enough; the first part (oldest) of the INTCAL98 record is via coral samples and then moves over to tree rings, and the break is clearly visible. We need to keep in mind I am trying to match to a record that is not exactly a true representation of solar activity; the solar wind speed/density does have a coronal hole component. I also have some questions on the accuracy of the timeline once we get back around 5000 years ago…this will come out when we get to the BC records.

Geoff Sharp
August 4, 2011 4:26 am

Richard S Courtney says:
August 4, 2011 at 2:51 am
All this debate about planets, the Sun and the barycenter may be important and may interest you guys, but it is mere fluff in the context of the L&S analysis.
You are missing the core of the L&S paper. The 60 year cycle in the temp record and PDO record is tied to the 60 year cycle in the solar velocity record about the SSB, via an unknown mechanism. The authors acknowledge the longer solar cycles aka solar forcing is not fully included in their results when determining the gap between the temp record.
Scafetta is 100% behind planetary influences being the controller of solar modulation and climate drivers, that is why the discussion is focused on these issues. Willis has tried to derail this area of science by applying one form of analysis….but unfortunately he failed and in the process showed us more of his character.

August 4, 2011 4:35 am

tallbloke says:
August 4, 2011 at 12:48 am
First of all, thanks for taking the time to do the analysis. You are right about this, and if I’d thought about it more before posting I’d have realised that the effect on barycentric distance of the eccentricity of Jupiter’s orbit is small compared to the effect of the Jupiter-Saturn synodic cycle.
Props for that, at least on this side of the fence we can admit when we are wrong.

Tom in Florida
August 4, 2011 4:42 am

Geoff Sharp says:
August 4, 2011 at 4:26 am
” Willis has tried to derail this area of science by applying one form of analysis….but unfortunately he failed and in the process showed us more of his character.”
I have no right to an opinion in this area due to my lack of technical knowledge, but this seems to be a comment one would see at RC rather than here.

Bob Tisdale (Editor)
August 4, 2011 5:32 am

Geoff Sharp says: “You are missing the core of the L&S paper. The 60 year cycle in the temp record and PDO record is tied to the 60 year cycle in the solar velocity record about the SSB, via an unknown mechanism.”
The 60-year cycle in the PDO appears in the paleoclimatological reconstruction referenced by L&S but it does not appear in all reconstructions of the PDO. On the earlier thread about L&S here at WUWT, I asked L&S a number of questions about the reconstructions they referenced.
http://wattsupwiththat.com/2011/07/25/loehle-and-scafetta-calculate-0-66%c2%b0ccentury-for-agw/#comment-705914
I wrote:

Craig and Nicola: You’ve presented a very limited number of paleoclimatological reconstructions to confirm a 60-year cycle, which appears to be the backbone of this paper. Do other paleoclimatological studies support a 60-year cycle, or is the 60-year cycle limited to the handful of studies presented? Does the “PDO” in the referenced paleoclimatological paper represent the SST of the North Pacific or the PDO as defined by JISAO?

It was a loaded question since I knew one of the answers. I had written a post about the lack of a 60-year cycle in the PDO reconstructions:
http://bobtisdale.wordpress.com/2010/03/15/is-there-a-60-year-pacific-decadal-oscillation-cycle/
L&S also referred to a Black et al (1990) Cariaco Basin Sea Surface Temperature reconstruction for the 60-year cycle in the AMO. But the Black et al (2007) reconstruction, which is based on the same cores, does not show 60-year cycles. And that was the basis for the second part of my question:

Also the Black et al (2007) Cariaco Basin Sea Surface Temperature reconstruction data…
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/black2007/black2007.txt
…(which is based on their 1990 samplings) does not appear to bear any similarity to the data from Black et al (1990) you’ve presented in your Figure 4a. Why?

L&S never bothered to answer those questions or the one that followed about their use of HADCRUT data:

Your model appears to be very dependent on the spatially incomplete sea surface temperature dataset (and soon to be obsolete) HADSST2. (Land surface data between suppliers is basically the same for the latitudes of 60S-65N.) How would the results differ with the interpolated SST products from the Hadley Centre (HADISST) and NOAA/NCDC (ERSST.v3b)?

Some paleoclimatological studies show a 60-year cycle, while others do not. Why? Do the proxies that show a 60-year cycle share a common factor that has no relationship to temperature? Are they in error because of it? Or are the numerous other studies that do not show a 60-year cycle in error? There are similar questions about the ICOADS, HADSST2, HADSST3, HADISST, ERSST.v3b, and Kaplan Sea Surface Temperature datasets, which serve as the backbone of the 60-year cycle assumption over the past 150 plus years. Some support a 60-year cycle; others do not. Which ones are right and which are wrong?

tallbloke
August 4, 2011 5:59 am

Tom in Florida says:
August 4, 2011 at 4:42 am
this seems to be a comment one would see at RC rather than here.

Knowing what he has said about what he has been through in the past, I pretty much forgive Willis his irascible behaviour, as he is always willing to quickly forgive in return. The substantive objections Scafetta and others raise to the technique Willis applies in this article are what is important, not the personal diatribe anyone indulges in, colourful though it can be.
I must admit that every now and then I just can’t resist winding up those who love to dish it out, but can’t take it back. 🙂

Geoff Sharp
August 4, 2011 6:28 am

Bob Tisdale says:
August 4, 2011 at 5:32 am
The 60-year cycle in the PDO appears in the paleoclimatological reconstruction referenced by L&S but it does not appear in all reconstructions of the PDO.
Granted, proxy records do that.

Richard Saumarez
August 4, 2011 6:41 am

John Day
Thank you for your education. I wonder if I could ask you a very simple question, which might clarify your thinking and aid your understanding of the Shannon theorem.
If you take a signal and sample it between times T0 and T1, consider the interval between T0 and the first sample. With a sinc reconstruction (which, I am aware, is a continuous function except for a limit at t=0), a point at T0 + dt is the sum of a(n)*sinc(n*sample period + dt), where n ranges over +- infinity. I agree that this calculation can be performed for any value of dt and the error can be made arbitrarily small by restriction on n.
However, what value of a(n) do you use when n is negative? If you do a DFT and perform an interpolation by expanding the spectrum, the values for a(n) when n is negative are a(t1-n), which is a circular convolution. This is correct only if you have an absolutely periodic signal and your sample period is exactly that period. Any other signal will give incorrect results.
So my question is how do you calculate the value of the function at T0+dt? If you can answer the question correctly, which I used to pose when teaching post-graduates, I might begin to believe that you have some understanding of the subject rather than simply producing ranting posts.

Bob Tisdale (Editor)
August 4, 2011 6:55 am

Geoff Sharp says: “Granted, proxy records do that.”
Then, basically, many of the discussions taking place on this thread are based solely on assumed, very possibly nonexistent, 60-year cycles in the surface temperature record, in the PDO record, and in the AMO record.

tallbloke
August 4, 2011 7:16 am

Leif Svalgaard says:
August 3, 2011 at 11:19 pm
And the analysis you have chanced upon is seriously flawed. To ‘get real’ one must perform a real analysis, like this one: http://www.leif.org/research/Jupiter-Distance-Monthly-Sunspot-Number.png

Hi Leif, further investigation reveals another graph from the same source, which on the face of it, doesn’t seem to be so easily explained. Please would you take a look:
http://www.bnhclub.org/JimP/jp/xyss.JPG
The curious thing is the period over which the sunspot number is averaged, 7.5 months. I’ll explain the oddity when you’ve given me an opinion.
Thanks

tallbloke
August 4, 2011 7:19 am

Bob Tisdale says:
August 4, 2011 at 6:55 am
Then, basically, many of the discussions taking place on this thread are based solely on assumed, very possibly nonexistent, 60-year cycles in the surface temperature record, in the PDO record, and in the AMO record.

Bob, would you agree that they are equally ‘very possibly existent’ as they are ‘very possibly non-existent’?
Have we got a statistical test for that? 😉

Leif Svalgaard
August 4, 2011 7:46 am

tallbloke says:
August 4, 2011 at 12:48 am
I admit the confirmation bias, someone posted the graph on my blog last night and I threw it into this discussion without enough consideration. I’ve posted your excellent analysis and comment there in full.
Confirmation bias is a powerful force
Perfectly elastic (not necessarily rigid but they tend towards it) bodies will perfectly transmit force as resultant motion vectors in collision with other perfectly elastic bodies (pool ball experiment); inelastic bodies won’t (ball of putty or gas). […]
The major difference between the gas the sun is composed of and the tidal oceans on Earth is that water is incompressible and gas isn’t. So whereas Earth’s tides are raised on both sides of the planet because of the near perfect transmission of tidal force, on the Sun they won’t be. The effect of the gravitationally perturbing body will be more localised and therefore more concentrated.

This is totally off the charts. Study http://www.astro.uni-bonn.de/~izzard/doc/lectures/binary_stars/3/slides_3.pdf There are two bulges on each side of the star or the Sun. This has nothing to do with your elastic problem. Perhaps ask W&P if they believe there is only one bulge on the sun.
where the effect occurs at deeper levels (“carrying fresh fuel to deeper levels” as they put it) it will take a long time (there doesn’t seem to be a consensus on exactly how long), for the knock on effect to surface.
All estimates are from tens of thousands to even millions of years which will wash out any short-term periods.
Geoff Sharp says:
August 4, 2011 at 1:06 am
So you say a minor difference?, its only in my eyes? Let’s try to keep to the science.
Science says there is no spin-orbit coupling
These are solid physical attributes that coincide with much smaller solar disturbance observed across the Holocene during Type B.
Such as?
It is not hard to differentiate between the two types as already outlined plus the appropriate strength is shown with the color code which you seem to have ignored. Marking grand minima is as stated pointless
If one has to decide whether your cycles coincide with Grand Minima, knowing where you put them is essential.
I noticed you have not compared the AMP events (prongs) with the sunspot record.
I notice that you ignore http://www.leif.org/research/Solar-Activity-vs.Barycenter-Distance-Annotated.png
So, annotate my plots, please.

Leif Svalgaard
August 4, 2011 7:55 am

tallbloke says:
August 4, 2011 at 7:16 am
Hi Leif, further investigation reveals another graph from the same source, which on the face of it, doesn’t seem to be so easily explained.
Same thing: http://www.leif.org/research/Jupiter-Distance-Monthly-Sunspot-Number2.png
Note the clustering of points at both aphelion and perihelion. If you ignore that and calculate a correlation anyway R^2 is 0.0262, i.e. not significant.
The curious thing is the period over which the sunspot number is averaged, 7.5 months. I’ll explain the oddity when you’ve given me an opinion.
As there is no correlation, the oddity is not of interest.

Richard S Courtney
August 4, 2011 7:59 am

Bob Tisdale:
Sincere and heartfelt thanks for your comment at August 4, 2011 at 6:55 am which says;
“Geoff Sharp says: “Granted, proxy records do that.”
Then, basically, many of the discussions taking place on this thread are based solely on assumed, very possibly nonexistent, 60-year cycles in the surface temperature record, in the PDO record, and in the AMO record.”
I am sure all the astronomical discussions in this thread have great importance in their own right, and possibly warrant a thread on WUWT that specifically addresses them. But they are NOT the subject of this thread.
This thread is about the validity of the L&S determination of climate sensitivity based on the assumed invariance of two cycles (i.e. of 20 years and 60 years length) and the assumed irrelevance of all other cycles.
So, Geoff Sharp is plain wrong when (at August 4, 2011 at 4:26 am) he asserts the astronomical considerations are “the core of the L&S paper”. The L&S paper merely provides a conjecture as to the causes of the cycles which it assesses. And, importantly, the paper and its analysis are not affected in any way by whether or not that conjecture about cause is correct.
This thread is supposed to be discussing the above essay by Willis that argues there is insufficient evidence in the data for the existence – never mind the invariance – of the assessed two cycles. And I have repeatedly gone further than that.
Simply, in the context of the L&S determination of climate sensitivity, it does not matter one jot as to the causes of the cycles.
I again stated the real issue in my own post in this thread at July 30, 2011 at 4:06 pm. And I quote it here to save others finding it.
“Willis:
The L&S paper attempted to estimate climate sensitivity (to atmospheric CO2 concentration), but the focus here – and on the other thread – seems to be about astronomical effects on climate.
The important point is whether or not the assumptions that L&S used to derive climate sensitivity are a valid method to determine climate sensitivity. And the astronomical debate is irrelevant to that consideration.
The L&S estimate assumed there are only two cycles (of 20-year and 60-year lengths) which are significant in the climate data, and it assumed those cycles are constant over the analysis and prediction period.
Those assumptions are problematic for several reasons only one of which you are discussing here. As Mike Jonas and I discussed on the earlier thread, the issues are:
Are there ‘real’ (not merely apparent) cycles in climate data?
If there are real cycles, then do they have a cause other than being an indication of resonant frequencies in the climate system?
Are all cycles significant or only some?
Do individual cycles vary in amplitude and frequency?
In my opinion, the significant point of your analysis in this thread is stated by you when (at July 30, 2011 at 2:21 pm) you say;
“My point is simple. I have shown that the appearance of the 60-year cycle in the temperature data is extremely likely to be an artifact of the length of the record and its autocorrelated nature. In addition, there is no strong 20-year cycle in the temperature data either.”
But that is only a part of the problem. I spelled out the entire issue in my post (July 30, 2011 at 1:02 am) in the other thread where I wrote:
“[snip]
The issue is that apparent cycles vary but the L&S method assumes they don’t.
The amplitude of cycles varies, and not all cycles continue without interruption. The L&S method assesses only two cycles, of 20-year and 60-year length. Either or both could have increased or reduced its amplitude. And there are other cycles that could have varied, too.
For example, there is another cycle of ~900-year duration that provides the Roman, Medieval and Present warm periods separated by the cool periods of the Dark Age and Little Ice Age. This ~900-year cycle is certainly not sinusoidal (warming from the LIA has been approximately linear), and if it continues then it will soon enter (or has started to enter) a cooling phase. The slope of this cycle may have increased or reduced as part of its transition to cooling.
So,
variations in natural cycles could be entirely responsible for the difference between adjacent cycles which the L&S method ascribes to ‘climate sensitivity’.
or, alternatively,
variations in natural cycles may have masked almost all the difference between adjacent cycles which the L&S method ascribes to ‘climate sensitivity’.
It is not possible to determine which of these alternative possibilities is true because
(a) we lack detailed knowledge of the cycles and their causes
and
(b) there is no possibility of deconvoluting the cycles if we had detailed knowledge of the cycles and their causes.
[snip]
In this case, it is not possible to demonstrate the L&S determination of climate sensitivity is ‘good’ because we lack detailed knowledge of the cycles and their causes.
[snip]”
In summation, the climate sensitivity indicated by the L&S method is not justified by the method. The value of climate sensitivity obtained by the L&S method may be near its ‘true’ value (and I think it is) but – if so – then that could be a mere coincidence. In the absence of other information, the possible errors of the method are so great that – according to the L&S method – the ‘true’ climate sensitivity could be larger than the largest used by the IPCC or negative, or anything in between.”
The astronomical considerations are not relevant to any of that, but they have side-tracked the thread from discussing any of that. The best that could be said of the astronomical considerations is that they relate to some possible causes of the cycles assessed by L&S: but so what?
Richard

tallbloke
August 4, 2011 8:05 am

Richard, I’m not stopping you discussing the aspects you want to discuss, so what’s the problem?

tallbloke
August 4, 2011 8:12 am

Leif Svalgaard says:
August 4, 2011 at 7:55 am
tallbloke says:
August 4, 2011 at 7:16 am
Hi Leif, further investigation reveals another graph from the same source, which on the face of it, doesn’t seem to be so easily explained. http://www.bnhclub.org/JimP/jp/xyss.JPG
Same thing: http://www.leif.org/research/Jupiter-Distance-Monthly-Sunspot-Number2.png
Note the clustering of points at both aphelion and perihelion. If you ignore that and calculate a correlation anyway R^2 is 0.0262, i.e. not significant.
The curious thing is the period over which the sunspot number is averaged, 7.5 months. I’ll explain the oddity when you’ve given me an opinion.
As there is no correlation, the oddity is not of interest.

Hmmmm, I need to give this some thought. Thanks for the quick response, I’ll take a timeout to have a think on this. It seems odd that such a good correlation appears when the data is averaged at that timescale of 7.5 months. I think I might know why though, and it’s to do with interaction between Jupiter and another planet.
Thanks as always for your time.

John Day
August 4, 2011 8:24 am

S Courtney
> The best that could be said of the astronomical considerations is that they relate to
> some possible causes of the cycles assessed by L&S: but so what?
… because some of the [C]AGW arguments run like this: “We’ve detected some correlations with CO2 in the data, so CO2 must be the cause. What else could it be?”
The astronomical findings may provide some “what else’s”.
BTW, excellent summary and assessment of this thread. Thanks.

Richard S Courtney
August 4, 2011 8:30 am

tallbloke:
At August 4, 2011 at 8:05 am you ask me:
“Richard, I’m not stopping you discussing the aspects you want to discuss, so what’s the problem?”
I answer that there are two problems.
Firstly, the Off Topic astronomical considerations have deflected this thread from discussion of its topic. Trolls often use such deflection as a deliberate tactic to inhibit discussion of a topic in a thread. In this case it has happened inadvertently (i.e. not deliberately and not by action of trolls).
Secondly, it HAS inhibited the discussion of the subject of this thread (i.e. discussion of the argument presented by Willis as to the validity of the L&S method). But that subject is worthy of discussion.
As I said;
“I am sure all the astronomical discussions in this thread have great importance in their own right, and possibly warrant a thread on WUWT that specifically addresses them. But they are NOT the subject of this thread.”
Richard

Geoff Sharp
August 4, 2011 8:58 am

Leif Svalgaard says:
August 4, 2011 at 7:46 am
notice that you ignore http://www.leif.org/research/Solar-Activity-vs.Barycenter-Distance-Annotated.png
So, annotate my plots, please.

Your analysis is lousy, Leif: no grouping, no recognition of AMP strength, etc. It would be better if you plotted all of my annotations on the 10Be record using their respective colors (strength).
I am on the road for a few days and will address when I return. In the meantime ponder my graph again which shows the AMP strength as annotated on your graph summed to each 172 year centre.
http://tinyurl.com/2dg9u22/images/Future.png

Bob Tisdale (Editor)
August 4, 2011 9:03 am

tallbloke says: “Bob, would you agree that they equally ‘very possibly existent’, as they are ‘very possibly non-existent’?”
Regardless of how we present them, very possibly existent or very possibly nonexistent, the 60-year cycles in the temperature record are still an assumption. They appear in some records but not in others. They are not a hard fact as portrayed by many on this thread.

John Day
August 4, 2011 11:27 am

Saumarez
[Re: Shannon sampling],…, what value of a(n) do you use when n is negative? If you do a DFT and perform an interpolation by expanding the spectrum, the values for a(n), when n is negative, are a(t1-n), which is a circular convolution.
The value of n is strictly for indexing through your set of finite temporal samples x1, x2, … , xn etc. for the purpose of summation. So, any sequence of integers will be fine. Recall that Shannon pointed out that sequences don’t have to start out at zero.
The sinc() reconstruction has nothing to do with DFT’s. That is a separate, unrelated indexing scheme.
Of course, the existence of the Fourier transform for the sequence is absolutely critical for Shannon’s proof. But not required for actual implementation, which again is simply a summation of sinc() function values over a set of samples at some precise time t.

John Day
August 4, 2011 11:47 am

Saumarez
[Re: Shannon sampling],
… forgot to add: an additional requirement for the sequence indexing is that the expression (2Wt-n) evaluate to zero somewhere in the support range of the interval, such that sinc() can act as a sifting function while interpolating the exact value of f(t).
I’ll add that the first time I read this theorem I was highly skeptical too. What if f(t) peaks between samples? How can it possibly predict that?
So I wrote a little program in Turbo Pascal to generate arbitrary band-limited functions from weighted sinusoids. The program would then take equally spaced samples at arbitrary offsets, reconstruct the function, and plot the reconstruction superimposed on the original waveform. (You should try this too.)
I was amazed that the reconstruction faithfully found all the peaks and valleys between the sampling points. It was indeed ‘perfect reconstruction’, except for some minor endpoint effects at the first and last sample points.
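For anyone who wants to repeat John Day's experiment without digging out Turbo Pascal, here is a minimal R sketch of the same idea (my own translation of his description; the signal, sample range, and test point are all arbitrary):

# band-limited test signal: weighted sinusoids, both frequencies below the Nyquist limit of 0.5
f <- function(t) 0.7 * sin(2 * pi * 0.11 * t) + 0.3 * cos(2 * pi * 0.23 * t)

n <- -200:200 # unit-spaced sample indices, wide enough to tame endpoint effects
samples <- f(n)

sinc <- function(x) ifelse(x == 0, 1, sin(pi * x) / (pi * x))
reconstruct <- function(t) sum(samples * sinc(t - n)) # Shannon: f(t) = sum of samples * sinc(t - n)

reconstruct(10.37) # interpolated value between sample points...
f(10.37) # ...matches the original to high precision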