Guest Post by Willis Eschenbach
Loehle and Scafetta recently posted a piece on decomposing the HadCRUT3 temperature record into a couple of component cycles plus a trend. I disagreed with their analysis on a variety of grounds. In the process, I was reminded of work I had done a few years ago using what is called “Periodicity Analysis” (PDF).
A couple of centuries ago, a gentleman named Fourier showed that any signal could be uniquely decomposed into a number of sine waves with different periods. Fourier analysis has been a mainstay analytical tool since that time. It allows us to detect any underlying regular sinusoidal cycles in a chaotic signal.
Figure 1. Joseph Fourier, looking like the world’s happiest mathematician
While Fourier analysis is very useful, it has a few shortcomings. First, it can only extract sinusoidal signals. Second, although it has good resolution at short timescales, it has poor resolution at the longer timescales. For many kinds of cyclical analysis, I prefer periodicity analysis.
So how does periodicity analysis work? The citation above gives a very technical description of the process, and it’s where I learned how to do periodicity analysis. Let me attempt to give a simpler description, although I recommend the citation for mathematicians.
Periodicity analysis breaks down a signal into cycles, but not sinusoidal cycles. It does so by directly averaging the data itself, so that it shows the actual cycles rather than theoretical cycles.
For example, suppose that we want to find the actual cycle of length two in a given dataset. We can do it by numbering the data points in order, and then dividing them into odd- and even-numbered data points. If we average all of the odd data points, and we average all of the even data, it will give us the average cycle of length two in the data. Here is what we get when we apply that procedure to the HadCRUT3 dataset:
Figure 2. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 2. The cycle has been extended to be as long as the original dataset.
As you might imagine for a cycle of length 2, it is a simple zigzag. The amplitude is quite small, only plus or minus a hundredth of a degree. So we can conclude that there is only a tiny cycle of length two in the HadCRUT3 data.
Next, here is the same analysis, but with a cycle length of four. To do the analysis, we number the dataset in order with a cycle of four, i.e. “1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4 …”
Then we average all the “ones” together, and all of the twos and the threes and the fours. When we plot these out, we see the following pattern:
Figure 3. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 4. The cycle has been extended to be as long as the original dataset.
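The averaging procedure just described is simple to sketch in code. The appendix gives the R implementation used for the figures; here is a rough Python equivalent for readers who prefer it (the helper name `cycle_mean` and the toy data are my own illustration, not from the analysis above):

```python
import numpy as np

def cycle_mean(x, period):
    """Average the points that share the same position within a cycle of
    the given length, then extend that average cycle back out to the full
    length of the series."""
    x = np.asarray(x, dtype=float)
    phase = np.arange(len(x)) % period  # 0, 1, ..., period-1, 0, 1, ...
    means = np.array([x[phase == k].mean() for k in range(period)])
    return means[phase]  # the averaged cycle, tiled to the original length

# Toy series alternating around two levels: the length-2 "cycle" is just
# the average of the odd-numbered points and the average of the even ones.
data = [1.0, 3.0, 1.5, 2.5, 0.5, 3.5]
print(cycle_mean(data, 2))  # -> [1. 3. 1. 3. 1. 3.]
```

The same function handles any cycle length: `cycle_mean(data, 4)` numbers the points 1, 2, 3, 4, 1, 2, 3, 4, … and averages each group, exactly as described for Figure 3.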
As I mentioned above, we are not reducing the dataset to sinusoidal (sine-wave-shaped) cycles. Instead, we are determining the actual cycles in the dataset. This becomes more evident when we look at, say, the twenty-year cycle:
Figure 4. Periodicity in the HadCRUT3 dataset, with a cycle length of 20. The cycle has been extended to be as long as the original dataset.
Note that the actual 20 year cycle is not sinusoidal. Instead, it rises quite sharply, and then decays slowly.
Now, as you can see from the three examples above, the amplitudes of the various length cycles are quite different. If we set the mean (average) of the original data to zero, we can measure the power in the cyclical underlying signals as the sum of the absolute values of the signal data. It is useful to compare this power value to the total power in the original signal. If we do this at all possible frequencies, we get a graph of the strength of each of the underlying cycles.
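That power measure is straightforward to compute. Here is a sketch in Python rather than the appendix's R (the function name `cycle_power_fraction` is my own, not from the post):

```python
import numpy as np

def cycle_power_fraction(x, period):
    """Fraction of the signal's power (sum of absolute values, after
    setting the mean to zero) carried by the averaged cycle of the
    given length."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # set the mean of the data to zero
    phase = np.arange(len(x)) % period
    means = np.array([x[phase == k].mean() for k in range(period)])
    cycle = means[phase]                  # averaged cycle, extended to full length
    return np.abs(cycle).sum() / np.abs(x).sum()

# A pure alternation is entirely explained by its length-2 cycle:
x = [1.0, -1.0] * 50
print(cycle_power_fraction(x, 2))  # -> 1.0
```

For cycle lengths the signal does not contain, the bin averages largely cancel and the fraction falls toward zero.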
For example, suppose we are looking at a simple sine wave with a period of 24 years. Figure 5 shows the sine wave, along with periodicity analysis in blue showing the power in each of the various length cycles:
Figure 5. A sine wave, along with the periodicity analysis of all cycles up to half the length of the dataset.
Looking at Figure 5, we can see one clear difference between Fourier analysis and periodicity analysis — the periodicity analysis shows peaks at 24, 48, and 72 years, while a Fourier analysis of the same data would only show the 24-year cycle. Of course, the apparent 48 and 72 year peaks are merely a result of the 24 year cycle. Note also that the shortest length peak (24 years) is sharper than the longest length (72-year) peak. This is because there are fewer data points to measure and average when we are dealing with longer time spans, so the sharp peaks tend to broaden with increasing cycle length.
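The harmonic peaks are easy to reproduce. The short self-contained sketch below (Python, my own illustration rather than the code used for Figure 5) applies the power measure described above to a 24-year sine wave sampled over 158 "years":

```python
import numpy as np

t = np.arange(158)              # 158 annual samples, like HadCRUT3
x = np.sin(2 * np.pi * t / 24)  # pure 24-year sine wave
x = x - x.mean()

def power_fraction(period):
    # fraction of total power in the averaged cycle of this length
    phase = t % period
    means = np.array([x[phase == k].mean() for k in range(period)])
    return np.abs(means[phase]).sum() / np.abs(x).sum()

for p in (12, 24, 48, 72):
    print(p, round(power_fraction(p), 3))
# The fraction is 1.0 at 24 years and at its multiples 48 and 72,
# but small at 12 years, where the averaged half-cycles cancel.
```

This reproduces the behaviour seen in Figure 5: a true peak at the cycle length, echoes at its multiples, and near-cancellation elsewhere.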
To move to a more interesting example relevant to the Loehle/Scafetta paper, consider the barycentric cycle of the sun. The sun rotates around the center of mass of the solar system. As it rotates, it speeds up and slows down because of the varying pull of the planets. What are the underlying cycles?
We can use periodicity analysis to find the cycles that have the most effect on the barycentric velocity. Figure 6 shows the process, step by step:
Figure 6. Periodicity analysis of the annual barycentric velocity data.
The top row shows the barycentric data on the left, along with the amount of power in cycles of various lengths on the right in blue. The periodicity diagram at the top right shows that the overwhelming majority of the power in the barycentric data comes from a ~20 year cycle. It also demonstrates what we saw above: the spreading of the peaks of the signal at longer time periods because of the decreasing amount of data.
The second row left panel shows the signal that is left once we subtract out the 20-year cycle from the barycentric data. The periodicity diagram on the second row right shows that after we remove the 20-year cycle, the maximum amount of power is in the 83 year cycle. So as before, we remove that 83-year cycle.
Once that is done, the third row right panel shows that there is a clear 19-year cycle (visible as peaks at 19, 38, 57, and 76 years); this cycle may be a result of the fact that the “20-year cycle” is actually slightly less than 20 years. When that 19-year cycle is removed, there is a 13-year cycle visible at 13, 26, 39 years, etc. And once that 13-year cycle is removed … well, there’s not much left at all.
The bottom left panel shows the original barycentric data in black, and the reconstruction made by adding just these four cycles of different lengths is shown in blue. As you can see, these four cycles are sufficient to reconstruct the barycentric data quite closely. This shows that we’ve done a valid deconstruction of the original data.
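The peel-off procedure shown in Figure 6 (find the strongest cycle, subtract its averaged waveform, repeat on the residual) can be sketched as follows. This is a Python illustration of the process described above, not the code used for the figure, and the function names are mine:

```python
import numpy as np

def avg_cycle(x, period):
    """Averaged cycle of the given length, tiled to the full series length."""
    phase = np.arange(len(x)) % period
    means = np.array([x[phase == k].mean() for k in range(period)])
    return means[phase]

def peel_cycles(x, n_cycles, max_period=None):
    """Repeatedly find the cycle length with the most power (largest sum of
    absolute values), subtract its averaged waveform, and repeat on the
    residual."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    max_period = max_period or len(x) // 2
    residual, found = x.copy(), []
    for _ in range(n_cycles):
        powers = [np.abs(avg_cycle(residual, p)).sum()
                  for p in range(2, max_period + 1)]
        best = int(np.argmax(powers)) + 2
        wave = avg_cycle(residual, best)
        found.append((best, wave))
        residual = residual - wave
    return found, residual

# A zero-mean pattern of length 5, repeated: one pass recovers it exactly.
x = np.tile([2.0, -1.0, 0.0, -1.0, 0.0], 8)
found, residual = peel_cycles(x, n_cycles=1)
print(found[0][0], np.abs(residual).max())  # -> 5 0.0
```

Summing the extracted waveforms reconstructs the original series, which is the check made in the bottom left panel of Figure 6.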
Now, what does all of this have to do with the Loehle/Scafetta paper? Well, two things. First, in the discussion on that thread I had said that I thought that the 60 year cycle that Loehle/Scafetta said was in the barycentric data was very weak. As the analysis above shows, the barycentric data does not have any kind of strong 60-year underlying cycle. Loehle/Scafetta claimed that there were ~ 20-year and ~ 60-year cycles in both the solar barycentric data and the surface temperature data. I find no such 60-year cycle in the barycentric data.
However, that’s not what I set out to investigate. I started all of this because I thought that the analysis of random red-noise datasets might show spurious cycles. So I made up some random red-noise datasets the same length as the HadCRUT3 annual temperature records (158 years), and I checked to see if they contained what look like cycles.
A “red-noise” dataset is one which is “auto-correlated”. In a temperature dataset, auto-correlation means that today’s temperature depends in part on yesterday’s temperature. One kind of red-noise data is created by what are called “ARMA” processes. “AR” stands for “auto-regressive”, and “MA” stands for “moving average”. This kind of random noise is very similar to observational datasets such as the HadCRUT3 dataset.
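For concreteness, here is one way to generate this kind of red noise. The appendix does it in R with arima.sim; below is a hand-rolled Python ARMA(1,1) sketch (my own illustration) using the AR and MA coefficients given in the appendix:

```python
import numpy as np

def arma_red_noise(n, ar=0.9673, ma=-0.4591, seed=0):
    """ARMA(1,1) red noise: each value depends on the previous value (the
    AR part) and on the current and previous random shocks (the MA part).
    The default ar/ma values are those fitted to HadCRUT3 in the appendix."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n + 1)   # random shocks
    x = np.zeros(n + 1)
    for t in range(1, n + 1):
        x[t] = ar * x[t - 1] + e[t] + ma * e[t - 1]
    return x[1:]

series = arma_red_noise(158)  # same length as the annual HadCRUT3 record
# Red noise is auto-correlated: adjacent values are strongly related.
r1 = np.corrcoef(series[:-1], series[1:])[0, 1]
```

Rescaling such a series to the standard deviation of HadCRUT3 gives the “pseudo-temperature” records analyzed below.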
So, I made up a couple dozen random ARMA “pseudo-temperature” datasets using the AR and MA values calculated from the HadCRUT3 dataset, and I ran a periodicity analysis on each of the pseudo-temperature datasets to see what kinds of cycles they contained. Figure 7 shows eight of the two dozen random pseudo-temperature datasets in black, with the corresponding periodicity analysis of the power in various cycles shown in blue to the right of each dataset:
Figure 7. Pseudo-temperature datasets (black lines) and their associated periodicity (blue circles). All pseudo-temperature datasets have been detrended.
Note that all of these pseudo-temperature datasets have some kind of apparent underlying cycles, as shown by the peaks in the periodicity analyses in blue on the right. But because they are purely random data, these are only pseudo-cycles, not real underlying cycles. Despite being clearly visible in the data and in the periodicity analyses, the cycles are an artifact of the auto-correlation of the datasets.
So for example random set 1 shows a strong cycle of about 42 years. Random set 6 shows two strong cycles, of about 38 and 65 years. Random set 17 shows a strong ~ 45-year cycle, and a weaker cycle around 20 years or so. We see this same pattern in all eight of the pseudo-temperature datasets, with random set 20 having cycles at 22 and 44 years, and random set 21 having a 60-year cycle and weak smaller cycles.
That is the main problem with the Loehle/Scafetta paper. While they do in fact find cycles in the HadCRUT3 data, the cycles are neither stronger nor more apparent than the cycles in the random datasets above. In other words, there is no indication at all that the HadCRUT3 dataset has any kind of significant multi-decadal cycles.
How do I know that?
Well, one of the datasets shown above is actually not a random dataset. It is the HadCRUT3 surface temperature dataset itself … and it is indistinguishable from the truly random datasets in terms of its underlying cycles. All of them have visible cycles, it’s true, in some cases strong cycles … but they don’t mean anything.
w.
APPENDIX:
I did the work in the R computer language. Here’s the code, giving the “periods” function which does the periodicity calculations. I’m not that fluent in R (it’s about the eighth computer language I’ve learned), so it might be kinda klutzy.
#FUNCTIONS
PI=4*atan(1) # value of pi
dsin=function(x) sin(PI*x/180) # sine function for degrees
regb =function(x) {lm(x~c(1:length(x)))[[1]][[1]]} #gives the intercept of the trend line
regm =function(x) {lm(x~c(1:length(x)))[[1]][[2]]} #gives the slope of the trend line
detrend = function(x){ #detrends a line
x-(regm(x)*c(1:length(x))+regb(x))
}
meanbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle means
rep(tapply(x,modline,mean),length.out=length(x))
}
countbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle number of datapoints N
rep(tapply(x,modline,length),length.out=length(x))
}
sdbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle standard deviations
rep(tapply(x,modline,sd),length.out=length(x))
}
normmatrix=function(x) sum(abs(x)) #returns the norm of the dataset, which is proportional to the power in the signal
# Function “periods” (below) is the main function that calculates the percentage of power in each of the cycles. It takes as input the data being analyzed (inputx). It displays the strength of each cycle. It returns a list of the power of the cycles (vals), along with the means (means), number of datapoints N (count), and standard deviations (sds).
# There’s probably an easier way to do this; I’ve used a brute-force method. It’s slow on big datasets.
periods=function(inputx,detrendit=TRUE,doplot=TRUE,val_lim=1/2) {
  x=inputx
  if (detrendit==TRUE) x=detrend(as.vector(inputx))
  xlen=length(x)
  modmatrix=matrix(NA,xlen,xlen) # empty matrix, used only to supply dimensions to col() and row()
  modmatrix=matrix((col(modmatrix)-1)%%row(modmatrix),xlen,xlen) # row i holds cycle positions 0,1,...,i-1 repeated ("%%" is R's modulo operator)
  countmatrix=aperm(apply(modmatrix,1,countbyrow,x))
  meanmatrix=aperm(apply(modmatrix,1,meanbyrow,x))
  sdmatrix=aperm(apply(modmatrix,1,sdbyrow,x))
  xpower=normmatrix(x)
  powerlist=apply(meanmatrix,1,normmatrix)/xpower
  plotlist=powerlist[1:floor(length(powerlist)*val_lim)]
  if (doplot) plot(plotlist,ylim=c(0,1),ylab="% of total power",xlab="Cycle Length (yrs)",col="blue")
  invisible(list(vals=powerlist,means=meanmatrix,count=countmatrix,sds=sdmatrix))
}
# /////////////////////////// END OF FUNCTIONS
# TEST
# each row in the values returned represents a different period length.
myreturn=periods(c(1,2,1,4,1,2,1,8,1,2,2,4,1,2,1,8,6,5))
myreturn$vals
myreturn$means
myreturn$sds
myreturn$count
#ARIMA pseudotemps
# note that they are standardized to a mean of zero and a standard deviation of 0.2546, which is the standard deviation of the HadCRUT3 dataset.
# each row is a pseudotemperature record
instances=24 # number of records
instlength=158 # length of each record
rand1=matrix(arima.sim(list(order=c(1,0,1), ar=.9673,ma=-.4591),
n=instances*instlength),instlength,instances) #create pseudotemps
pseudotemps =(rand1-mean(rand1))*.2546/sd(rand1)
# Periodicity analysis of simple sine wave
par(mfrow=c(1,2),mai=c(.8,.8,.2,.2)*.8,mgp=c(2,1,0)) # split window
sintest=dsin((0:157)*15)# sine function
plotx=sintest
plot(detrend(plotx)~c(1850:2007),type="l",ylab="24 year sine wave",xlab="Year")
myperiod=periods(plotx)
pochas says:
August 3, 2011 at 10:09 am
What I get from this is that Fourier analysis and Periodicity analysis may not be the right tools for analyzing cyclic but non-stationary processes. Apparently Scafetta has a better one.
Spot on again. Fourier can be used, but approaching the problem from more than one angle is better. Hence the response from Scafetta, which Willis tells us he will be replying to:
nicola scafetta says:
July 30, 2011 at 11:06 am
To Willis Eschenbach,
I am sorry that I need to contradict Willis, his analysis is very poor.
Our analysis is based on the correct techniques, that is, “multiple” power spectrum analysis against a red noise background. I would like to insist on the word “multiple” because I used three alternative methods. The quasi 20 and 60 year cycles are quite evident in the data. These tests are done in Scafetta 2010. In L&S 2011 we simply referenced those results.
Moreover, similar cycles have been found by numerous other people in numerous climatic datasets and published in numerous papers. So, there is very little to question.
Leif Svalgaard says:
August 3, 2011 at 9:48 am
tallbloke and Geoff [with occasional others] pushing with nastiness and insults their personal views way beyond what they are worth. No amount of sound counterarguments can restore some reasonableness into the ‘debate’ as we have seen.
We learned at your feet master.
Considering the amount of nastiness and insults you and Willis have dished out in the past, to see you girls whining about it when a bit comes back your way is a hoot. Thanks for the laugh.
As to the question of who’s counterarguments are sound and reasonable and worthwhile, I’m sure you feel right is on your side to be the arbiters of good taste and judgement there, as do we.
Get a grip.
And Willis: Have a great flight, I’m jealous.
tallbloke says:
August 3, 2011 at 12:32 pm
Considering the amount of nastiness and insults you and Willis have dished out in the past, to see you girls whining about it when a bit comes back your way is a hoot.
You are still at it, it seems. It would be refreshing if you could get back to science, if possible.
pochas says:
August 3, 2011 at 11:59 am
As for myself, the 60 year cycle is easily visible to the unaided eye in the recent temperature record, although I certainly don’t expect you to agree 🙂
The 60-yr is there [nobody says it isn’t] but not quite stationary. It is the 20-yr cycle that is missing.
@tallbloke
Richard Saumarez and John Day,
Please would you take a look at the DSP techniques employed in this post and leave some comment.
http://tallbloke.wordpress.com/2011/07/31/bart-modeling-the-historical-sunspot-record-from-planetary-periods/
I’d also appreciate any input you can give to the discussion about sampling rates for barycentric data towards the bottom of the comments in this thread too if you can spare the time
http://tallbloke.wordpress.com/2011/07/25/ed-fix-solar-activity-simulation-model-revealed/
Many thanks
I don’t have much time available (and especially don’t want to get drawn into these long, heated discussions).
But I would like to take a quick look at some of the data being discussed, e.g. the HadCRUT3 and barycentric datasets with the disputed decadal patterns and features. I’m kind of new around here and don’t know much about them.
Does there exist a list of pointers to websites where I can download datasets like these?
Thanks.
John Day,
The conversation on my site never gets heated, though occasionally long drawn out.
HadCRUT3 here. Click the raw data link at the bottom of the page
http://woodfortrees.org/plot/hadcrut3vgl
Data for the distance in xyz between Barycentre and solar centre is from JPL Horizons. Click [change] to set parameters Tip: Select Body Equator for reference plane:
http://ssd.jpl.nasa.gov/horizons.cgi
For other datasets, this site has reference pages with links to sources at the bottom.
e.g. http://wattsupwiththat.com/reference-pages/global-temperature/
Link to reference pages is up near top of page
Hope that helps
@tallbloke says:
August 3, 2011 at 3:16 pm
Hi tallbloke, daft question – why do you ref the variance adjusted HadCRUT3v rather than HadCRUT3?
Leif Svalgaard says:
August 3, 2011 at 12:35 pm
It would be refreshing if you could get back to science, if possible.
Gladly.
Here’s more evidence of the Gas Giant Jupiter’s effect on solar activity.
http://www.bnhclub.org/JimP/jp/Javedist.JPG
So, can we get real please.
tallbloke says:
August 3, 2011 at 3:35 pm
Here’s more evidence of the Gas Giant Jupiter’s effect on solar activity.[…]
So, can we get real please.
You claim with your superior understanding of Newtonian mechanics that Newton’s laws are not valid for gases, because they are not elastic, so you are a bit off the reservation here. What happened to the barycenter idea? Why do you think a correlation is ‘real’ when the labels are so sloppy as they are? This is not science. Explain what you think this is. And what happened to all the other pieces: Saturn, Uranus/Neptune, etc
tallbloke says:
August 3, 2011 at 3:35 pm
Here’s more evidence of the Gas Giant Jupiter’s effect on solar activity.[…]
So, can we get real please.
“Why do you think a correlation is ‘real’ when the labels are so sloppy as they are?”
Perhaps that was hasty. It is possible you claim that when Jupiter is closest to the Sun there are fewer sunspots, and that the farther away from the Sun Jupiter is, the more sunspots there are. Let us move Jupiter further out to increase solar activity [also increases its angular momentum]. Perhaps to infinity to really get the Sun going crazy 🙂
@tallbloke
> Data …
Thanks!
Greensand
Less noisy. But I’m sure John Day knows enough to get both and compare results.
Leif Svalgaard says:
August 3, 2011 at 3:56 pm
Let us move Jupiter further out to increase solar activity [also increases its angular momentum]. Perhaps to infinity to really get the Sun going crazy 🙂
Well, we should remember Jupiter’s orbit is more eccentric than those of the other gas giants, and it’s bigger (much bigger) and closer. So gravitationally it creates big perturbances on the Sun because instead of the hypothetical ~1mm tide for an Earth-like semi-rigid body, a largely inelastic body such as the gaseous Sun is going to get its interior directly beneath Jupiter stirred around quite a lot more than an average tidal force spread across the entire body. These disturbances will create the ‘suitable flows’ which release extra fusion energy from the Sun a la Wolff and Patrone. This will create hotspots above the convection cells, with vorticity, and so sunspots.
So, why more sunspots when Jupiter is further away rather than closer to the Sun?
Well, when Jupiter is further away, it exerts less gravitational pull on the Sun and so the barycentre moves closer to the solar core. If you study the Wolff and Patrone paper closely, you’ll find out where the ‘sweet spot’ is for maximising the fusion energy release.
Of course, the other gas giants play a part in determining the barycentre, so that’s why the solar cycle doesn’t just follow the Jupiter orbital period, though it’s more often close to it than the average cycle length, just as it’s more often close to half the Jupiter-Saturn synodic period length than the average cycle length.
http://wattsupwiththat.com/2011/07/30/riding-a-pseudocycle/#comment-710197
http://wattsupwiththat.com/2011/07/30/riding-a-pseudocycle/#comment-710275
http://wattsupwiththat.com/2011/07/30/riding-a-pseudocycle/#comment-711511
Leif – Is this where you’re getting your ideas that molecules of oxygen and nitrogen in our atmosphere are elastic, i.e., ideal gas?
[And that because of this they move through the atmosphere as if ideal gas molecules without properties except for elasticity – i.e. as hard dots randomly moving at vast speeds through empty space bouncing off each other with no volume, etc. http://wattsupwiththat.com/2011/06/30/earths-climate-system-is-ridiculously-complex-with-draft-link-tutorial/#comment-706716%5D
Myrrh says:
August 3, 2011 at 5:26 pm
Leif – Is this where you’re getting your ideas that molecules of oxygen and nitrogen in our atmosphere are elastic
One can actually easily see that from the link you provided:
Myrrh says:
August 1, 2011 at 3:50 am
http://www.uwsp.edu/geo/faculty/ritter/geog101/textbook/circulation/air_pressure_p_1.html
Look at all that empty space between the molecules in Figure 6.1
randomly moving at vast speeds through empty space bouncing off each other with no volume
No, it is wrong that they have no volume. I have shown you many times that the volume of all the Nitrogen molecules in one cubic meter of [atmospheric pressure and density] Nitrogen gas is 0.00175 cubic meter, so that 1 – 0.00175 = 0.99825 cubic meters are not occupied by any molecules or by anything else, i.e. is empty space as you showed so nicely in your link. You can see them bounce around elastically here http://upload.wikimedia.org/wikipedia/commons/6/6d/Translational_motion.gif where the speed is slowed down two trillion times [otherwise their vast speeds would just look like a blur].
Willis,
The comment editor is really balky online tonight, so I’ll try to resolve our points of disagreement by typing something up in the next day or two. In the meantime you might ponder the legitimacy of periodically extending the average waveform computed by the algorithm when the underlying data is aperiodic.
sky says:
August 3, 2011 at 7:45 pm
you might ponder the legitimacy of periodically extending the average waveform computed by the algorithm when the underlying data is aperiodic.
This is what L&S did http://wattsupwiththat.files.wordpress.com/2011/07/loehle-scafetta_fig3.png?w=640&h=494
Leif Svalgaard says:
August 3, 2011 at 11:06 am
You are too quick to jump to conclusions. The prongs do not ‘throw options’. That is in your head only. You have no clear message, and if you compare with http://www.leif.org/research/Barycenter-Distance-240AD-590AD.png [green line] you see that it is very close to 1750-2100 AD, yet there are no grand minima in that interval and solar activity is very different in the two intervals 1750-2100 and 240-590. You are just chasing shadows.
Leif, there is a MAJOR difference between the 2 time periods. Your plot is really too small to appreciate the differences but I can see it immediately. If I can make some suggestions, the distance plot needs to be widened and scaled up, the solar proxy chart should be increased in the vertical plane.
The major difference is in the type of perturbation which is crucial. In my paper you will see Type A and Type B perturbations (AMP). Type B is much weaker than type A. Type B is perturbing at the end of the inner loop, Type A is perturbing at the start of the inner loop. Type B is always on the upslope of the sine wave and Type A is on the downslope. I would expect much weaker solar disturbance during the 240-590 period. Follow the colored annotations for both periods and you will see the difference in the strength of perturbance.
You guys can’t quit now. I just got my big bag o’ popcorn.
Geoff Sharp says:
August 3, 2011 at 9:37 pm
The major difference is in the type of perturbation which is crucial.
How can such a minor difference be ‘crucial’? except in your eyes. I’ll expand the scales, but you should mark ahead of time on my solar plots where you think the Grand Minima are. For that you do not need the expanded scale. On your plot the scale is much too small and one can’t see the details. Perhaps on my plot you should also mark type A and type B.
Leif Svalgaard says:
August 3, 2011 at 11:06 am
You are too quick to jump to conclusions. The prongs do not ‘throw options’. That is your head only. You have no clear message,
You will see in time that the Holocene solar proxy record follows the perturbation strength of the AM or distance charts. You have not got your head around the quantification method. The prong options are clearly labelled via the colored dots. You will need to apologize for your comments soon.
tallbloke says:
August 3, 2011 at 4:52 pm
Well, we should remember Jupiter’s orbit is more eccentric than the other gas giants, and it’s bigger (much bigger) and closer. So gravitationally it creates big perturbances on the Sun because apart from the hypothetical ~1mm tide for an Earth like semi-rigid body
that tide is calculated for a completely non-rigid body [a perfectly deformable body]. You are still hung up on the rigid/gaseous thing. Newton’s laws are equally valid for both, and, anyway, the tides are calculated under the assumption that the matter is allowed to move freely under the gravitational tidal influence. With a perfectly rigid body [which are the only ones that obey Newton’s laws according to you] there would be no tides.
largely inelastic body such as the gaseous Sun is going to get its interior directly beneath Jupiter stirred around quite a lot more than an average tide spread across the entire body.
No, the tides depend on the diameter of the region. As you go inwards, the tides shrink away to nothing [proportional to distance from the center]. And the tides are calculated for a gaseous sun, anyway.
These disturbances will create the ‘suitable flows’ which release extra fusion energy from the Sun a la Wolff and Patrone.
It takes 200,000 years for the energy created by that extra fusion to randomly diffuse to the convection zone, so any 11-yr signal is completely lost. This is another flaw in the W&P paper.
If you study the Wolff and Patrone paper closely, you’ll find out where the ‘sweet spot’ is for maximising the fusion energy release.
It takes 200,000 years for the energy from the ‘sweet spot’ to get out, so any 11-yr signal is completely lost.
And the analysis you have chanced upon is seriously flawed. To ‘get real’ one must perform a real analysis, like this one: http://www.leif.org/research/Jupiter-Distance-Monthly-Sunspot-Number.png
It shows first the distance as a function of the sunspot number for every month since 1749. You can see immediately by eye that there is no correlation. Instead you see a concentration [for all sunspot numbers] towards the bottom [smallest distance] and the top [largest distance]. This is purely a selection effect from the fact that there are many more months near the smallest and largest distance than at the average distance 5.2 AU, so you get many more monthly values [data points] of the sunspot number around perihelion and aphelion. This is because the distance changes less when Jupiter rounds the two ‘blunt’ ends of the orbit than at other times. There is another effect: as Jupiter moves more slowly at aphelion, the distribution will be ‘top heavy’, as you can clearly see on the graph. Finally, almost all the very high values of the sunspot number [in the oval] occurred at the maximum of solar cycle 19, so these points are not independent. The plot also shows the distribution for every bin of 10 sunspot numbers. The first one from 0 to 10, the next from 10 to 20, and so on. For every bin, you can see that there is no correlation. You can even now and then see the expected ‘top heaviness’.
So, there is nothing ‘real’ there. Don’t fall for any old correlation that you stumble upon. Confirmation bias is strong here.
Geoff Sharp says:
August 3, 2011 at 10:34 pm
You will see in time that the Holocene solar proxy record follows the perturbation strength of the AM or distance charts. You have not got your head around the quantification method.
The quantification is a posteriori: you label as it fits.
The prong options are clearly labelled via the colored dots.
Where are the A and B types?
You will need to apologize for your comments soon.
One does not apologize. The correct attitude is that one concedes something. Apology has nothing to do with it. Your attempts to personalize everything are misplaced.
Leif Svalgaard says:
August 3, 2011 at 11:19 pm
And the analysis you have chanced upon is seriously flawed. To ‘get real’ one must perform a real analysis, like this one: http://www.leif.org/research/Jupiter-Distance-Monthly-Sunspot-Number.png Confirmation bias is strong here.
First of all, thanks for taking the time to do the analysis. You are right about this, and if I’d thought about it more before posting I’d have realised that the effect on barycentric distance of the eccentricity of Jupiter’s orbit is small compared to the effect of the Jupiter-Saturn synodic cycle. I admit the confirmation bias; someone posted the graph on my blog last night and I threw it into this discussion without enough consideration. I’ve posted your excellent analysis and comment there in full.
that tide is calculated for a completely non-rigid body [a perfectly deformable body]. You are still hung up on the rigid/gaseous thing. Newton’s laws are equally valid for both, and, anyway, the tides are calculated under the assumption that the matter is allowed to move freely under the gravitational tidal influence. With a perfectly rigid body [which are the only ones that obey Newton’s laws according to you] there would be no tides.
Yes, Newton’s laws are equally applicable, all of them. This means that we need to consider the extent to which bodies are elastically deformable and plastically deformable, and realise that in the case of a gaseous body like the Sun, that plastic deformation can appear to be elastic deformation due to the centre of gravity pulling the body back to sphericity. The key point in what I’ve been saying all along is that this won’t be done without non-reverting internal redistributions of matter as a result of the action of the perturbing force.
Perfectly elastic (not necessarily rigid but they tend towards it) bodies will perfectly transmit force as resultant motion vectors in collision with other perfectly elastic bodies (pool ball experiment), inelastic bodies won’t (ball of putty or gas). This is what I meant by elastic bodies obeying Newton’s (idealised) laws of motion. In the context of our discussion, it was clear that I was getting at the difference between that idealised perfectly elastic object and the big wobbly mass of plasma and gas called the Sun. You have deliberately mis-contextualised what I said in order to distract attention from my correct characterisation of the Sun as being composed of largely inelastic material, and it’s about time you put that canard down because no-one else is falling for it and it just reduces my trust in you as a fair person to debate with.
the tides depends on the diameter of the region. As you go inwards, the tides shrink away to nothing [proportional to distance from the center]. And the tides are calculated for a gaseous sun, anyway.
The major difference between the gas the sun is composed of and the tidal oceans on Earth is that water is incompressible and gas isn’t. So whereas Earth’s tides are raised on both sides of the planet because of the near perfect transmission of tidal force, on the Sun they won’t be. The effect of the gravitationally perturbing body will be more localised and therefore more concentrated.
TB: These disturbances will create the ‘suitable flows’ which release extra fusion energy from the Sun a la Wolff and Patrone.
LS: It takes 200,000 years for the energy created by that extra fusion to randomly diffuse to the convection zone, so any 11-yr signal is completely lost. This is another flaw in the W&P paper.
Read the paper! They state that the effect will occur at various levels in the Sun from around 0.15r all the way to the top of the convection zone depending on the barycentre-solar core radius. So yes, where the effect occurs at deeper levels (“carrying fresh fuel to deeper levels” as they put it) it will take a long time (there doesn’t seem to be a consensus on exactly how long) for the knock-on effect to surface. But it will still happen in cyclic waves. This is probably where the longer periods in solar activity arise from; the barycentric motion has strong cycles at ~172, 934, 2250, 4500 years and who knows which longer periods.
Thanks again for sitting back and taking a while before replying, I really get a lot out of our discussions when they happen at a more leisurely and considered pace.
Leif Svalgaard says:
August 3, 2011 at 10:03 pm
Geoff Sharp says:
August 3, 2011 at 9:37 pm
The major difference is in the type of perturbation which is crucial.
————————————–
How can such a minor difference be ‘crucial’? except in your eyes. I’ll expand the scales, but you should mark ahead of time on my solar plots where you think the Grand Minima are. For that you do not need the expanded scale. On your plot the scale is much too small and one can’t see the details. Perhaps on my plot you should also mark type A and type B.
So you say it is a minor difference, and it’s only in my eyes? Let’s try to keep to the science.
The perturbation is taking place in a completely different part of the inner loop on the solar path. Type B take place after the Jupiter/Saturn opposition, the majority of the cycle is done with and already in its acceleration phase ready to go into the next loop. These are solid physical attributes that coincide with much smaller solar disturbance observed across the Holocene during Type B.
It is not hard to differentiate between the two types as already outlined, plus the appropriate strength is shown with the color code, which you seem to have ignored. Marking grand minima is, as stated, pointless; you will see when we get to the BC record that there can be long periods of Type B activity which display a high plateau of sawtooth-type trends. These plateaus are not grand minima but are still important, and coincide with the Roman and Minoan warming periods.
http://tinyurl.com/2dg9u22/Future.png
As a rough guide you could look at my original graph, which raises the Usoskin bar, but it is still very arbitrary. What matters is that the strong AMP events coincide with large troughs, with weaker troughs coinciding with weak Type A and Type B events. Once you can identify the quantification process it will become clear. I think there is still scope to improve in this area.
I have raised the bar to the Dalton Minimum height.
http://tinyurl.com/2dg9u22/images/c14nujs1.jpg
I noticed you have not compared the AMP events (prongs) with the sunspot record.