Guest Post by Willis Eschenbach
Loehle and Scafetta recently posted a piece on decomposing the HadCRUT3 temperature record into a couple of component cycles plus a trend. I disagreed with their analysis on a variety of grounds. In the process, I was reminded of work I had done a few years ago using what is called “Periodicity Analysis” (PDF).
A couple of centuries ago, a gentleman named Fourier showed that any signal could be uniquely decomposed into a number of sine waves with different periods. Fourier analysis has been a mainstay analytical tool since that time. It allows us to detect any underlying regular sinusoidal cycles in a chaotic signal.
Figure 1. Joseph Fourier, looking like the world’s happiest mathematician
While Fourier analysis is very useful, it has a few shortcomings. First, it can only extract sinusoidal signals. Second, although it has good resolution at short timescales, it has poor resolution at longer timescales. For many kinds of cyclical analysis, I prefer periodicity analysis.
So how does periodicity analysis work? The citation above gives a very technical description of the process, and it’s where I learned how to do periodicity analysis. Let me attempt to give a simpler description, although I recommend the citation for mathematicians.
Periodicity analysis breaks down a signal into cycles, but not sinusoidal cycles. It does so by directly averaging the data itself, so that it shows the actual cycles rather than theoretical cycles.
For example, suppose that we want to find the actual cycle of length two in a given dataset. We can do it by numbering the data points in order and then dividing them into odd- and even-numbered data points. If we average all of the odd data points, and separately average all of the even data points, we get the average cycle of length two in the data. Here is what we get when we apply that procedure to the HadCRUT3 dataset:
Figure 2. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 2. The cycle has been extended to be as long as the original dataset.
As you might imagine for a cycle of length 2, it is a simple zigzag. The amplitude is quite small, only plus or minus a hundredth of a degree. So we can conclude that there is only a tiny cycle of length two in the HadCRUT3 data.
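For concreteness, here is a minimal R sketch of that averaging step. The function name “avg_cycle” is my own illustrative choice, not part of the code in the appendix below; setting k = 2 gives the odd/even case just described, and k = 4 gives the next example.
avg_cycle = function(x, k) { # average cycle of length k in the data x
labels = rep(1:k, length.out=length(x)) # number the points 1,2,...,k,1,2,...
rep(tapply(x, labels, mean), length.out=length(x)) # average within each label, extend to full length
}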
Next, here is the same analysis, but with a cycle length of four. To do the analysis, we number the dataset in order with a cycle of four, i.e. “1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4 …”
Then we average all the “ones” together, and all of the twos and the threes and the fours. When we plot these out, we see the following pattern:
Figure 3. Periodicity in the HadCRUT3 global surface temperature dataset, with a cycle length of 4. The cycle has been extended to be as long as the original dataset.
As I mentioned above, we are not reducing the dataset to sinusoidal (sine-wave-shaped) cycles. Instead, we are determining the actual cycles in the dataset. This becomes more evident when we look at, say, the twenty-year cycle:
Figure 4. Periodicity in the HadCRUT3 dataset, with a cycle length of 20. The cycle has been extended to be as long as the original dataset.
Note that the actual 20-year cycle is not sinusoidal. Instead, it rises quite sharply and then decays slowly.
Now, as you can see from the three examples above, the amplitudes of the various length cycles are quite different. If we set the mean (average) of the original data to zero, we can measure the power in the underlying cyclical signals as the sum of the absolute values of the signal data. It is useful to compare this power value to the total power in the original signal. If we do this at all possible cycle lengths, we get a graph of the strength of each of the underlying cycles.
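As a sketch of that power measure, assuming the avg_cycle helper above and using the sum of absolute values as the norm (as the “normmatrix” function does in the appendix):
cycle_power = function(x, k) { # fraction of total power in the length-k cycle
x = x - mean(x) # set the mean of the data to zero, as in the text
sum(abs(avg_cycle(x, k))) / sum(abs(x))
}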
For example, suppose we are looking at a simple sine wave with a period of 24 years. Figure 5 shows the sine wave, along with periodicity analysis in blue showing the power in each of the various length cycles:
Figure 5. A sine wave, along with the periodicity analysis of all cycles up to half the length of the dataset.
Looking at Figure 5, we can see one clear difference between Fourier analysis and periodicity analysis — the periodicity analysis shows peaks at 24, 48, and 72 years, while a Fourier analysis of the same data would only show the 24-year cycle. Of course, the apparent 48 and 72 year peaks are merely a result of the 24 year cycle. Note also that the shortest length peak (24 years) is sharper than the longest length (72-year) peak. This is because there are fewer data points to measure and average when we are dealing with longer time spans, so the sharp peaks tend to broaden with increasing cycle length.
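Combining the two sketches above should reproduce the shape of Figure 5; the 158-point range mirrors the length of the HadCRUT3 annual record used throughout this post.
sine24 = sin(2*pi*(0:157)/24) # a pure 24-year sine wave
powers = sapply(2:79, function(k) cycle_power(sine24, k)) # up to half the data length
plot(2:79, powers, col="blue", xlab="Cycle Length (yrs)",
ylab="% of total power") # peaks appear at 24, 48, and 72 years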
To move to a more interesting example relevant to the Loehle/Scafetta paper, consider the barycentric cycle of the sun. The sun rotates around the center of mass of the solar system. As it rotates, it speeds up and slows down because of the varying pull of the planets. What are the underlying cycles?
We can use periodicity analysis to find the cycles that have the most effect on the barycentric velocity. Figure 6 shows the process, step by step:
Figure 6. Periodicity analysis of the annual barycentric velocity data.
The top row shows the barycentric data on the left, along with the amount of power in cycles of various lengths on the right in blue. The periodicity diagram at the top right shows that the overwhelming majority of the power in the barycentric data comes from a ~20 year cycle. It also demonstrates what we saw above, the spreading of the peaks of the signal at longer time periods because of the decreasing amount of data.
The second row left panel shows the signal that is left once we subtract out the 20-year cycle from the barycentric data. The periodicity diagram on the second row right shows that after we remove the 20-year cycle, the maximum amount of power is in the 83 year cycle. So as before, we remove that 83-year cycle.
Once that is done, the third row right panel shows that there is a clear 19-year cycle, visible as peaks at 19, 38, 57, and 76 years. (This cycle may result from the fact that the “20-year cycle” is actually slightly shorter than 20 years.) When that 19-year cycle is removed, there is a 13-year cycle visible at 13, 26, 39 years, and so on. And once that 13-year cycle is removed … well, there’s not much left at all.
The bottom left panel shows the original barycentric data in black, and the reconstruction made by adding just these four cycles of different lengths is shown in blue. As you can see, these four cycles are sufficient to reconstruct the barycentric data quite closely. This shows that we’ve done a valid deconstruction of the original data.
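Here is a hedged sketch of that stepwise procedure, built on the helpers above. The function name, and stopping after four cycles to match the barycentric example, are my choices rather than the original code.
extract_cycles = function(x, n=4) { # peel off the n strongest cycles
resid = x - mean(x)
recon = numeric(length(x))
for (i in seq_len(n)) {
ks = 2:(length(x) %/% 2) # candidate cycle lengths
pow = sapply(ks, function(k) cycle_power(resid, k))
best = ks[which.max(pow)] # strongest remaining cycle
cyc = avg_cycle(resid, best)
resid = resid - cyc # remove it and repeat on the residue
recon = recon + cyc
cat("Removed cycle of length", best, "\n")
}
list(reconstruction=recon, residual=resid) # reconstruction = sum of removed cycles (blue line in Figure 6)
}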
Now, what does all of this have to do with the Loehle/Scafetta paper? Well, two things. First, in the discussion on that thread I had said that I thought that the 60 year cycle that Loehle/Scafetta said was in the barycentric data was very weak. As the analysis above shows, the barycentric data does not have any kind of strong 60-year underlying cycle. Loehle/Scafetta claimed that there were ~ 20-year and ~ 60-year cycles in both the solar barycentric data and the surface temperature data. I find no such 60-year cycle in the barycentric data.
However, that’s not what I set out to investigate. I started all of this because I thought that the analysis of random red-noise datasets might show spurious cycles. So I made up some random red-noise datasets the same length as the HadCRUT3 annual temperature records (158 years), and I checked to see if they contained what look like cycles.
A “red-noise” dataset is one which is “auto-correlated”. In a temperature dataset, auto-correlated means that today’s temperature depends in part on yesterday’s temperature. One kind of red-noise data is created by what are called “ARMA” processes. “AR” stands for “auto-regressive”, and “MA” stands for “moving average”. This kind of random noise is very similar to observational datasets such as the HadCRUT3 dataset.
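A minimal sketch of how such matched red noise can be built in R, assuming a hypothetical vector “hadcrut” of annual anomalies (the appendix below simply hard-codes the fitted values ar = 0.9673 and ma = -0.4591):
fit = arima(hadcrut, order=c(1,0,1)) # fit an ARMA(1,1) to the real data
sim = arima.sim(list(ar=coef(fit)["ar1"], ma=coef(fit)["ma1"]),
n=length(hadcrut)) # red noise with matching AR and MA values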
So, I made up a couple dozen random ARMA “pseudo-temperature” datasets using the AR and MA values calculated from the HadCRUT3 dataset, and I ran a periodicity analysis on each of the pseudo-temperature datasets to see what kinds of cycles they contained. Figure 7 shows eight of the two dozen random pseudo-temperature datasets in black, along with the corresponding periodicity analysis of the power in various cycles in blue to the right of the graph of each dataset:
Figure 7. Pseudo-temperature datasets (black lines) and their associated periodicity (blue circles). All pseudo-temperature datasets have been detrended.
Note that all of these pseudo-temperature datasets have some kind of apparent underlying cycles, as shown by the peaks in the periodicity analyses in blue on the right. But because they are purely random data, these are only pseudo-cycles, not real underlying cycles. Despite being clearly visible in the data and in the periodicity analyses, the cycles are an artifact of the auto-correlation of the datasets.
So for example random set 1 shows a strong cycle of about 42 years. Random set 6 shows two strong cycles, of about 38 and 65 years. Random set 17 shows a strong ~ 45-year cycle, and a weaker cycle around 20 years or so. We see this same pattern in all eight of the pseudo-temperature datasets, with random set 20 having cycles at 22 and 44 years, and random set 21 having a 60-year cycle and weak smaller cycles.
That is the main problem with the Loehle/Scafetta paper. While they do in fact find cycles in the HadCRUT3 data, the cycles are neither stronger nor more apparent than the cycles in the random datasets above. In other words, there is no indication at all that the HadCRUT3 dataset has any kind of significant multi-decadal cycles.
How do I know that?
Well, one of the datasets shown in Figure 7 above is actually not a random dataset. It is the HadCRUT3 surface temperature dataset itself … and it is indistinguishable from the truly random datasets in terms of its underlying cycles. All of them have visible cycles, it’s true, in some cases strong cycles … but they don’t mean anything.
w.
APPENDIX:
I did the work in the R computer language. Here’s the code, including the “periods” function which does the periodicity calculations. I’m not that fluent in R (it’s about the eighth computer language I’ve learned), so it might be kinda klutzy.
#FUNCTIONS
PI=4*atan(1) # value of pi
dsin=function(x) sin(PI*x/180) # sine function for degrees
regb =function(x) {lm(x~c(1:length(x)))[[1]][[1]]} #gives the intercept of the trend line
regm =function(x) {lm(x~c(1:length(x)))[[1]][[2]]} #gives the slope of the trend line
detrend = function(x){ #detrends a line
x-(regm(x)*c(1:length(x))+regb(x))
}
meanbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle means
rep(tapply(x,modline,mean),length.out=length(x))
}
countbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle number of datapoints N
rep(tapply(x,modline,length),length.out=length(x))
}
sdbyrow=function(modline,x){ #returns a full length repetition of the underlying cycle standard deviations
rep(tapply(x,modline,sd),length.out=length(x))
}
normmatrix=function(x) sum(abs(x)) #returns the norm of the dataset, which is proportional to the power in the signal
# Function “periods” (below) is the main function that calculates the percentage of power in each of the cycles. It takes as input the data being analyzed (inputx). It displays the strength of each cycle. It returns a list of the power of the cycles (vals), along with the means (means), number of datapoints N (count), and standard deviations (sds).
# There’s probably an easier way to do this; I’ve used a brute-force method. It’s slow on big datasets.
periods=function(inputx,detrendit=TRUE,doplot=TRUE,val_lim=1/2) {
x=inputx
if (detrendit==TRUE) x=detrend(as.vector(inputx))
xlen=length(x)
modmatrix=matrix(NA, xlen,xlen)
modmatrix=matrix((col(modmatrix)-1)%%row(modmatrix),xlen,xlen) # %% is R's modulo operator; row k labels the data for cycle length k
countmatrix=aperm(apply(modmatrix,1,countbyrow,x))
meanmatrix=aperm(apply(modmatrix,1,meanbyrow,x))
sdmatrix=aperm(apply(modmatrix,1,sdbyrow,x))
xpower=normmatrix(x)
powerlist=apply(meanmatrix,1,normmatrix)/xpower
plotlist=powerlist[1:(length(powerlist)*val_lim)]
if (doplot) plot(plotlist,ylim=c(0,1),ylab="% of total power",xlab="Cycle Length (yrs)",col="blue")
invisible(list(vals=powerlist,means=meanmatrix,count=countmatrix,sds=sdmatrix))
}
# /////////////////////////// END OF FUNCTIONS
# TEST
# each row in the values returned represents a different period length.
myreturn=periods(c(1,2,1,4,1,2,1,8,1,2,2,4,1,2,1,8,6,5))
myreturn$vals
myreturn$means
myreturn$sds
myreturn$count
#ARIMA pseudotemps
# note that they are standardized to a mean of zero and a standard deviation of 0.2546, which is the standard deviation of the HadCRUT3 dataset.
# each row is a pseudotemperature record
instances=24 # number of records
instlength=158 # length of each record
rand1=matrix(arima.sim(list(order=c(1,0,1), ar=.9673,ma=-.4591),
n=instances*instlength),instlength,instances) #create pseudotemps
pseudotemps =(rand1-mean(rand1))*.2546/sd(rand1)
# Periodicity analysis of simple sine wave
par(mfrow=c(1,2),mai=c(.8,.8,.2,.2)*.8,mgp=c(2,1,0)) # split window
sintest=dsin((0:157)*15)# sine function
plotx=sintest
plot(detrend(plotx)~c(1850:2007),type="l",ylab="24 year sine wave",xlab="Year")
myperiod=periods(plotx)
Hello,
Sorry for my poor English; I am French and do not speak English well.
Willis, I understand the ARMA process used to generate the temperature series with red noise, but I don’t understand how we can tell whether the red noise and the autocorrelation were not themselves created by the cycles (the 60-year cycle and/or all the other cycles).
For example, here in Excel is white noise for 160 years (with an SD of 0.09° and a trend of 0.16°/decade):
http://meteo.besse83.free.fr/imfix/autocoanalea.png
To the same data I add a 60-year cycle (0.2°, or 0.4° peak to peak); now the noise is red. In this idealized case, it is only the 60-year cycle that generates the autocorrelation and the red noise:
http://meteo.besse83.free.fr/imfix/autocoanosc60.png
Thanks for your explanations and your very instructive post.
tallbloke says:
August 6, 2011 at 9:41 am
FYI 17 years is the harmonic mean of the motions of Jupiter and Saturn.
Of equal importance, it is also the age of an old truck used at my property http://maps.google.com/?ll=38.229218,-122.659744&spn=0.001163,0.003055&t=h&z=19
Leif Svalgaard says:
August 6, 2011 at 9:59 am
tallbloke says:
August 6, 2011 at 9:41 am
FYI 17 years is the harmonic mean of the motions of Jupiter and Saturn.
Of equal importance, it is also the age of an old truck used at my property http://maps.google.com/?ll=38.229218,-122.659744&spn=0.001163,0.003055&t=h&z=19
Heh, my old bike is nearly your age too. And about as quick on the uptake. 😉
http://tallbloke.files.wordpress.com/2008/07/image_117.jpg
Leif Svalgaard says:
August 3, 2011 at 9:04 pm
sky says:
August 3, 2011 at 7:45 pm
you might ponder the legitimacy of periodically extending the average waveform computed by the algorithm when the underlying data is aperiodic.
This is what L&S did
====================================================================
And I criticized them for this oversimplification on their original post.
tallbloke says:
August 5, 2011 at 1:55 pm
Gosh. So I can say “If tallbloke didn’t read one of my posts, that’s understandable. But if he read it and didn’t reply, he’s a scummy bicycle-seat sniffing coward who is terrified to actually stand behind his words” and that’s acceptable?
That kind of use of the “if/then” clause is just a cheap way of making a mean, nasty accusation that you can dodge responsibility for, tallbloke. So re-reading it changes nothing, it’s just as unpleasant as it was the first time. If you want to accuse me of something, at least have the stones not to hide behind an “if-then” construction.
That said, I do appreciate the apology, that is a measure of your strength. Regarding your other point:
So if it’s not based on the barycentric cycle, and it may depend on which direction the bowshock of the heliosphere is pointing … how does that differ from just picking cycles at random?
Part of the problem is that the motion of the sun, eight planets and an ex-planet, and a host of asteroids is very, very complex. As a result, it contains just about any cycle that you might care to name. Need 10.6, or 11.3, or 20, or 183, or 60-year cycles? No problem. For the 60 years we have which way the galactic center interacts with the heliosphere, or something like that … I’m sure you see the problem. It is ex post facto cycle fitting.
As I demonstrated before, I can get about as good a fit using 60- and 40-year cycles as they got with 60- and 20-year cycles. And I’m certain I could find a forty-year cycle somewhere in the great solar orrery … but so what?
And as I have said many times and people (including yourself) keep ignoring, if you are going to say that barycentric motion is important to the climate, then you have to use the actual barycentric motion to compare with the climate. You can’t say oooooh, I see a tiny 60 year cycle in the z-axis, so I can change the phase and make it larger and fit it to the temperature data and get meaningful results. That way lies the madness of curve fitting.
Heck, I’m in Alaska right now, I’ll go count dead polar bears by helicopter, that sounds like an excellent use of funds to me …
Should there be more research on the question? Well, perhaps, but before I said “Yes” there would have to be some kind of solid preliminary findings … and the L/S analysis is a long way from that. For starters, their null hypothesis is that the underlying rate of natural warming for the entire 20th century is 0.15°C/century.
Until you (or someone else) comes up to defend and explain that choice, I wouldn’t put another penny into that line of research. And as for saying we need more funding to determine the exact cycles, I’m sorry, but that is neither an excuse for a bad paper nor a reason to give more funding.
Again, let me be clear. I find the barycentric hypothesis to be definitely possible, which means that (afaik) it doesn’t break any physical laws. What I don’t find is any solid research showing that the barycentric movement is related to much of anything … sure, there are cycles in the barycentric movement, all kinds of cycles. And there are cycles in various aspects of climate. But finding two cycles that happen to appear in both means nothing.
w.
PS – I have also commented several times that it is extremely common for a tuned cycle analysis to start going off the rails as soon as it leaves the calibration data and enters the unseen data. Most people look at that and conclude the model is flawed. L/S make the outrageous claim that the model is not flawed but human interference with the climate is why their model ran off the rails … right, pull the other one.
Explaining the most common failure mode as being a result of human effect on the climate, without a scrap of evidence to back them up, is something I would expect from Gavin Schmidt or one of the GCM modelers. Perhaps while you are explaining their null hypothesis choice you could explain why Occam’s razor wouldn’t say that by far the most probable explanation is … well … that their model is not ready for prime time …
ChristianP says:
August 6, 2011 at 9:41 am
I am glad you put the graphs to illustrate a key issue. Allow me to point out, however, that the sample acf in the second graph, which generically resembles many regional temperature indices, is NOT an example of red noise. The acf of red noise decays exponentially and does not exhibit persistently negative values at long lags. Only the acf of truly oscillatory signals does that. That persistent feature is what distinguishes such signals from red noise.
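A quick R sketch (my illustration, not the commenter’s) makes that distinction visible:
set.seed(1)
red = arima.sim(list(ar=0.9), n=160) # AR(1) red noise
osc = sin(2*pi*(1:160)/60) + rnorm(160, sd=0.3) # 60-yr cycle plus white noise
par(mfrow=c(1,2))
acf(red, lag.max=80, main="red noise: acf decays") # no persistent negative lobes
acf(osc, lag.max=80, main="oscillation: acf swings negative")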
sky says:
August 5, 2011 at 5:23 pm
Regarding the first issue, come back when you are willing to quote my statements about intuition, so I can see what you are on about. I was responding to a claim that Periodicity Analysis appeals to “primitive intuition”, when in fact it is a recognized analysis method.
I despise this kind of thing. You have it in your mind that I said something or other about intuition, so you spend lots of time attacking it. QUOTE MY WORDS if you disagree with them, not just one word out of context. All you have done is to prove beyond a doubt that you don’t have a clue what my position on intuition really is.
w.
Willis,
I have a plane to catch, but I’ll get back to this on Tuesday.
sky says:
August 5, 2011 at 5:23 pm
And you know this how?
I make no assumption that the algorithm extracts “real” cycles, that’s my whole point. Again you are off attacking straw men. QUOTE WHERE I SAID that every cycle that any analysis detects is real, much less that every cycle that periodicity analysis detects is real. You truly need to work on your reading skills. My point is that the 60-year cycles in the temperature data are very likely not real, despite being detectable by anything from Fourier analysis to the unaided eye. Perhaps you could discuss that.
Gotta run, I’ll respond to the rest of your post on red noise later.
w.
Hey Willis,
Given that all they need is a fast PC and some grey matter between their ears, funding several whole roomfuls of solar system dynamics researchers wouldn’t cost anything like as much as gets wasted sending Lonnie up Kilimanjaro and the Karakoram every year. Then at least we’d get some people who would know what they were talking about when criticising other teams’ results.
Scafetta agreed with me that this was a first foray which required a lot of following studies to resolve the linear trends they used in their simple upper bounding analysis. It seems pretty obvious to me that they included the 0.66C GHG warming as a way of getting it into the literature, as we’ve seen with so many other studies paying lip service to AGW to get past the gatekeepers.
It’s a trojan horse which can later be superseded, now they are through the gate, with another, better analysis which accounts for longer-term cycles, which is probably their intention. I think they were just being too subtle for you and the reviewers. Which means that they did a good job.
tallbloke says:
August 6, 2011 at 2:24 pm
I think they were just being too subtle for you, and the reviewers. Which means that they did a good job.
Huh? Being obscure is doing a good job?
This is a poor paper.
tallbloke says:
August 6, 2011 at 2:24 pm
I’d prefer to have someone who wouldn’t make ridiculous assumptions about the null hypothesis, and then refuse to defend it, but that’s just me.
In other words, you’re not going to answer any of my scientific questions or address any objections. Instead, you’re going to say it doesn’t matter if it’s a piece of junk, it just requires “following studies”?
And you call this science?
So you are saying that the main conclusion of their study, the subject they discuss in their abstract, the claimed GHG warming, has no support in the study itself but was just included so they could get it published? Putting a totally false claim into your paper just to get it published? I expect that of the Hockey Team. But on our side of the discussion?
And you call this science?
Oh, right, they’re too subtle for me … subtle like a two by four across the head. According to you, they’ve included total bullsh*t just to get their garbage analysis published, in order to come back later with an actual analysis … that’s far too subtle for me, all right. I mean, once they’ve besmirched their reputation by pushing junk, just to get a paper published … do you really think a better paper will repair the damage?
Why is it so hard for folks to understand that once you have lost people’s trust, it is very difficult to regain it? Since as you claim they’ve misrepresented their results just to get their paper published, didn’t you ever consider that now that they’ve deceived people once, who is going to pay any attention to them next time? I mean, other than you?
Because “includ[ing] the 0.66C GHG warming as a way of getting [their paper] into the literature” is so far from science as to be actively deceptive. I find it repugnant myself, and I would not dream of misrepresenting results in order to get them published. That way lies chaos.
w.
sky says:
August 5, 2011 at 5:23 pm
Well, that’s good. Start with an insult, and move on from there.
It might help your understanding if you looked at the acf of the HadCRUT3 data. If you can find a “distinctly oscillatory” nature to that acf, you’re not looking at the HadCRUT3 data, which has NO SUCH OSCILLATORY STRUCTURE.
Which means, of course, that all of your claims about that structure being what “produces oscillations in the real world” must be wrong, since the HadCRUT3 data doesn’t have that structure, and yet it produces oscillations in the real world … go figure.
In fact, the HadCRUT3 data has an acf structure which is quite similar to the low-order ARIMA random data that I used … WHICH IS WHY I USED THAT LOW ORDER DATA. Had I used your recommended high-order option, the acf of the random data would have been completely different from the acf of the actual data that I was trying to simulate, and as a result the random data would have been meaningless in this context.
You really should look at the data first, sky, before getting all huffy about your superior theoretical knowledge. You’ve run aground on the reef of ugly facts.
First, what is it with you and “intuition”? I made no claims that intuition alone could solve mathematical conundrums, nor that mathematicians relied on intuition alone, that’s your fantasy.
Second, you claim that professional statisticians use “maximum entropy spectral evaluation” instead of periodicity analysis. Since I doubt you’d even heard of periodicity analysis until this post, how do you know what might be used in its place? And professional signal analysts apply, not “that algorithm”, but hundreds of algorithms depending on the situation. Saying that they use one algorithm “instead” of another is a huge oversimplification of the process of choosing an algorithm for a problem. Finally, one analyst may well choose a different algorithm than another for a given problem, making your claim, well, … I’ll just call it unlikely.
Third, my claims do not rest on periodicity analysis. They can be substantiated by the spectral analysis of your choice.
Fourth, if you have a better way to estimate whether the apparent 60-year cycle in the temperature data is real or not, bring it on. Among professional signal analysts, saying you know all about a subject and then not providing data, code, examples, or your own analysis has a specific technical term. It’s called “hand-waving”. Scientific analysis, rather than hand-waving, is what professional signal analysts employ.
Thanks, sky. If you hadn’t told me that ocean waves aren’t periodic and can actually sink ships, I’m sure I would never have realized that despite being a seaman and a commercial fisherman for a good chunk of my life and sailing across the Pacific and …
This is supposed to bring home “the reality of random cycles”? Did I or anyone say that random cycles aren’t real? QUOTE MY WORDS, I’m getting tired of folks reading their own brand of nonsense into what I said. In any case, what does the “reality of random cycles” have to do with this discussion? That example makes no sense at all, it is totally unrelated to the topics in question.
w.
sky says:
August 6, 2011 at 2:02 pm
Yes, that’s right; my simulation is too simple, with only one cycle plus white noise and no other oscillations or variability capable of generating real red noise as in reality.
Willis Eschenbach says:
August 7, 2011 at 12:32 am
Because “includ[ing] the 0.66C GHG warming as a way of getting [their paper] into the literature” is so far from science as to be actively deceptive.
It no doubt seems too low for the warmista, and too high for the hardline sceptics. You can’t please everyone. Seems to me they did a pretty good job defining an upper bound at 0.66C/century and managing to get it past the gatekeepers. Well done Craig and Nicola!
I find it repugnant myself, and I would not dream of misrepresenting results in order to get them published.
I think their conclusion is reasonable given the current state of knowledge. And the current state of knowledge is why more study is needed. Not just in solar system dynamics either, as I’m sure you’ll agree.
I’d prefer to have someone who wouldn’t make ridiculous assumptions about the null hypothesis, and then refuse to defend it
Scafetta defended his and Craig’s paper the day you attacked it. It’s not his fault you now have the floor to yourself four days later. You think he should keep coming back and refreshing this page on the off-chance you might get around to replying to him? Maybe you should have done better research on Scafetta 2010, taken some time to think, and put your post up when you weren’t about to jet off to Alaska. That way you’d have been around to reply to the physicist you mis-aimed your blunderbuss at. Barn door at five paces. Whoosh. Dead pigeon.
According to you, they’ve included total bullsh*t just to get their garbage analysis published, in order to come back later with an actual analysis … that’s far too subtle for me, all right. I mean, once they’ve besmirched their reputation by pushing junk, just to get a paper published … do you really think a better paper will repair the damage?
…And you call this science?
You call this potty mouthed rant scientific discourse?
tallbloke says:
August 7, 2011 at 2:57 am
Maybe you should have done better research on Scafetta 2010
Which was thoroughly debunked here on WUWT:
http://wattsupwiththat.com/2010/06/04/new-scafetta-paper-his-celestial-model-outperforms-giss/
Leif, in your world you debunked it. I note Willis missed that thread, or gave it a miss.
Abstract of Scafetta 2010
We investigate whether or not the decadal and multi-decadal climate oscillations have an astronomical origin. Several global surface temperature records since 1850 and records deduced from the orbits of the planets present very similar power spectra. Eleven frequencies with period between 5 and 100 years closely correspond in the two records. Among them, large climate oscillations with peak-to-trough amplitude of about 0.1 °C and 0.25 °C, and periods of about 20 and 60 years, respectively, are synchronized to the orbital periods of Jupiter and Saturn. Schwabe and Hale solar cycles are also visible in the temperature records. A 9.1-year cycle is synchronized to the Moon’s orbital cycles. A phenomenological model based on these astronomical cycles can be used to well reconstruct the temperature oscillations since 1850 and to make partial forecasts for the 21st century. It is found that at least 60% of the global warming observed since 1970 has been induced by the combined effect of the above natural climate oscillations. The partial forecast indicates that climate may stabilize or cool until 2030-2040. Possible physical mechanisms are qualitatively discussed with an emphasis on the phenomenon of collective synchronization of coupled oscillators.
tallbloke says:
August 7, 2011 at 2:57 am
Scafetta is free to do what he wants. I think if he doesn’t defend his paper, he will lose credibility, so it is in his interest to defend it.
However, your claim that he defended the null hypothesis in his post is plain, flat out not true. I challenge you to show me where he defended his choice of a null hypothesis in that response. Again you bring up your claims without any facts to back them up.
So … just where did Scafetta defend his null hypothesis on this thread, or respond to any of my objections to the paper? Because as near as I can tell, he has made exactly one and only one response in this thread, he did not answer any of my questions or objections, and he did not defend his choice of a null hypothesis in that response.
Perhaps on your planet that constitutes defending your paper, tallbloke. I find it sadly inadequate.
w.
PS – on their previous thread, the only thing that Scafetta said in defense of the null hypothesis that the natural portion of the global warming is 0.15 degrees per century was this:
Maybe you consider that an explanation and defense of the choice of null hypothesis, tallbloke. For me it’s just industrial strength tap-dancing. What does the choice of 0.15° per century have to do with the start of the dataset? If it’s 0.15° per century, it is 0.15° per century no matter when the dataset started. That’s just fast and furious tapdancing to cover up a totally arbitrary choice of null hypothesis.
Willis Eschenbach says:
August 7, 2011 at 9:47 am
….
Hi Willis,
I didn’t say he defended the null hypothesis, I said he defended his paper.
The first mention of the Null Hypothesis on this thread is in your reply to Geoff Sharp at
http://wattsupwiththat.com/2011/07/30/riding-a-pseudocycle/#comment-710885 on August 2nd, long after Scafetta left his response to the original post which you didn’t reply to until yesterday, August 6th. (UK time).
However I see that you raised the null hypothesis issue several times on the original thread and didn’t get a reply. Looks to me like they generalised it from the Loehle non-tree ring proxy. These are rough heuristics we’re dealing with. Still better than the IPCC assumption of no long term underlying trend at all IMO.
My personal opinion is that they oversimplified the situation, but I’ve stopped trying to second-guess their thinking.
Thanks, tallbloke.
To recap, my objections to the paper were:
1. They have made an assumption that the period 1850-1942 represents a constant unchanging “natural” upward trend, with a rate of increase of 0.15°C per century. They have not justified this choice. When they don’t justify and explain their null hypothesis, their study is done before it starts—without a clear null hypothesis the whole thing goes directly to the circular file.
2. They have assumed that the signal can be meaningfully decomposed by assuming two linear trends. There is no evidence to substantiate this.
3. They have not compared the statistics for the fit of the two situations, one with two trends and one with one trend.
4. In fact, they have not provided statistical backup nor shown statistical significance for their various curve fits. This may be a result of the fact that the autocorrelation of their cyclical reconstruction is extremely high, so that as a result the fits are not statistically significant.
5. They have not used the barycentric data at all.
6. They have said that their 20 and 60 year cycles are present in the barycentric data, which they are. However, they have not used the actual phase and amplitude of those cycles. Instead, they have substituted their own phase and amplitude simply because they provide a better fit. Cute, but not science.
7. You say “I think their conclusion is reasonable given the current state of knowledge.” I say that estimating the human contribution in their manner is totally unsupported given the current state of knowledge, and that claiming it can be estimated to the nearest hundredth of a degree in this manner is un gran chiste (a big joke).
8. I have shown that in very similar random ARIMA datasets, spurious artifacts in the form of what look like long period cycles appear quite frequently. However, there are no cycles in those datasets, they are random. As a result, we have no reason to believe that the 60 year cycle in the HadCRUT data is real, and good reason to believe it is an artifact.
I have raised these issues several times, and provided my data and code. Scafetta has not explained or defended a single one of these points. As far as I’m concerned, the discussion is done unless Scafetta actually wants to stand behind what he wrote and explain and defend it. As Leif commented, the study is cyclomania at its worst.
w.
Leif Svalgaard says:
August 4, 2011 at 3:39 pm
Geoff Sharp says:
August 4, 2011 at 8:58 am
Your analysis is lousy Leif, no grouping, no recognition of AMP strength etc.
——————–
It is not me trying to convince the world of anything. The null hypothesis is that there is no correlation. You could improve things by providing a table [easier than a plot] that for each of your perturbations gives the time, the ‘strength’ [whatever you think it is], and its type. Then, in another table, the time of the central point of each grand minimum you think there is. That will put the whole thing on a numerical footing so it can be analyzed properly [and get you away from mere hand waving].
A table has been available since Jan 09 which shows the position of each perturbation group with the central perturbation’s planet angles also supplied. This table is compared with the Solanki/Usoskin proxy record. It is difficult to place every solar downturn into a table and of little value IMO. I have given you weighted individual AMP events that you can plot directly onto the Steinhilber graph, this will show directly the correlation if you plot the relevant color codes.
At present you are doing the handwaving, making statements that the correlations are weak or non existent without showing any examples of your claims (or those shown actually proved my case). My graphs already show the correlations.
http://tinyurl.com/2dg9u22/images/solanki_sharp_detail1.jpg
http://tinyurl.com/2dg9u22/images/c14nujs1.jpg
Geoff Sharp says:
August 7, 2011 at 11:46 pm
It is difficult to place every solar downturn into a table and of little value IMO. […]
My graphs already show the correlations.
Why is it difficult to make a table of what you consider to be a ‘downturn’?
Sorry, the graphs do not show any convincing correlation.
Leif Svalgaard says:
August 8, 2011 at 3:34 am
Why is it difficult to make a table of what you consider to be a ‘downturn’?
Because they all vary in length. I have updated my Solanki graph back to around 1150 BC showing the same color coding as before. The AMP events are entered correctly via the spreadsheet. I cannot see how you would require any more information.
http://tinyurl.com/2dg9u22/images/solanki_sharp_detail1.jpg
Sorry, the graphs do not show any convincing correlation.
This is hardly a scientific assessment, and expected, as I stated earlier. There is only one area, around 600 BC, that could be questioned, with the probable result being influenced by “Wilson’s Law”. I am comparing a detailed dataset with a proxy record spanning 3200 years; the proxy record is only a close approximation of the solar record, derived from the solar wind, which is not a true representation of actual solar output. Even so, the match is very close, with all five MAJOR downturns coinciding with strong AMP events and the rest matching the strength of the AMP events of the day. The MWP also correlates with some precision. The obvious feature is that no large grand minimum coincides with weak AMP events.
The sunspot record is also a very close match, with individual events like SC20 and SC24 matching with precision. I think you are living in denial.
Geoff Sharp says:
August 8, 2011 at 4:30 pm
“Why is it difficult to make a table of what you consider to be a ‘downturn’?”
Because they all vary in length.
So you just give the [deepest] year of the Grand Minimum. What is difficult about that?
I cannot see how you would require any more information.
The information may be there scattered about on several graphs. This makes it hard to do anything with it. So it is up to you to make a [new – if need be] table with the years, quantification, type, whatever and which Grand Minimum you think is related to your ‘perturbation’.
I think you are living in denial.
Personal comments like that do not belong in reasonable discourse. Stick to the data.
Leif Svalgaard says:
August 8, 2011 at 6:08 pm
So you just give the [deepest] year of the Grand Minimum. What is difficult about that?
Your method is not satisfactory. For a start, not every downturn is a grand minimum, and each downturn has a different period length. Giving one date would not be sufficient to match against multiple strong AMP events when they occur. One date also does not convey the depth of the downturn.
Personal comments like that does not belong in reasonable discourse. Stick to the data.
You have more than enough data; you are now procrastinating. All the data required is available in the one graph, and I can send the spreadsheet if required. If you cannot find fault with the data provided, I will have to assume your correlation statements are without supporting evidence.
http://tinyurl.com/2dg9u22/images/solanki_sharp_detail1.jpg