Guest Post by Willis Eschenbach
Anthony recently highlighted a new study which purports to find that the North Atlantic Oscillation (NAO) is synchronized to the fluctuations in solar activity. The study is entitled “Solar forcing synchronizes decadal North Atlantic climate variability”. The “North Atlantic Oscillation” (NAO) refers to the phenomenon that the temperatures (and hence the air pressures) of the northern and southern regions of the North Atlantic oscillate back and forth in opposition to each other, with first the northern part and then the southern part being warmer (lower pressure) and then cooler (higher pressure) than average. The relative swings are measured by an index called the North Atlantic Oscillation Index (NAOI). The authors’ contention is that the sun acts to synchronize the timing of these swings to the timing of the solar fluctuations.
Their money graph is their Figure 2:
Figure 1. Figure 2a from the study, showing the purported correspondence between solar variations (gray shaded areas at bottom) and the North Atlantic Oscillation Index (NAOI). Original Caption: (a) Time series of 9–13-year band-pass filtered NAO index for the NO_SOL [no solar input] (solid thin) and SOL [solar input] (solid thick) experiments, and the F10.7 cm solar radio flux (dashed black). Red and blue dots define the indices used for NAO-based composite differences at lag 0 (see the Methods section). For each solar cycle, maximum are marked by vertical solid lines.
From their figure, it is immediately apparent that they are NOT looking at the real world. They are not talking about the Earth. They are not discussing the actual North Atlantic Oscillation Index nor the actual f10.7 index. Instead, their figures are for ModelEarth exclusively. As the authors state but do not over-emphasize, neither the inputs (“F10.7”) to the computer model nor the outputs of the computer model (“Filtered NAOI”) are real—they are figments of either the modelers’ or the model’s imaginations, understandings, and misapprehensions …
The confusion is exacerbated by the all-too-frequent computer modelers’ misuse of the names of real observations (e.g. “NAOI”) to refer to what is not the NAOI at all, but is only the output of a climate model. Be clear that I am not accusing anyone of deception. I am saying that the usual terminology style of the modelers makes little distinction between real and modeled elements, with both often being called by the same name, and this mis-labeling does not further communication.
This brings me to my first objection to this study, which is not to the use of climate models per se. Such models have some uses. The problem is more subtle than that. The difficulty is that the outputs of climate models, including the model used in this study, are known to be linear or semi-linear transformations of the inputs to those climate models. See e.g. Kiehl's seminal work, Twentieth century climate model response and sensitivity, as well as my posts here and here.
As a result, we should not be surprised that if we include solar forcings as inputs to a climate model, we will find various echoes of the solar forcing in the model results … but anyone who thinks that these cyclical results necessarily mean something about the real world is sadly mistaken. All that such a result means is that climate models, despite their apparent complexity, function as semi-linear transformation machines that mechanically grind the input up and turn it into output, and that if you have cyclical input, you’ll be quite likely to get cyclical output … but only in ModelEarth. The real Earth is nowhere near that linear or that simple.
My second objection to their study is, why on earth would you use a climate model with made-up “solar forcing” to obtain modeled “Filtered NAOI” results when we have perfectly good observational data for both the solar variations and the NAO Index??? Why not start by analyzing the real Earth before moving on to ModelEarth? The Hurrell principal component NAOI observational dataset since 1899 is shown in Figure 2a. I’ve used the principal component NAO Index rather than the station index because the PC index is used by the authors of the study.
Here you can see the importance of using a longer record. Their results shown in Figure 1 above start in 1960, a time of relative strength in the 9-13-year band (red line above). But for the sixty years before that, there was little strength in the same 9-13-year band. This kind of appearance and disappearance of apparent cycles, which is quite common in climate datasets, indicates that they do not represent a real persisting underlying cycle.
Which brings me to my next objection. This is that comparing a variable 11-year solar cycle to a 9-13-year bandpass filtered NAOI dataset seemed to me like it would frequently look like it was significant when it wasn’t significant at all. In other words, from looking at the data I thought that similar 9-13-year bandpassed red noise would show much the same type of pattern in the 9-13-year band.
To test this, I used simple “ARMA” red noise. ARMA stands for “Auto-Regressive, Moving Average”. I first calculated the lag-1 AR and MA components of the DJF NAOI data. These turn out to be AR ≈ 0.4, and MA ≈ – 0.2. This combination of a positive AR value and a negative MA value is quite common in climate datasets.
Then I generated random ARMA “pseudo-data” of the same length as the DJF NAOI data (116 years), and applied the 9-13-year bandpass filter to each pseudo-dataset. Figure 2b shows four typical random red-noise pseudo-data results:
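A minimal sketch of that procedure, for anyone who wants to try it at home (in Python; the AR and MA values come from the text above, but the particular Butterworth filter design is my assumption, since the post does not specify one):

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(42)

def arma_pseudo_data(n, ar=0.4, ma=-0.2, rng=rng):
    """ARMA(1,1) red noise: x[t] = ar*x[t-1] + e[t] + ma*e[t-1]."""
    e = rng.standard_normal(n + 100)   # extra 100 samples as burn-in
    x = np.zeros_like(e)
    for t in range(1, len(e)):
        x[t] = ar * x[t - 1] + e[t] + ma * e[t - 1]
    return x[100:]

def bandpass_9_13(x, fs=1.0):
    """Butterworth band-pass keeping periods of 9-13 years (annual data)."""
    low, high = 1 / 13, 1 / 9          # cycles per year
    b, a = butter(3, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, x)

pseudo = arma_pseudo_data(116)         # same length as the DJF NAOI record
filtered = bandpass_9_13(pseudo)
```

Running this repeatedly with different seeds gives pseudo-datasets of the kind shown in the panels below.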
As I suspected, red noise datasets of the same ARMA structure as the DJF NAOI data generally show a strong signal in the 9-13-year range. This signal typically varies in strength across the length of the pseudo-datasets. However, given that these are random red-noise datasets, it is obvious that such strong signals in the 9-13-year range are meaningless.
So the signal seen in the actual DJF NAOI data is by no means unusual … and in truth, well … I fear to admit that I’ve snuck the actual DJF NAOI in as the lower left panel in Figure 2b … bad, bad dog. But comparing that with the upper left panel of the same Figure illustrates my point quite clearly. Random red-noise data contains what appears to be a signal in the 9-13-year range … but it’s most likely nothing but an artifact, because it is indistinguishable from the red-noise results.
My next objection to the study is that they have used the “f10.7” solar index as a measure of the sun’s activity. This is the strength of the solar radio flux at the 10.7 cm wavelength, and it is a perfectly valid observational measure to use. However, in both phase and amplitude, the f10.7 index runs right in lock-step with the sunspot numbers. Here’s NASA’s view of a half-century of both datasets:
As you can see, using one or the other makes no practical difference at the level of analysis done by the authors. The difficulty is that the f10.7 data is short, whereas we have good sunspot data much further back in time than we have f10.7 data … so why not use the sunspot data?
My next objection to the study is that it seems the authors haven’t heard of Bonferroni and his correction. If you flip a group of 8 coins once and they come up all heads, that’s very unusual. But if you throw the same group of 8 coins a hundred times, somewhere in there you’ll likely come up with eight heads.
In other words, how unusual something is depends on how many places you’ve looked for it. If you look long enough for even the rarest relationship, you’ll likely find it … but that does not mean that the find is statistically significant.
In this case, the problem is that they are only using the winter-time (DJF) value of the NAOI. To get to that point, however, they must have tried the annual NAOI, as well as the other seasons, and found them wanting. If the other NAOI results were statistically significant and thus interesting, they would have reported them … but they didn’t. This means that they’ve looked in five places to get their results—the annual data as well as the four seasons individually. And this in turn means that to claim significance for their find, they need to show something that is rarer than if they had just looked in one place.
The “Bonferroni correction” is a rough-and-ready way to calculate the effect of looking in more places or conducting more trials. The correction says that whatever p-value you consider significant, say 0.05, you need to divide that p-value by the number of trials to give the equivalent p-value needed for true significance. So if you have 5 trials, or five places you’ve looked, or five flips of 8 coins, at that point to claim statistical significance you need to find something significant at the 0.05 / 5 level, which is a p-value of less than 0.01 … and in climate, that’s a hard ask.
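The arithmetic is simple enough to check directly; here’s a quick sketch (Python), covering both the coin example above and the corrected p-value:

```python
# Chance of all heads on a single throw of 8 fair coins
p_all_heads = 0.5 ** 8                            # 1/256, about 0.0039

# Chance of seeing all heads at least once in 100 throws
p_at_least_once = 1 - (1 - p_all_heads) ** 100    # roughly 1 chance in 3

# Bonferroni-corrected threshold for 5 looks at the data
alpha = 0.05
n_looks = 5
corrected_alpha = alpha / n_looks                 # 0.01
```

So a result that would be "rare" on one throw becomes quite likely over a hundred throws, and the significance threshold has to shrink accordingly.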
So those are my objections to the way they’ve gone about trying to answer the question.
Let me move on from that to how I’d analyze the data. Here’s how I’d go about answering the same question, which was, is there a solar component to the DJF North Atlantic Oscillation?
We can investigate this in a few ways. One is by the use of “cross-correlation”. This looks at the correlation of the two datasets (solar fluctuations and NAO Index) at a variety of lags.
As you can see, the maximum short-lag positive correlation is with the NAO data lagging the sunspots by about 2-3 years. But the fact that the absolute correlation is largest with the NAO data leading the sunspots (negative values of lag) by two years is a huge red flag, because it is not possible that the NAO is influencing the sun. This indicates we’re not looking at a real causal relationship. Another problem is the small correlation values. The r^2 of the two-year-lagged data is only 0.03, and the p-value is 0.07 (not significant). And this is without accounting for the cyclical nature of the sunspot data, which will show alternating positive and negative correlations of the type shown above even with random “red-noise” data. Taken in combination, these indicate that there is very little relationship of any kind between the two datasets, causal or otherwise.
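For reference, a lagged cross-correlation of this kind can be sketched as follows (Python; the sign convention — positive lag meaning the second series lags the first — is my assumption, chosen to match the description above):

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Pearson correlation of x against y shifted by each lag.
    Positive lag: y lags x by `lag` steps; negative lag: y leads x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[: n - lag], y[lag:]
        else:
            a, b = x[-lag:], y[: n + lag]
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out
```

Feeding this the sunspot and DJF NAOI series and plotting the result against lag produces the kind of figure discussed above.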
Next, we can search for any relationship between the solar cycle and the DJF NAOI using Fourier analysis. To begin with, here is the periodogram of the annual sunspot data. As is my habit, I first calculate the periodogram of the full dataset. Then I divide the dataset in two, and calculate the periodograms of the two halves individually. This lets me see if the cycles are present in both halves of the data, to help establish if they are real or are only transient fluctuations. Here is that result.
As you can see, the three periodograms are quite similar, showing that we are looking at a real, persistent (albeit variable) cycle in the sunspot data. This is true even for the ~ 5-year cycle, as it shows up in all three analyses.
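The split-halves check described above can be sketched like this (Python; scipy’s periodogram is a stand-in for whatever implementation was actually used):

```python
import numpy as np
from scipy.signal import periodogram

def split_periodograms(x, fs=1.0):
    """Periodogram of the full series and of each half.
    A real, persistent cycle should show up in all three;
    a transient fluctuation generally will not."""
    n = len(x)
    return (periodogram(x, fs=fs),
            periodogram(x[: n // 2], fs=fs),
            periodogram(x[n // 2:], fs=fs))

# Example: a persistent ~11-year cycle plus a little noise
years = np.arange(110)
noise = 0.1 * np.random.default_rng(0).standard_normal(110)
series = np.sin(2 * np.pi * years / 11) + noise

for freqs, power in split_periodograms(series):
    peak_period = 1 / freqs[1:][np.argmax(power[1:])]  # skip zero frequency
    # each peak should come out near 11 years
```

For the sunspot data all three peaks line up; for the NAOI, as shown next, they do not.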
However, the situation is very different with the DJF NAOI data.
Unlike the sunspot data, the three NAOI periodograms are all very different. There are no cycles common to all three. We can also see the lack of strength in the 9-13-year region in the first half compared with the second half. All of this is another clear indication that there is no strong persistent cycle in the NAOI data in the 9-13-year range, whether of solar or any other origin. In other words, the DJF NAOI is NOT synchronized to the solar cycle as the authors claim.
Finally, we can investigate their claim that the variations in solar input are driving the DJF NAOI into synchronicity by looking at what is called “Granger causality”. An occurrence “A” is said to “Granger-cause” occurrence “B” if we can predict B better by using the history of both A and B than we can predict B by using just the history of B alone. Here is the Granger test for the sunspots and the DJF NAOI:
> grangertest(Sunspots,DJFNAOI)
Granger causality test

Model 1: DJFNAOI ~ Lags(DJFNAOI, 1:1) + Lags(Sunspots, 1:1)
Model 2: DJFNAOI ~ Lags(DJFNAOI, 1:1)
  Res.Df Df      F Pr(>F)
1    112
2    113 -1 0.8749 0.3516
The Granger causality test looks at two models. One model (Model 2) tries to predict the DJF NAOI by looking just at the previous year’s DJF NAOI. The other model (Model 1) includes the previous year’s sunspot information as an additional independent variable, to see if the sunspot information helps to predict the DJF NAOI.
The result of the Granger test (p-value of 0.35) does not allow us to reject the null hypothesis, which is that there is no causal relationship between sunspots and the NAO Index. It shows that adding solar fluctuation data does not improve the predictability of the NAOI. And the same is true if we include more years of historical solar data as independent variables, e.g.:
> grangertest(Sunspots,DJFNAOI,order = 2)
Granger causality test

Model 1: DJFNAOI ~ Lags(DJFNAOI, 1:2) + Lags(Sunspots, 1:2)
Model 2: DJFNAOI ~ Lags(DJFNAOI, 1:2)
  Res.Df Df      F Pr(>F)
1    109
2    111 -2 0.4319 0.6504
This is even worse, with a p-value of 0.65. The solar fluctuation data simply doesn’t help in predicting the future NAOI, so again we cannot reject the null hypothesis that there is no causal relationship between solar fluctuations and the North Atlantic Oscillation.
CONCLUSIONS
• If you use a cyclical forcing as input to a climate model, do not be surprised if you find evidence of that cycle in the model’s output … it is to be expected, but it doesn’t mean anything about the real world.
• The cross-correlation of a century’s worth of data shows that the relationship between the sunspots and the DJF NAOI is not statistically significant at any lag, and it does not indicate any causal relationship between solar fluctuations and the North Atlantic Oscillation.
• The periodogram of the NAOI does not reveal any consistent cycles, whether from solar fluctuations or any other source.
• The Granger causality test does not allow us to reject the null hypothesis that there is no causal relationship between solar fluctuations and the North Atlantic Oscillation.
• Red-noise pseudodata shows much the same strong signal in the 9-13-year range as is shown by the DJF NAOI data.
And finally … does all of this show that there is no causal relationship between solar fluctuations and the DJF NAO?
Nope. You can never do that. You can’t demonstrate that something doesn’t exist.
However, it does mean that if such a causal relationship exists, it is likely to be extremely weak.
Regards to all,
w.
My Customary Request: If you disagree with someone, please quote the exact words that you object to. This lets us all understand the exact nature of your objections.
again Willis, concise, clear. Thank you
1) Have you determined the distribution of the cross correlation between randomly generated sets of red data? Specifically there are minima at ~ +/- 10 years in the CCF. Since you are generating random data, with a random inter-sample phase relationship, calculation of the limits of these minima is important.
2) The paper appears to discuss non-linear systems. The periodograms are typical of entrainment, which occurs when a non-linear system is perturbed with a quasi-periodic input. Therefore I don’t think that linear analysis is sufficient to determine a relationship.
I agree that modelling in this case gives some questionable results.
Thanks, RCS. The minima at “± 10 years in the CCF” are the result of the cyclical nature of the solar fluctuations, and will appear no matter what you compare it with. But since the CCF data is not statistically significant as it stands, calculating the CCF vs random data is not important.
Next, while the paper discusses non-linear systems, it specifically claims that the NAOI is “synchronized” with the solar fluctuations … and I don’t need non-linear research to investigate that claim. Either their timing is the same or it is not … and it is most definitely not the same.
w.
Also in conformity with the terminology of scientific method (“as she is spoke”). Even if on further review you are challenged as to your findings, this is of illustrative value for that reason alone.
“””””….. If you flip a group of 8 coins once and they come up all heads, that’s very unusual. But if you throw the same group of 8 coins a hundred times, somewhere in there you’ll likely come up with eight heads. ….”””””
If you flip a set of eight coins bearing the serial numbers 1,2,3,4,5,6,7,8.
And you record which coins come up heads and which coins come up tails, how likely is it for any one of those patterns to come up ??
And is that probability any different from the probability of all heads, or all tails ??
g
on another subject. You plot the discontinuous function consisting of straight line sections from dot to dot.
Then you filter it with your low pass filter and voilà, you now have a continuous function.
If one assumes that the discontinuous function is actually just samples of what presumably is also a continuous function, why does nobody use some sort of cubic spline or other interpolation algorithm to approximate the original continuous function from which the samples were obtained?
I make “scatter plots” all the time, and plot them in Excel, and it will draw a nice smooth curve through my data points with no discontinuities; and that presumably is a more accurate presentation of that data.
g
If you make your own ‘ coins ‘ by laser cutting them out of wafers of diamond, that are perfectly flat and aligned to a particular crystal orientation (you can figure out which ones), then you now have eight identical coins with indistinguishable faces.
If you toss each of those coins once, then there are 256 different ways the coins can land, and no one of those patterns is more likely than any other pattern.
Of course you can’t see the pattern, because you can’t distinguish the coins from each other or the two faces. But Mother Gaia knows what the pattern is, and if she would speak up she will tell you there are 256 different but equally likely patterns.
Now if instead of diamond wafers, we cut these coins from wafers of Gallium Arsenide (GaAs) , then now we can distinguish the heads and tails. There is a Gallium surface, and there is an Arsenic surface.
There still are 256 different patterns all equally likely, and the average Joe can’t tell which is which; but Mother Gaia knows. Well, we can add the serial numbers to see the pattern ourselves.
So you see, the case of all heads or all tails or hthththt, or thththth, is no more unlikely than any other result.
It is the observer who arbitrarily chooses to ignore the fact that each coin is unique.
So 8 heads isn’t any more unlikely than 5 heads, or 3 heads.
So why don’t we ignore the Gallium and Arsenic faces, and then claim that we never ever get a different result when we toss eight coins; they always land the same way.
Statisticians like to claim all sorts of erroneous things about numerical configurations, that simply aren’t true.
The infamous case of the first Selective Service draft lottery, with 366! possible outcomes, where some statisticians claimed it was not random, is easily resolved by substituting 366 different icon pictures for the numbers 1-366.
Immediately, ALL patterns disappear, and no selection order is more unlikely than any other, including pulling the dates in calendar order.
g
Yes, always use real data.
Sometimes it must be adjusted. It’s true. (Mosh is right about that, though my method is different.) But it must be a demonstrable adjustment, fully explained, with the raw data it was derived from archived. All data. All methods.
Thanks, Evan. I would call it “quality controlled and corrected” rather than “adjusted”, but that’s just me. If I see a string of daily temperatures in degrees C that goes “15°, 14°, 15°, 16°, 160°, 15°, 16°”, I will indeed correct the “160” to read “16”.
And I agree with you and Mosh that all data need quality control and correction, because we’re human and because measurement methods change. If I know that the new thermometer reads 2° warmer than the old thermometer, I’ll correct the error instead of throwing out the data. One problem in climate science is that data is so scarce it’s hard to throw anything away. However, as you say, any adjustments need to be documented and explained, with the data saved at all steps. I believe this is the case with the Berkeley Earth data.
And you are also right that at the bottom you need to have facts. Data. Observations of the real world. Measurements. Instrument readings. Not the ideas of the programmers made real and sanctified by being passed through the unplumbable bowels of an iterative climate model, but the dimensions of objective reality. As Robert Heinlein remarked,
My best to you,
w.
Willis
Whilst one can be reasonably certain that the 160deg entry is erroneous, and whilst one can understand why you consider that the entry ought to have read 16deg, no one can be certain that that is indeed the case.
For example, perhaps the true data was 18 deg but the 8 was carelessly written at an angle and was not a continuous stroke of the pen such that the transcriber thought that the 8 looked more like a 6 and a 0 (ie., 60), thereby transcribing 18 as 160.
When there is an obvious error in the data, it may be better to simply ignore that one entry. To seek to ‘correct’ it may impose a different ‘error’.
If the data stream is too riddled with errors, perhaps the better option is to accept that the data is not fit for purpose rather than seeking to ‘correct’ almost every entry recorded. If only it were as simple as having a control whereby one knows and has measured that the new thermometer is reading 2 deg C warmer than the old. However, that is not the position in the temp data sets, where there are numerous (and endless) adjustments and readjustments to the past, each of different extent.
Whilst you are correct that the data sets are thin on the ground, it would be preferable to conduct an audit of the quality of the data from each weather station and ascertain those that are best sited, those with the most reliable (consistent) data, and those that have the longest record of consistent data, and work with the few.
I would suggest that it is better to work with a few good quality data sets rather than lots and lots of rubbish data. Whilst everyone knows that there is no such thing as GLOBAL warming (climate is regional and so is the response), IF GLOBAL warming is truly global why would one need more than say a few hundred well spaced good quality temperature data sets? It makes no sense to include a load of rubbish just to bring up numbers.
In fact just look how sparsely sampled the globe is, particularly in central Africa, the central belt of South America, central Australia, the great plains of Russia, Northern Canada, Alaska, the Poles etc.
The problem with the thermometer record is the lack of a quality audit of the stations themselves, and thence a proper quality audit of the raw data from the best sited stations. Climate scientists should have worked with the cream, not the crud..
Rv
It is not just what data and how to correct it; it is also who corrects it. For the same reason Willis brings up coin flips, people invariably see and create patterns in things that don’t have them. You can tell a random set of numbers generated by humans from those generated by a computer, because humans try to make things look like there isn’t a pattern. Humans would rarely if ever generate a number like 5555; they would generate 54732 or something that doesn’t look random. People will subconsciously see patterns that conform to their nature and ignore patterns that don’t. If you are inclined to believe the earth is warming, then when you examine data that supports this view, inconsistencies to the warm side will make sense and you will tend not to correct them; however, the opposite will be true of inconsistencies toward the cooling side. This would not be an issue if the people doing the work were of random persuasions, but they are not, and thus you would expect to see corrections that are one-sided, and I believe we do.
“Whilst one can be reasonable certain that the 160deg entry is erroneous, and whilst one can understand why you consider that the entry ought to have read 16deg, no one can be certain that that is indeed the case.”
No one can be certain that any observer ever wrote down any number accurately.
no one can be certain that calibrated instruments don’t go out of calibration when you are not looking and back into calibration when you check their calibration.
measurement is not about certainty. Logic and math (2+2 =4) is about certainty.
here is a sweet example.
In a UHI study I did I used raw data. I forgot to flip the QC switch so I even used bad data.
one data point was 15000C
I was happy because the study showed a UHI effect.. when I turned QC on… the effect … oops …
got cut in half.
When you deal with historical data there are choices you have to make. picking only the “good” data
is still a choice since there is no clear definition of what “good” data is.
The best you can do is make your choices, document your choices, test other choices.
and recommend better approaches as folks move forward.
I’ve yet to see a skeptic actually try to implement and defend any approach. except perhaps jeffid and romanM. And they found that the earth has warmed.
Out of the ballpark Willis.
I look forward to seeing more “is it noise?” analysis. It’s currently my favorite topic.
Noise can also exhibit trends. You should run with that and see if the trend in global temperature records is significant or not – is the trend just random variation of the long-term temperature?
Here’s a paper that talks about the general topic of using noise to determine if a signal component is significant or not:
https://www.dropbox.com/s/lw1kzdfjw0ifcdo/10.1.1.28.1738.pdf?dl=0
(from http://paos.colorado.edu/research/wavelets/)
Also this book on long-term dependencies talks a bit about the topic and has lots of fun references I’m following up on:
https://books.google.es/books?id=jdzDYWtfPC0C&redir_esc=y
(paper: http://wwwf.imperial.ac.uk/~ejm/M3S8/Problems/beran92.pdf)
I’m still trying to find the original papers on the general methods of using appropriately shaped noise in Monte Carlo simulations to see if a signal has significance. Let me know what you have read please.
Best regards,
Peter
In the presence of Lorenz-type transients, the effect of systematic environmental changes on present-day climate (changes, for example, involving secular increases of CO2 or other consequences of human activities) might be so badly confounded as to be totally unrecognisable.
From j Murray Mitchell’s introductory presentation to the Climatic Change Workshop at the SCEP conference, 1970. (He was using Lorenz’s non-linearity findings to draw into question the presumption that climatic equilibrium was a ‘slave’ to external forcing. Lamb was most impressed by this short paper and it dominated his review of SCEP in Nature.)
” I am saying that the usual terminology style of the modelers makes little distinction between real and modeled elements, with both often being called by the same name, and this mis-labeling does not further communication.”
That is the secret of their modus operandi, the vague terminology allows believers a conflation of virtual and factual reality to create the illusion of impending climate disaster.
Also, higher pressures result in higher temperatures, not lower ones.
A stationary warm spot causes air to rise – low pressure.
Hi Willis,
In regards to the Bonferroni Correction, it applies to testing multiple hypotheses against one trial. That is, asking many questions of the data for ONE trial. It is obvious that the more questions you ask about your data sample, the more likely you are to get a statistically significant answer to one of those questions (a bit like the birthday problem) even when it is in error. That is why the significance probabilities are adjusted in an attempt to take this situation into account.
If this is what you were implying then I found you were not clear enough, you seem to be saying that looking for one particular result in many different subsets would result in the answer you were seeking, which is something different.
The more trials you perform can only increase your confidence in the analysis if the results of each trial are statistically valid. And (thinking of coins) one of those results may be 8 heads. But the overall probability in the sequence of trials of 8 heads appearing should represent the actual probability of that event occurring, and you should not present the results of this one trial as proof of your hypothesis.
You seem to be saying that they use multiple views on the data to obtain the result they are looking for (I have not checked to see if this is the case.)
I am new at statistics and like you I am quite prepared to be wrong (life is a learning experience), but as I have read this, this is how I see it.
Warm Regards,
Mark.
Mark, the Bonferroni correction applies to either situation, either for multiple hypotheses in one trial, or for finding a “significant” result in repeated trials. As I said, the more places you look, the more likely you are to find a supposed “statistically significant occurrence”. The correction makes the required p-value smaller to account for that fact.
All the best,
w.
Mark, Willis is correct.
he is one of the few folks who get this dilemma.
I realized it when I was looking for GCR signals in cloud data.
The cloud data was years of daily cloud data. In geographic bins.
and at different pressure bands ( like 12 )
So think about that.
Here is what the theory says
1. When GCR increase there is the possibility of increased formation of CCN (cloud condensation nuclei) and thus of clouds
But its not that simple. What if there are already 100% clouds in an area when the GCR increase?
What if there are already enough CCN? You see, the process has a limit…
So now you start to look for increasing clouds.. where do you look? you’ve got thousands
of cells around the globe and 12 different pressure levels.
So you look globally at high clouds.. no joy, lower clouds, no joy, lowest clouds.. no joy.
Then you think.. maybe only high latitude clouds.. so you bin by latitude and test again
no joy
Then you think.. ok, I have to only look at cells which were first clear sky and then become cloudy
haha.. like clouds don’t move…
Suppose in the end you find clouds over holland increase.
What have you shown?
Well when you start with a VAGUE theory and No specific prediction, you end up hunting through data.
you are doing EDA NOT hypothesis testing
Once you finish your EDA.. then you are in a position to wait for more data and see if your findings hold
Suppose in the end you find clouds over holland increase.
What have you shown?
in the netherland mountains. mountains.
wholly go lightly.
Noisy signal + Filter + Wiggle-matching = spurious results.
I, too, noted the usual wiggle-matching exercise in the original post, so didn’t comment. One variable aligns with the other perfectly… except where it doesn’t. And I noticed where you snuck the actual DJF NAOI into Figure 2b. Clever dog! Nice job all around, in fact.
Good job, Willis, thank you. I wonder why Nature’s peer review did not notice that when you run a noise through a 9-13 year bandpass filter, you find a signal. Or try a 10-12 year bandpass, or a 10.5-11.5 year bandpass. A signal is always there, what a surprise – and a level of mathematical sophistication in the climatologic community is such they consider it worth publishing. (Needless to say, the signal will be there even with a 15-17 year bandpass, but then the Sun would no longer be an obvious culprit.)
They use ‘model earth’ because they hate reality. It is obvious they are aiming at pleasing powerful people who want to cry ‘wolf’ or ‘fire’ in a crowded theater so they can fleece the sheep. Nothing is allowed to stand in their way so ‘researchers’ who cry wolf and fire are richly rewarded so we get an army of howling lunatics yelling we are going to roast to death unless we tax CO2 exhalations.
As usual, you are far more adept at presenting the case than I will ever hope to be. Especially since mothballing my old Mac and the Statview software I loved so much. These days, long days in the classroom, and many weekends doing the same, leave my evenings reserved for putting my feet up in an easy chair. Besides, casting pearls before swine will not result in the swine being anything other than the pig it was before wisdom was offered.
No need for cynicism, dear; Willis’ essay validates my notion that science can be explained to anyone, so long as the person doing the explaining is aiming at the truth!
Pamela, always good to hear from you, and thanks for your kind words. However, I can’t agree about the pearls. I scatter my ideas on the electronic winds and watch them take to the air, and I trust that some of them will find fertile ground. Throw and go, and onwards.
My best wishes to you, glad to hear that your teaching job is working out well.
w.
Running models with innovative inputs and publishing the results has become a cottage industry, but the findings are not converging into a unified synthesis of a theory of global warming. Thank you for this report. Really appreciate the high quality of the Eschenbach posts.
“the authors haven’t heard of Bonferroni and his correction”
The so-called Bonferroni correction is named for Carlo Emilio Bonferroni, whose probability inequalities it rests on; the sequentially rejective refinement in common use today is due to Holm. [Holm, S. (1979). A simple sequentially rejective multiple test procedure, Scandinavian Journal of Statistics, 6:2:65-70.]
It is an important issue and one often ignored in climate science and by the IPCC
The IPCC uses an alpha value of 0.1 per comparison.
At that rate, if you make say 5 comparisons, the probability of finding at least one false “effect” in random numbers is 1-(1-0.1)^5 = 41%.
This is why climate science is littered with spurious findings.
Also, their use of alpha values of 0.05 and 0.10 is inconsistent with “Revised standards for statistical evidence”, published in PNAS, in which an alpha of 0.001 is proposed to improve reproducibility of results. Here is the link: http://www.pnas.org/content/110/48/19313.abstract
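The 41% figure above, and what the Bonferroni-style fix buys you, can be checked in a few lines. A quick sketch (the Monte Carlo is just a sanity check of the arithmetic, not anything from the cited paper):

```python
import numpy as np

rng = np.random.default_rng(2)

alpha, m = 0.1, 5      # IPCC-style per-comparison alpha, 5 comparisons

# Analytic family-wise error rate: P(at least one false positive).
fwer = 1 - (1 - alpha) ** m          # = 0.40951, the ~41% quoted above

# Monte Carlo check: m independent null tests, each a uniform p-value.
trials = 100_000
p = rng.uniform(size=(trials, m))
mc_fwer = np.mean((p < alpha).any(axis=1))

# Bonferroni: test each comparison at alpha/m to cap the family-wise rate.
bonf_fwer = np.mean((p < alpha / m).any(axis=1))   # back near alpha

print(f"analytic FWER: {fwer:.3f}, simulated: {mc_fwer:.3f}, "
      f"Bonferroni-corrected: {bonf_fwer:.3f}")
```

With the correction, the chance of at least one false "effect" across the whole family of tests drops back to roughly the nominal alpha.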
from the paper: “Given the quasi-oscillatory behaviour of the solar cycle and that 4% of the total NAO variance can be explained by the solar variability in our SOL experiment, we believe that this mechanism could potentially improve decadal predictions. Although this contribution is relatively small regarding the NAO total variance, it represents a significant increment to other sources of predictable decadal variability [35].”
from you: However, it does mean that if such a causal relationship exists, it is likely to be extremely weak.
Thank you for your essay.
Thanks for the comments, Matthew.
They claim that 4% of the ModelEarth NAO variance is due to an imaginary sun. I don’t doubt that that is possible. Imaginary suns are known to have strange powers.
I, on the other hand, was talking about the real world, wherein I can’t find evidence that any of the real NAO variance is explainable by the real sun.
So I see no equivalence of the two statements. Only one of them is about reality.
Best regards,
w.
They claim that 4% of the ModelEarth NAO variance is due to an imaginary sun. I don’t doubt that that is possible. Imaginary suns are known to have strange powers.
I was wondering how you would word your response. I didn’t want to write a leading question. No, really! That’s a good response. I have to reread the paper, but like you I did not find a direct model to data comparison. Their 4% figure comes across as a sort of extremely high upper bound.
It’s kind of like: No matter how you torture the data, they’ll only confess to 4%, and it isn’t credible.
Somebody invented the phrase “metaphysical anguish” to denote the case where a scientist or philosopher has done much work and come up with little. IIRC, it was Lovejoy in “The Great Chain of Being”. Nowadays we call it cognitive dissonance. After all that work, the authors just can’t not publish; and besides, they have to publish or perish. Thanks again for your work.
True, we need to keep in mind that no results ARE a result. Just because your hypothesis is rejected doesn’t mean you haven’t accomplished anything. The day academics equated falsifying a hypothesis with lack of advancement was the day Science started to decay.
Willis,
Don’t these people have others in their field take a look at the premise, data, suppositions, and conclusions – even BEFORE they submit to peer review?
You often poke giant holes – many glaring – in researchers’ papers. How do these “scientists” miss the obvious?
Has science come to this? …
‘Fraid so …
w.
The giant holes are almost everywhere you look. Here is the story of how Kerry Emanuel, a climate scientist of great repute, tried to find a rising trend in hurricane activity. First he tried the ACE index (wind speed squared), but no trend was found. So he arbitrarily cubed the wind speed and called it the PDI (power dissipation index). But no trend could be found. So he took a moving average of the PDI. Still no trend. Then he took the moving average of the moving average and voila, there was the trend he was looking for. This paper was not only published but became a seminal paper in hurricane research. Thayer Watkins (SJSU) did a detailed analysis of the Emanuel paper, Eschenbach-style. It would make a great WUWT post. Here is the link:
Oops, here is the link:
http://www.sjsu.edu/faculty/watkins/movingaveraging.htm
Wow. 1990 wants its web site back.
I’d never heard of PDI until a few days ago when reference to it appeared in a WUWT post. When I looked it up, I could not understand why it had been created when a metric, ACE, already existed.
Thanks for your explanation (and the link)! I have consigned PDI and Kerry Emanuel to the trash bin of my mind.
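For anyone curious how repeated smoothing can conjure a "trend" out of trendless data, here is a hedged sketch (my own invented data, not Emanuel’s; the window lengths are arbitrary). Smoothing roughly preserves whatever slope the noise happened to have while shrinking the scatter around it, so a naive trend test on the smoothed series is easily fooled:

```python
import numpy as np

rng = np.random.default_rng(3)

def moving_average(x, w):
    """Simple moving average of window w (valid region only)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def slope(x):
    """OLS slope per step -- a crude 'trend' estimate."""
    return np.polyfit(np.arange(len(x)), x, 1)[0]

# 60 "years" of a trendless, noisy index (think annual PDI-like values).
n = 60
index = rng.normal(size=n)

once = moving_average(index, 5)
twice = moving_average(once, 5)

# The scatter shrinks with each pass while the accidental slope survives,
# so the double-smoothed series *looks* like a clean trend.
for name, x in [("raw", index), ("smoothed", once), ("double-smoothed", twice)]:
    print(f"{name:>16}: slope={slope(x):+.3f}, scatter={np.std(x):.3f}")
```

The smoothing also induces strong autocorrelation, so any significance test that treats the smoothed points as independent is wildly overconfident.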
Here is what they say about statistical significance: “Statistical significance analysis. Given the high degree of serial correlation in the low-pass filtered time series, the significance of correlation between filtered NAO and F10.7 indices were assessed using a nonparametric random phase test [27]. This method preserves the spectrum and auto-correlation of the original data. In practice, we generate 1,000 synthetic random filtered NAO time series having the same power spectrum as the original one and we correlate each against the original F10.7 time series. The 1,000 correlation coefficients are used to construct a probability distribution of correlations. Regarding the composites, the significance level is estimated using a bootstrapping technique with replacement. The procedure is to select two random subsets from the original time series with the lengths equal to the two original composite subsamples. This procedure is repeated 1,000 times and a distribution of the differences is constructed. Finally, correlations and composite distributions are used to determine the likelihood of the derived signals arising by chance. One-tailed tests are used.”
A one-tailed test with the alpha level set to 0.1 is not very impressive. Could you count how many hypotheses they were testing with their procedure? It looked to me like only 1 significance test (on the lag, or phase), but it followed a lot of judgments and choices. And that is following a history of publications of many models on these data. I have sort of come to a point where, unless there are true out of sample data, it’s just another modeling experiment for which any honest calculation of a “p-value” is close to impossible. It goes into the large collection of such models. It might be interesting to revisit it in 40 years. Sounds anti-intellectual, I know. Your effort was informative though, and I appreciate it.
I had not previously been aware that the paper was open to the public. Thank you again for the link.
This 145-year study is limited by the quality of the proxies available.
For example, there is no reason why the F10.7 cm microwave index should track EUV irradiance, which has a much shorter wavelength. The justification for using it is that the proxy correlated well over a few relatively short periods of observation in the distant past.
See “Does the F10.7 index correctly describe solar EUV flux during the deep solar minimum of 2007–2009?” and “The ionosphere under extremely prolonged low solar activity.”
Willis,
I do not think that you are being at all fair to Thiéblemont et al. Science often proceeds in small increments, and this would seem to be a case of that.
You wrote: “The authors’ contention is that the sun acts to synchronize the timing of these swings to the timing of the solar fluctuations.”
Where do they claim that? What I found (last sentence of abstract and introduction) was a much weaker statement: “The synchronization is consistent with the downward propagation of the solar signal from the stratosphere to the surface.”
You wrote: “Why not start by analyzing the real Earth before moving on to ModelEarth?”
That is exactly what they did. The very first sentence of the introduction: “There is increasing evidence that variations in solar irradiance at different time scales are an important source of regional climate variability”. That is followed by a brief discussion with a whole list of references.
You wrote: “If you use a cyclical forcing as input to a climate model, do not be surprised if you find evidence of that cycle in the model’s output … it is to be expected, but it doesn’t mean anything about the real world.”
Well the first sentence of the abstract is: “Quasi-decadal variability in solar irradiance has been suggested to exert a substantial effect on Earth’s regional climate.”
The key word here is SUBSTANTIAL. The issue is not whether there is some effect (of course there must be some signal, however tiny), it is whether it might be large enough to matter and whether the claimed lag is reasonable. Those are not questions that can be answered without investigation.
You wrote: “The cross-correlation of a century’s worth of data shows that relationship between the sunspots and the DJF NAOI is not statistically significant at any lag, and it does not indicate any causal relationship between solar fluctuations and the North Atlantic Oscillation”
The paper cites a dozen or so peer-reviewed papers claiming the opposite. I have no idea if they are right or if you are right. But if you want to convince scientists that published results are wrong, you actually have to address the analyses that were published.
I am not saying there is nothing wrong with the paper. Given its extremely weak conclusion, they do seem to be over-hyping it. There is a lot of that going around, in all fields of science. But viewed as an investigation into the question “is this idea even plausible” it may well have some value. They never seem to actually come out and say that is what they are doing; that is another common failing of scientific papers, related to the over-hyping. And they never really make it clear that it is plausible; probably yet another symptom of the over-hyping.
But your criticism is off base and unfair.
Mike M. (period) September 18, 2015 at 8:58 pm
Seriously? Well, how about starting with the title of their study, which is “Solar forcing synchronizes decadal North Atlantic climate variability”.
Duh … gotta say, Mike, you are starting out very poorly when you say they didn’t make a claim that the sun acts to synchronize the timing of the NAOI when it is in the damn title …
No, they did NOT analyze the real Earth at all. They merely referred to other people who had done so, then went off to play with their models. Their analysis had nothing to do with the real Earth; it was 100% models …
Despite the fact that (as you point out) it can’t be answered without investigation, they did none. Instead, they just pulled out a model. I did the investigation that they didn’t do, and found nothing.
And the key word in “Quasi-decadal variability in solar irradiance has been suggested to exert a substantial effect on Earth’s regional climate.” is not “substantial”, it is “suggested”. Call me crazy, but “suggested” is not much of a scientific claim …
I can cite you hundreds of scientific papers making bogus claims, INCLUDING THIS ONE. And make no mistake, this one will be cited by the next fool that comes along. So the fact that they have cited other papers means nothing.
Dear heavens, what do you think I am doing in this post? I AM ADDRESSING THE ANALYSIS THAT THEY PUBLISHED! I can only do that one paper at a time, and I have done it over and over now dozens of times. I have looked at every paper that people have recommended to me as showing an influence of the tiny ~ 11-year solar fluctuations on the climate, and found nothing.
Now, as you say, you are obviously a newbie at this game, because you state that you don’t know if the “dozen” papers they cite are correct or not. So for your homework, how about you read the dozen papers, identify the one that you think is the strongest of them, and come back and tell us why you think it is right? Then I’ll be glad to analyze it myself. But I’m not running off to investigate the claims of every loon who thinks a change in total solar insolation of ± 0.04% (plus or minus FOUR HUNDREDTHS OF A PERCENT) over the sunspot cycle has an effect on the earth’s climate, whether his claims are peer-reviewed or not. There’s not enough time in three lifetimes to do that.
If you want to venture into the “is this idea even plausible” zone, please go ahead. But if you think that you can determine if an idea is plausible by running it through a climate model, then you are just as foolish as the authors are.
Say what? What is the “it” that you think is “plausible”, and what difference does “plausible” make in science? You seem to misunderstand what science is. Science is generally an attempt to determine what is actually true, not a quest to determine the limits of plausibility. “Plausible” means nothing to me, I’m interested in facts. The world is a big place, there are lots of surprises, almost anything is “plausible” in some scenario or another … so freakin’ what? Yes, it’s plausible that the tiny solar variations affect the climate … so freakin’ what? What we want to know is if they DO affect the climate, plausibility means nothing.
Mike, your objections are incredibly naive and uninformed. I encourage you to read the “dozen or so peer-reviewed papers” that you seem to put so much stock in before you make any more foolish statements. The number of bogus, unsupported claims made in the solar field is astounding. For example, the next paper down the pike is very likely to cite this farrago of nonsense that we are discussing, and some other person just as naive as you will say something to me like:
“But Willis, they cited the paper that showed that the sun synchronizes the NAOI, so they must be doing real authentic science, it was peer-reviewed” …
Peer-reviewed? Don’t make me laugh. These days, sadly, the peer-review process done by the journals is meaningless. Real peer-review takes place here at WUWT, and this piece has failed badly.
w.
PS—You might also want to take a look at my other posts on this subject before venturing into the thicket of solar claims:
Congenital Cyclomania Redux 2013-07-23
Well, I wasn’t going to mention this paper, but it seems to be getting some play in the blogosphere. Our friend Nicola Scafetta is back again, this time with a paper called “Solar and planetary oscillation control on climate change: hind-cast, forecast and a comparison with the CMIP5 GCMs”. He’s…
Cycles Without The Mania 2013-07-29
Are there cycles in the sun and its associated electromagnetic phenomena? Assuredly. What are the lengths of the cycles? Well, there’s the question. In the process of writing my recent post about cyclomania, I came across a very interesting paper entitled “Correlation Between the Sunspot Number, the Total Solar Irradiance,…
Sunspots and Sea Level 2014-01-21
I came across a curious graph and claim today in a peer-reviewed scientific paper. Here’s the graph relating sunspots and the change in sea level: And here is the claim about the graph: Sea level change and solar activity A stronger effect related to solar cycles is seen in Fig.…
Sunny Spots Along the Parana River 2014-01-25
In a comment on a recent post, I was pointed to a study making the following surprising claim: Here, we analyze the stream flow of one of the largest rivers in the world, the Parana ́ in southeastern South America. For the last century, we find a strong correlation with…
Usoskin Et Al. Discover A New Class of Sunspots 2014-02-22
There’s a new post up by Usoskin et al. entitled “Evidence for distinct modes of solar activity”. To their credit, they’ve archived their data, it’s available here. Figure 1 shows their reconstructed decadal averages of sunspot numbers for the last three thousand years, from their paper: Figure 1. The results…
Solar Periodicity 2014-04-10
I was pointed to a 2010 post by Dr. Roy Spencer over at his always interesting blog. In it, he says that he can show a relationship between total solar irradiance (TSI) and the HadCRUT3 global surface temperature anomalies. TSI is the strength of the sun’s energy at a specified distance…
The Tip of the Gleissberg 2014-05-17
A look at Gleissberg’s famous solar cycle reveals that it is constructed from some dubious signal analysis methods. This purported 80-year “Gleissberg cycle” in the sunspot numbers has excited much interest since Gleissberg’s original work. However, the claimed length of the cycle has varied widely.
The Effect of Gleissberg’s “Secular Smoothing” 2014-05-19
ABSTRACT: Slow Fourier Transform (SFT) periodograms reveal the strength of the cycles in the full sunspot dataset (n=314), in the sunspot cycle maxima data alone (n=28), and the sunspot cycle maxima after they have been “secularly smoothed” using the method of Gleissberg (n = 24). In all three datasets, there…
It’s The Evidence, Stupid! 2014-05-24
I hear a lot of folks give the following explanation for the vagaries of the climate, viz: It’s the sun, stupid. And in fact, when I first started looking at the climate I thought the very same thing. How could it not be the sun, I reasoned, since obviously that’s…
Sunspots and Sea Surface Temperature 2014-06-06
I thought I was done with sunspots … but as the well-known climate scientist Michael Corleone once remarked, “Just when I thought I was out … they pull me back in”. In this case Marcel Crok, the well-known Dutch climate writer, asked me if I’d seen the paper from Nir…
Maunder and Dalton Sunspot Minima 2014-06-23
In a recent interchange over at Joanne Nova’s always interesting blog, I’d said that the slow changes in the sun have little effect on temperature. Someone asked me, well, what about the cold temperatures during the Maunder and Dalton sunspot minima? And I thought … hey, what about them? I…
Splicing Clouds 2014-11-01
So once again, I have donned my Don Quijote armor and continued my quest for a ~11-year sunspot-related solar signal in some surface weather dataset. My plan for the quest has been simple. It is based on the fact that all of the phenomena commonly credited with affecting the temperature,…
Volcanoes and Sunspots 2015-02-09
I keep reading how sunspots are supposed to affect volcanoes. In the comments to my last post, Tides, Earthquakes, and Volcanoes, someone approvingly quoted a volcano researcher who had looked at eleven eruptions of a particular type and stated: …. Nine of the 11 events occurred during the solar inactive phase…
Early Sunspots and Volcanoes 2015-02-10
Well, as often happens I started out in one direction and then I got sidetractored … I wanted to respond to Michele Casati’s claim in the comments of my last post. His claim was that if we include the Maunder Minimum in the 1600’s, it’s clear that volcanoes with a…
Sunspots and Norwegian Child Mortality 2015-03-07
In January there was a study published by The Royal Society entitled “Solar activity at birth predicted infant survival and women’s fertility in historical Norway”, available here. It claimed that in Norway in the 1700s and 1800s the solar activity at birth affected a child’s survival chances. As you might imagine, this…
Willis,
“Mike, you are starting out very poorly when you say they didn’t make a claim that the sun acts to synchronize the timing of the NAOI when it is in the damn title”
It does not seem to be in the abstract or conclusions; that tells me that the title is click bait. So it is part of the authors’ excessive hype. That makes them guilty of bad behavior, not bad science. Such hype has become depressingly common in science (not just climate science). I should have noted this in my post, but I tend to just tune such misleading titles out.
“No, they did NOT analyze the real Earth at all. They merely referred to other people who had done so, then went off to play with their models.”
In the scientific literature, those are pretty much the same thing. If you include a repeat of prior analysis, you will be told to remove it, unless nobody notices.
“So the fact that they have cited other papers means nothing. ”
By that standard the scientific literature means nothing. Progress in research is incremental, so you have to rely on what has been published. That should not be done blindly, but it has to be done. The flood of poor research in the ever expanding literature is indeed a problem and the normal corrective mechanisms seem (to me at least) to be failing. But to just throw out the entire published literature is not a solution.
“You seem to misunderstand what science is. Science is generally an attempt to determine what is actually true, not a quest to determine the limits of plausibility.”
You are the one who misunderstands science. You cannot really determine what is true; you can only determine what is false and narrow down the limits of what may be true. Assessing plausibility is part of the narrowing-down process. It is a small step, but science is incremental.
“‘Plausible’ means nothing to me, I’m interested in facts.”
I am very surprised to hear you say that. I have read many of your articles with great interest. I don’t think I have seen you establish any facts. I have seen some very interesting investigations into what might or might not be plausible.
Perhaps there is a semantics issue here. I have considerable expertise in the area of chemical kinetics. We teach students that in comparing a proposed mechanism to experimental data, there are two possible results: either the mechanism is proven wrong or the mechanism is plausible.
And what all this boils down to … it’s the SUN, idjits.
Say what? I have no idea what this means.
w.
It seems to me that if the sun’s activity were to cause the NAOI, it would also cause a synchronous twin in the Pacific.
Also from your comment re quality control changes: “.. any adjustments need to be documented and explained, with the data saved at all steps. I believe this is the case with the Berkeley Earth data.”
It brings up the question: now that the record has been rejiggered to end the dreaded “pause”, what does this do to the likes of BEST? Will they make adjustments to jibe with T. Karl’s swift tailoring and the new paper also showing that there is no pause? Has Steve Mosher taken this on?
http://wattsupwiththat.com/2015/09/17/the-latest-head-in-the-sand-excuse-from-climate-science-the-global-warming-pause-never-happened/
Variance of monthly mean indices AO and AAO.
http://www.cpc.ncep.noaa.gov/products/precip/CWlink/daily_ao_index/history/variance.gif
Lots of valid points, Willis, but one fundamental misconception:
You are misunderstanding what you are seeing. The waxing and waning of the amplitude and the change of direction around 1945 are typical of what you get when you mix two close, purely harmonic functions. Try plotting a few examples and you will see what I mean.
This kind of pattern does not mean that the cycles are not there; it is a result of them slipping in and out of phase due to their different periods. Sometimes they cancel, sometimes they add.
I think you are confusing this with the fact that climate data (e.g. temps) are sometimes in phase with solar, leading people to conclude prematurely that there is a link, and then they close their eyes to periods when it is perfectly out of phase with solar or simply does not match.
Even this latter case could mean that there is some other periodic driver (one could think of circa 9-year lunar effects) mixing with solar in the way I describe above.
As you rightly say, climate is a complex system. Simplistic dismissal can be as wrong as simplistic attribution. Be careful.
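The beat pattern Mike describes is easy to reproduce. A minimal sketch with invented periods (the ~9.3-year second cycle is purely for illustration): summing two close pure harmonics yields an amplitude that slowly waxes and wanes at the beat period.

```python
import numpy as np

# Two pure harmonics with close periods (~11-yr solar-like and ~9.3-yr).
t = np.linspace(0, 200, 2400)                  # 200 "years", monthly steps
mix = np.sin(2 * np.pi * t / 11.0) + np.sin(2 * np.pi * t / 9.3)

# The sum waxes and wanes with a beat period of 1/(1/9.3 - 1/11) ~ 60 years.
beat_period = 1 / (1 / 9.3 - 1 / 11.0)

# Crude envelope estimate: rolling max of |mix| over a ~10-year window
# reveals the slow amplitude modulation.
w = 120
envelope = np.array([np.abs(mix[i:i + w]).max() for i in range(len(t) - w)])
print(f"beat period: {beat_period:.1f} years; "
      f"envelope swings between {envelope.min():.2f} and {envelope.max():.2f}")
```

So the waxing and waning by itself cannot distinguish "two real cycles beating" from "no cycle at all" — which is why the periodogram question below matters.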
Mike September 18, 2015 at 11:50 pm
I understand very well the nature of beat frequencies and of two cycles moving into and out of sync. But you misunderstand the nature of the periodogram. It doesn’t care if the cycles are in and out of frequency, it finds them regardless.
As a result, when a cycle disappears from a periodogram, it is NOT just moving into and out of synchronization with some other cycle as you imagine.
When it disappears from a periodogram, it means it is gone. Disappeared. No longer present. Pining from the fjords.
w.
This is an important and valid statistical point, but I think you are wrong in saying the authors looked and did not report negative results. This is a collective error of climatology. The NAO is often analysed with only winter months, probably because it “works better”. That justifies your point. I think it likely that the authors were just following this established practice, rather than having looked and not reported.
They should have looked if they want to do this. Your criticism has much wider application than just this paper. It is now an institutionalised error.
What your ARMA tests show is that random data contain periodicities that can be picked out with a bandpass filter and will add together to produce this sort pattern. The question is whether the pattern is synchronised with solar.
Their fig 2 is clearly drifting out of phase by the end of the their simulated data. That makes the case for the “phase-locking” they suggest very tenuous.
Mike: Even this latter case could be that there is some other periodic driver ( one could think of circa 9y lunar effects ) mixing with solar in the way I describe above.
Mike: This is a collective error of climatology. NAO is often analysed with only winter months probably because it “works better”.
These are the two basic reasons for what I called my “anti-intellectual” approach to trying to assess the “statistical significance” of results of new analyses of well-worked data sets. You purely and simply cannot discern how much data selection has gone on before, based on a probably adventitious co-occurrence spotted via graphical or some other analysis. And then you cannot tell how often what started as a straightforward hypothesis got reformulated into multiple mathematical models until at last one of them produced an apparently statistically significant improvement in fit. The procedure has an associated p-value of approximately 1, since a dedicated team is likely eventually to find good model fits to (selected) data even if the time series being investigated are statistically independent. Without out-of-sample data for a true test of model fit, the p-value is essentially a measure of the dedication and skill of the team, not a measure of whether the associated model fit statistic could have an unusual value under the null hypothesis. Such dedication and skill deserve respect, but the results ought not be deemed reliable indicators of what’s happening in the world.
You may be aware that studies have revealed a high proportion of non-reproducible results in medical, neurophysiological and psychological journals. They result from researchers being too casual in addressing problems such as these.
Mike wrote: “As you rightly say, climate is a complex system. Simplistic dismissal can be as wrong as simplistic attribution.”
Well said.
Mike M. (period) September 19, 2015 at 8:27 am
Oh, please. This is bad behavior on both of your parts. You are accusing me of “simplistic dismissal” without quoting or indicating what is “simplistic” about an analysis which includes a theoretical discussion of the effects of cyclical inputs on climate models, a cross-correlation-function (CCF) investigation, a Fourier analysis, a red-noise significance analysis, and a Granger causality test, along with much discussion of the results. I invite you both to stuff your accusations of a “simplistic dismissal” where the solar fluctuations can’t reach it, and next time to quote what you claim is simplistic.
w.
Dismissal of one misguided idea by appealing to another one is exactly what is being done here. Totally absent is any analytic comprehension of RANDOM signals with CONTINUOUS power spectra of various bandwidths, which manifest the waxing and waning of irregular wave-forms often seen in nature. Periodograms are simply not an adequate analysis tool for such geophysical signals.
1sky1 September 19, 2015 at 4:58 pm
Lemme see …
• No citations of any scientific papers, good or bad … check.
• No quotations to indicate what he is babbling about … check.
• Lots of CAPITAL LETTERS … check.
• Claims of knowledge of the “right way” to do it, but without any worked examples … check.
• Statements that other (unnamed) people are “misguided” … check.
Yep … 1sky1 is verifiably back again.
w.
That random signals have a different spectral structure than periodic signals, requiring different analysis techniques, is common knowledge among qualified signal analysts. Wholly oblivious to this, Willis proves incapable of uniquely identifying even a pair of pure sinusoids without leakage through the spectral window of the periodogram. If he spent half the time studying standard texts (e.g., Parzen, Bendat & Piersol, Koopmans, etc.) as he does carping on the lack of a rigidly formulaic approach in my critical comments, he would address the issues raised, instead of hiding his obvious failings behind a facade of patronizing ad hominems.
1sky1 September 21, 2015 at 2:18 pm
Heck, no need to be imaginative; this comment of yours is adequately covered by my previous reply …
Yep … 1sky1 is verifiably back again. Same style, same lack of content.
1sky1, I’ll start taking you seriously when you analyze the same data I analyzed and present your results. Where I come from, the cowboys would describe you as “all hat and no cattle.” The last time I asked you to do this, you whined that it would take you “months” to do the analysis, boo hoo, and you wimped out completely … what’s your excuse this time?
More to the point, perhaps, if what the study is analyzing (the North Atlantic Oscillation Index) is merely a “random signal”, then how about you apply your blinding brilliance to the study itself and tell us where they went wrong in analyzing a “random signal”? Because they assuredly don’t think the signal is merely “random”; they claim to find meaning in it.
And indeed, there may well be meaning in it … so why are you so sure that it is a “random signal”? Perhaps you could detail the tests you applied to the signal to determine that it is “random” … me, I think you’re just blowing smoke out of your fundamental orifice, so a list of all of the tests that you surely must have performed before baldly stating that the dataset is “random” would help your cause greatly.
I await your analysis … not holding my breath, though …
w.
When someone is manifestly oblivious of the fact that the true periodogram of a pair of pure sinusoids consists of only two lines, not a dozen as shown in Willis’ figure, and that the difference between random signals and periodic ones is the spectral density continuum and decaying acf, rather than line structure and periodic acf, it’s apparent that he’s the one who should worry about being taken “seriously” as a signal analyst. And when someone pretends that the Texas saying of “all hat and no cattle” hails from cowboys in Hawaii, it’s apparent that someone’s flinging cow-chips.
The whole idea that criticism is valid only if the entire problem is reworked by the critic is a hilarious evasion. Because of WUWT’s permissiveness in publishing quirky, personal conceits and totally inept conjectures, I’ll never allow my work to appear on these pages.
Mike September 19, 2015 at 12:10 am
Thanks for that clarification, Mike. Again I say, the waxing and waning and moving into and out of phase is immaterial to the periodogram. As you suggest, here is an example. Here are a couple of sine waves moving into and out of phase. You can see that they have the typical pattern that we are discussing, waxing and waning over time as they move into and out of synchronization.



And here is the periodogram of the same data:
As you can see, your claim is 100% wrong—the fact that the harmonic functions move into and out of phase has absolutely no effect on the periodogram. If a cycle is there in the data, it’s there in the periodogram, regardless of whether it is “waxing and waning” or not.
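For anyone who wants to reproduce this, here is a minimal sketch of the same demonstration (Python with numpy/scipy; the 8- and 10-year periods are illustrative stand-ins for the plotted sine waves):

```python
import numpy as np
from scipy.signal import periodogram

# Two sinusoids with periods of 8 and 10 "years" drift into and out of
# phase, so their sum waxes and wanes (beats).  The periodogram is
# unaffected: it still shows exactly two peaks, one per underlying cycle.
t = np.arange(0, 400, 0.25)              # 400 years at quarterly steps
y = np.sin(2 * np.pi * t / 8) + np.sin(2 * np.pi * t / 10)

freqs, power = periodogram(y, fs=4.0)    # fs = 4 samples per "year"

# Recover the periods of the two dominant peaks:
top_two = freqs[np.argsort(power)[-2:]]
periods = sorted((1.0 / top_two).tolist())
print(periods)                           # [8.0, 10.0]
```

Both underlying cycles come straight back out of the periodogram, beat pattern or no beat pattern.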
w.
If applying a moving average filter to random data produces an oscillatory output, then the filter is ill designed for the application.
Any electronic engineer will tell you that what you are doing here is the equivalent of a low pass filter, which can have various values of ‘Q’. Too much ‘Q’ will give you the illusion of a tone, where none exists in the input data.
This is yet another effect well known to engineers that is coming to light in other areas. The spatial filtering applied by the neural net software used for pattern recognition, which resulted in hallucinatory appearances of buildings and dogs’ faces in clouds and other photographs, is yet another example of emphasising a pattern where no pattern exists, simply because you have looked too hard for it (turned up the gain).
http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html
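Leo's point, classically known as the Slutsky-Yule effect, is easy to demonstrate (a sketch assuming only numpy; the window length and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.standard_normal(2000)

# An 11-point moving average is a crude low-pass filter.  Fed pure white
# noise, its output wanders slowly and quasi-periodically even though
# the input contains no cycle at all (the Slutsky-Yule effect).
w = 11
smoothed = np.convolve(noise, np.ones(w) / w, mode="valid")

def lag1(x):
    """Lag-1 autocorrelation: near 0 for white noise, near 1 for a smooth series."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

print(lag1(noise), lag1(smoothed))   # ~0 vs ~0.9
```

The smoothed series is strongly autocorrelated, which is precisely the slow, cycle-looking wandering the eye picks out of smoothed noise.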
Leo Smith September 19, 2015 at 12:23 am
Leo, this is why I ask people to quote what they object to. I have no idea what “moving average filter” you are referring to, or even who is the “you” that you are talking about.
w.
What is the MA in ARMA ??
Putting aside the fact that MA is a very bad filter to choose, I found your comment about negative MA being common in climate data interesting. It probably reflects long-term negative feedbacks dominating the system, i.e. random input, the autoregressive integrating nature of oceans, and persistent negative feedbacks constraining changes.
The use of F10.7 is perfectly justified, as it relates to changes in the UV and is not consistent with the number of spots.
Say what? Did you not see Figure 3 in the head post?
w.
The sunspot number and F10.7 differ from each other.
http://www.solen.info/solar/images/solar.png
For the last twenty years, EUV irradiance has diverged significantly from the F10.7 cm microwave index, the Mg II index, sunspots, and the solar quiet variation.
This effect is directly observed by increased space junk accumulation and less drag on satellites, for example.
Well you’re looking in the right way Willis, but I think your lack of experience in periodic analysis is leading you to jump to incorrect conclusions.
What your fig 6 shows is a strong 9y peak in the latter half and two smaller peaks at 8 and 10y in the full dataset. It is very possible that this is the same 9y periodicity being split into two peaks by it being modulated by another climate effect.
To find the actual periods indicated, you need to work in frequency rather than period, and it’s half the sum and half the difference. Converting back to periods, that gives 9y modulated by 80y. Due to the very crude resolution of your plot the latter period has a large error margin, but it’s interesting. This is quite likely the famous “circa 60y” periodicity.
I don’t like this ‘winter only’ idea at all. I think this should be detectable in the monthly data and that would give sufficient frequency resolution to be more accurate about the long term modulation.
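The half-sum/half-difference conversion is just arithmetic; a quick check in plain Python, using the 8y and 10y periods quoted:

```python
# Two spectral peaks at 8 and 10 years can equally be described as one
# carrier modulated by a slow envelope.  Working in frequency
# (cycles/year): the carrier is half the sum of the two frequencies,
# the envelope half the difference.
f_fast, f_slow = 1 / 8, 1 / 10

carrier_period = 1 / ((f_fast + f_slow) / 2)     # the "9 y" peak
envelope_period = 1 / ((f_fast - f_slow) / 2)    # the slow modulation

print(round(carrier_period, 2), round(envelope_period, 1))   # 8.89 80.0
```

So an 8y/10y doublet is numerically indistinguishable from a ~8.9y carrier modulated over 80 years, which is the ambiguity being argued about here.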
I think your analysis of the actual data shows there is more likely a lunar driver in NAO than a solar one.
Much of the false attribution to solar cycles is due to not recognising the lunar contribution: confounding it with solar when they work in unison, and ignoring the times when they oppose each other and the solar relationship does not work.
Interesting hypothesis. Got some graphs, data, and source code to show this?
Peter
Hi Peter,
This was discussed here a few years ago:
https://climategrog.wordpress.com/2013/03/01/61/
Also the BEST team published a paper on NH land surface temps showing a 9.1 ± 0.1 (IIRC) periodicity. Scafetta also reports a similar frequency and demonstrates from the JPL ephemeris that it is lunar.
IMO it is a failure to resolve a mix of the 8.85y lunar apsides and 18.3y/2, the eclipse cycle that indicates repetitions in earth-moon-sun geometries.
Mike,
I have also found a 9.4 +0.4/-0.3 years variation in the peak mean latitude anomaly of the Summer sub-tropical High pressure ridge over eastern Australia. I propose that the 9.4 year signal is compatible with a 9.1 year lunar tidal signal made up from a mixture of a 9.30 year Draconic + 8.85 year Anomalistic lunar tidal signal. I also propose a simple “resonance” model that assumes that if lunar tides play a role in influencing the mean latitude anomaly, it is most likely one where the tidal forces act in “resonance” with the changes caused by the far more dominant solar-driven seasonal cycles. With this type of model, it is not so much in what years do the lunar tides reach their maximum strength, but whether or not there are peaks in the strength of the lunar tides that re-occur at the same time within the annual seasonal cycle.
Wilson, I.R.G., Lunar Tides and the Long-Term Variation of the Peak Latitude Anomaly of the Summer Sub-Tropical High Pressure Ridge over Eastern Australia, The Open Atmospheric Science Journal, 2012, 6, 49-60. http://benthamopen.com/ABSTRACT/TOASCJ-6-49
ABSTRACT:
This study looks for evidence of a correlation between long-term changes in the lunar tidal forces and the interannual to decadal variability of the peak latitude anomaly of the summer (DJF) subtropical high pressure ridge over Eastern Australia (LSA) between 1860 and 2010. A simple “resonance” model is proposed that assumes that if lunar tides play a role in influencing LSA, it is most likely one where the tidal forces act in “resonance” with the changes caused by the far more dominant solar-driven seasonal cycles. With this type of model, it is not so much in what years do the lunar tides reach their maximum strength, but whether or not there are peaks in the strength of the lunar tides that re-occur at the same time within the annual seasonal cycle. The “resonance” model predicts that if the seasonal peak lunar tides have a measurable effect upon LSA then there should be significant oscillatory signals in LSA that vary in-phase with the 9.31 year draconic spring tides, the 8.85 year perigean spring tides, and the 3.80 year peak spring tides. This study identifies significant peaks in the spectrum of LSA at 9.4 (+0.4/-0.3) and 3.78 (± 0.06) tropical years. In addition, it shows that the 9.4 year signal is in-phase with the draconic spring tidal cycle, while the phase of the 3.8 year signal is retarded by one year compared to the 3.8 year peak spring tidal cycle.
If I get time, I’ll have a look at the full monthly dataset.
https://climatedataguide.ucar.edu/sites/default/files/nao_pc_monthly.txt
Mike,
I think that Greg Goodman makes the same point in his 2013 conclusions (cited by you at https://climategrog.wordpress.com/2013/03/01/61/). Greg was aware of my earlier work in 2012, where I effectively came to the same general conclusion.
In his 2013 conclusions Greg said:
“What is shown here is a much more significant, global effect. The presence of this strong 9 year cycle will confound attempts to detect the solar signal unless it is recognised. When the two are in phase (working together) the lunar effect will give an exaggerated impression of the scale of the solar signal and when they are out of phase the direct relationship between SSN and temperatures breaks down, leading many to conclude that any such linkage is erroneous or a matter of wishful thinking by less objective observers.
Such long term tidal or inertial effects can shift massive amounts of water and hence energy in and out of the tropics and polar regions. Complex interactions of these cycles with others, such as the variations in solar influence, create external inputs to the climate system, with periods of decadal and centennial length. It is essential to recognise and quantify these effects rather than making naive and unwarranted assumptions that any long term changes in climate are due to one simplistic cause such as the effects of trace gas like CO2.”
Mike: What your fig 6 shows is a strong 9y peak in the latter half and two smaller peaks at 8 and 10y in the full dataset. It is very possible that this is the same 9y periodicity being split into two peaks by it being modulated by another climate effect.
Maybe, but the burden of proof is on the claimant in science. How could you demonstrate, given that these data have been selected and that many models have already been tried, that you are not merely building on a non-reproducing adventitious pattern in a graph? To me, the answer is: only by clearly modeling the behavior over the next few decades, and then showing, as the data accumulate, that the model has been accurate.
Notice also that, on this hypothesis, the data have been recorded over at most one full period of the oscillation of the pair of processes. Unless the period was clearly hypothesized in advance of studying the data, it is unlikely that any of the statistical analyses produce reliable results.
I’ll end this note as I began: Maybe.
The actual planetary-wide lunar signal is clearly visible in the historical data:
Ian R. G. Wilson, Nikolay S. Sidorenkov
Long-Term Lunar Atmospheric Tides in the Southern Hemisphere
The Open Atmospheric Science Journal, 2013, 7: 51-76
http://benthamopen.com/ABSTRACT/TOASCJ-7-51
You might also want to look at the latest work by Sidorenkov, Bizouard and Zotov:
http://syrte.obspm.fr/jsr/journees2014/pdf/
Sidorenkov N.: The Chandler wobble of the poles and its amplitude modulation
and
Bizouard C., Zotov L., Sidorenkov N.: Lunar influence on equatorial atmospheric angular momentum
Mike September 19, 2015 at 1:00 am
Piss off, and come back when you are not starting out by making ad hominem remarks about my experience. Starting out by insulting the person you are discussing things with is counterproductive, foolish, and very revealing about how strong you think your case is. A man only throws mud when he is out of scientific ammunition …
And while you are at it, your claim about periodograms is simply wrong. See my explanation and graphs above.
w.
Look bud, if you spoke to my face like that you’d have a fight on your hands and you’d probably have more manners and sense anyway.
Doing so from the safety of your keyboard is pathetic and cowardly.
Neither do I think it is the kind of conduct expected of users of this site.
Mike September 19, 2015 at 5:18 pm
You walked in and started off your post by disparaging my level of experience. Yes, you are right, that is both pathetic and cowardly. So I slapped your face.
Now you want to whine because I slapped your face … don’t like getting your face slapped?
Then don’t walk in and open up by insulting me.
And your claim about periodograms is still wrong.
w.
Lacking experience is not something to be ashamed of or perceive as an insult and it was not intended to be one. That is in your own head.
Your vulgar, over the top reaction reveals you are hyper-sensitive to that sort of criticism which reveals your insecurity.
Despite having made it your pastime to rip into authors of published papers pointing out their ignorance and mistakes, often in the most unsubtle and impolite ways (the recent Shaviv incident, for example), you are unable to take even the most well-mannered criticism yourself.
You did not “slap my face”; mouthing off from the safety of your keyboard does not impress anyone. You just showed everyone what a jerk you can be, just in case they had not noticed your track record.
Rather than admit you over-reacted, you make a disingenuous attempt to justify yourself. Very impressive.
Mike September 20, 2015 at 2:28 pm
Mike, you claim that saying:
is just “well-mannered criticism” and not an insult of any kind … really?
Mike, I think your lack of experience in social situations is leading you to jump immediately to insulting people without even realizing you’re doing it …
Now … did you perceive that as an insult? Of course you did. Nobody likes to be told they lack experience.
And because you made that insult the very first sentence of your comment, it was clear that you were here to harass rather than to discuss.
Which is why I slapped your face. I won’t put up with that. You want to talk to me about climate science, we’ll talk as equals, regardless of our experience.
If you’re done being all hurt and want to return to the science, let me know. I’m glad to discuss whatever your vast experience has revealed to you that is denied to us less-experienced mortals.
w.
Mike, one more point. You mentioned the “Shaviv incident”.
I had wrongly said that Nir Shaviv and his co-authors were deceptive. This was incorrect, improper, unwarranted, unfounded, and over the top. When I realized what I had done, I apologized sincerely to Dr. Shaviv and his co-authors for what I had said. And I admit whenever asked that I was wrong.
Now … what more would you have me do, Mike? Seriously, what more can I do when I’ve gone over the line and caused some injury? All I know to do is to acknowledge my mistake, apologize to those I’ve hurt, learn from my errors, and move on. So that’s what I did.
Next, compare and contrast my actions in the “Shaviv incident” to your actions here …
w.
Just had a quick look at the full NAO monthly data from the same ‘PC’ EOF analysis, i.e. no seasonal selectivity.
Long period is close to 100y, nothing around 60y.
There is a notable peak at 18.2y, 9.4y and 5.8y, also a lesser peak at 11.88y (I don’t have +/- on those figures).
So prima facie evidence of lunar + solar influences mixed together. Like I said above, unless we recognise the lunar signal it will confound all efforts to detect a possible solar signal and lead to either false attribution or false negation.
I should emphasise that the long periodicity is very unreliable due to the length of the data. There is significant change on that time-scale, but the period should not be taken seriously. It cannot be reliably estimated by Fourier-type techniques. Neither is there any reason to regard it as periodic change in the sense that it is repetitive. It is simply a description of the change in the data: don’t extrapolate!!
In the full data I don’t see the kind of split peak around 8 and 10 that W found in the DJF subset. There may be a pattern in winter data that is not present all year round.
I should emphasize that since you know the 100 year cycle is “very unreliable”, your citing and pointing to it is evidence that you are willing to push ideas with no foundation …
w.
You are being typically dismissive without even thinking about what I wrote.
Let me try again:
If the data goes up a bit and down again over 80 years, there will be an 80y component in the Fourier analysis. That is part of the description of that limited segment of data. That does not mean that it is cyclic in the sense that it did the same thing prior to the data or will continue to do it in the future.
So I’m not “pushing ideas with no foundation”; I am explicitly pointing out that this should not be taken to indicate anything more, and added: don’t extrapolate!!
If you took time to read before firing off you would not look so foolish.
In the DJF series I find a peak at 86y which is close to the modulation of circa 80y indicated by the strong peaks around 8 and 10y. Since it has been reduced to annual data by the seasonal snipping process there is little resolution in the spectrum.
This may well be a result of the windowing function applied during the analysis. The data is 115y and gets scaled down to zero or ‘tapered’ at each end. This is effectively modulating the data!
As you said above, the long cycles like your claimed 86 year cycle are not reliable in the slightest. So your continued pushing of those cycles merely reveals that your agenda is more important to you than your science …
You need at least three, and preferably four, full cycles to say anything about what is going on, and often that is not long enough. For example, look at the differences in the cycles in the first and second halves of the data, each of which is 58 years in length. A strong 9-year cycle shows up in one … but not in the other, and 58 years is about 6 full cycles of nine-year data.
So your blathering on about 86 year cycles and 80 year cycles and such is totally meaningless.
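The split-half check being argued over here is easy to illustrate with synthetic red noise (a sketch; the AR(1) coefficient, length, and seed are my own arbitrary choices):

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)

# AR(1) "red noise": strongly autocorrelated, but with no true cycle.
n = 116                          # roughly the 115-odd annual values discussed
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.8 * x[i - 1] + rng.standard_normal()

def peak_period(series):
    """Period of the largest periodogram peak (zero-frequency bin excluded)."""
    f, p = periodogram(series)
    f, p = f[1:], p[1:]
    return 1.0 / f[np.argmax(p)]

# With no genuine periodicity present, each half still shows some
# "dominant" period, and the two halves will generally disagree,
# which is exactly what the split test is designed to catch.
print(peak_period(x[: n // 2]), peak_period(x[n // 2:]))
```

A "cycle" that survives in both halves is at least a candidate; one that appears in only one half is indistinguishable from this kind of noise.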
w.
There is a very clear peak at 22y in the DJF data. The authors may have masked out the real solar signal by their preconceived idea that it should be seen at 11y.
Mike, if you are indeed finding significant correlation with either the 22 year solar magnetic cycle or the interplay of the 11 year solar sunspot and the lunar cycle, a full blog post would be much appreciated.
That Willis’s various searches for a correlation between the solar cycles and earth’s weather always come up negative is always surprising to me. I would love to see a correlation that actually stands up upon serious analysis.
I am not sure why you find that surprising? The strongest signal we get from the Sun by far is the minimum-to-maximum difference in W/m2. No other parameter of the Sun comes close to that metric. Those who purport to say there is one always, ALWAYS, add a nefarious amplification process not clearly articulated (even CO2 needs an amplification device). As to the solar-sourced 0.1% change in W/m2, Earth’s intrinsic factors are capable of much greater effects on solar irradiance at any stage of a solar cycle before it strikes the ground or ocean surface. And since our only way of measuring solar effects on Earth’s climate is to measure the temperature anomaly of that climate, there is no way to extract the various intrinsic and extrinsic sources from that temperature data. The only thing we can currently do is take top-of-the-atmosphere measures of changes in TSI and translate that into W/m2, which can then be calculated to produce a change of 0.1 degree Celsius under clear-sky conditions. As to the degree-Celsius change in a column of ocean water from clear-sky TSI variation, all bets are off. I can’t see that size of small print without my cheaters.
Well, I’m still hopeful the Svensmark hypothesis will prove out. I assume you will grant me that the cosmic ray variation is significant and follows the sun’s 11-year cycle:
http://www.puk.ac.za/opencms/export/PUK/html/fakulteite/natuur/nm_data/data/SRU_Graph.jpg
Further, the effects on clouds found days after Forbush decreases seem convincing.
Thus I am both surprised and disappointed each time a supposed 11-year cycle in the climate data fails to prove out.
And where did you see me say that ?? I said there was a peak in the FT. I also said I did not like the idea of DJF averages.
Later I looked at the monthly data. There is no 22y peak. There is an 11.88y peak, but not a dominant one. There are also 18.2y and 9.4y peaks, which could be lunar but are not close enough to be strongly attributable.
I don’t usually bother with this kind of mangled, over-processed data: PC of EOF etc. This is the sort of game that climatology picks up from econometrics. They normally play havoc with the spectral content of the data so the whole thing becomes a waste of time.
Mike,
I have shown that there is a strong lunar tidal signal that is present in both the [southern] summer (DJF) mean sea level pressure (MSLP) and sea-surface temperature (SST) anomaly maps for the Southern Hemisphere between 1947 and 1994.
Ian R. G. Wilson, Nikolay S. Sidorenkov
Long-Term Lunar Atmospheric Tides in the Southern Hemisphere
The Open Atmospheric Science Journal, 2013, 7: 51-76
http://benthamopen.com/ABSTRACT/TOASCJ-7-51
The sole reason for picking the summer months is to [largely] remove the seasonal component of the changes in MSLP and SST that are caused by the Sun. In essence, we removed most of the annual variations produced by the Sun in the belief that it could potentially mask an underlying long-term lunar tidal signal.
This paper also shows that if you do not allow for:
a) a possible confounding between cyclical solar variations (e.g. at the annual time scale, as well as at the 11 & 22 year solar cycle) and cyclical lunar variations (e.g. at 8.85, 9.3, 13, 18, 18.6, 20.3, 31, 62 years)
b) possible modulation of inter-annual cyclical variations by much longer term climate cycles e.g. the 60 – 80 year cycle.
you might come to the erroneous conclusion made by Willis.
I have read your paper. Seems like pretty tortured data AND analysis techniques to me. And I am unimpressed with your comparison to temperatures from just two places on Earth. What did your peer review panel say, if you’ve a mind to share that info?
The sole reason for picking the summer months is to [largely] remove the seasonal component of the changes in MSLP and SST that are caused by the Sun.
The best way to remove an annual signal when looking for decadal and longer changes is a low-pass filter. Have you looked at the frequency characteristics of your 3:9 rectangular-wave chopper plus the averaging?
What happens if the magnitude of the annual signal you think you have removed increases? You will read that as global warming even if the all-year mean stays the same.
I’m familiar with your paper and the wave number=4 pattern is intriguing. It would be a shame if it was an artefact of poor processing. Does it still appear if you use a low pass filter like a gaussian to remove the annual cycle, rather than chopping out the summers only?
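The alternative being suggested can be tried on a toy monthly series (a sketch; the Gaussian width and the synthetic decadal component are my own illustrative choices, not anything from Wilson's data):

```python
import numpy as np

# Toy monthly series: an annual cycle plus a small decadal wiggle.
t = np.arange(600) / 12.0                   # 50 years of monthly data
annual = np.cos(2 * np.pi * t)
decadal = 0.3 * np.sin(2 * np.pi * t / 10.0)
series = annual + decadal

# Gaussian low-pass kernel.  With sigma = 9 months the annual line is
# suppressed by several orders of magnitude while the decadal component
# passes with only ~10% attenuation.
sigma = 9.0
k = np.arange(-36, 37)                      # truncate the kernel at 4 sigma
kernel = np.exp(-0.5 * (k / sigma) ** 2)
kernel /= kernel.sum()
smoothed = np.convolve(series, kernel, mode="same")

core = slice(72, -72)                       # stay clear of edge effects
print(np.abs(series[core] - decadal[core]).max(),     # ~1 before filtering
      np.abs(smoothed[core] - decadal[core]).max())   # small after
```

Unlike chopping out one season, the Gaussian has a smooth, well-understood frequency response, so there is no rectangular-window chopping to alias or modulate the record.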
Mike September 19, 2015 at 4:03 am
Cut the data in half and see if the cycle is still there … OOPS! In one half of the data it’s at 22 years, and in the other half, guess what? It’s at 18 years.
Did you not read the part in the head post about why I look at both halves of the data? It’s to prevent me from making foolish mistakes like your claim about the 22 year cycle …
w.
And having cut the 115y of data in half, you have about 58 data points of very noisy data in which you are seeking to confirm or refute the presence of a 22y periodicity. You are then probably distorting both with a taper function. How accurate do you think either of your results is? Are they contradictory, as you foolishly assume, or may the two results be consistent with each other within the accuracy of the technique?
I already said I did not like the DJF idea, and that period is closer to 25y when using all the monthly data, so it may not be solar at all. I was just saying that in applying a filter (apparently without looking at all the data first) they may have been blinkering what they were doing anyway. They would presumably have reported a 22y peak had they looked and seen it.
Lord help me.
Linear doesn’t mean “in a line”. It doesn’t even mean “unique” (which is how I suspect you’ve misunderstood the term), simple, or straightforward. Linear means that the form of the response does not depend upon the amplitude of the input.
This paper http://nldr.library.ucar.edu/repository/assets/osgc/OSGC-000-000-003-926.pdf doesn’t even use the word linear and figure 1 shows a non-linear response so I can’t figure out where your misunderstanding might be.
“semi-linear transformation” WTH? What is a semi-linear transformation? Is that from Hildebrand? Oh that’s right. You can’t understand Hildebrand. It’s just made up. nm.
“they must have tried the annual NAOI” How do you know this? You don’t. Just because you would make a mistake doesn’t mean others would have. In fact, come to think of it, if I had to bet, I’d say they had a good reason to do this and you just don’t understand. Then, in the habit of finding fault with your betters, you assume they, not you, did something wrong.
Skimming through the rest of the post, a new idea has come to mind. You’ve read a paper you didn’t understand that made some conclusions which you decided aren’t that good….. So what? There are lots of papers that strike me the same way. In fact, I’ve published some. Sometimes a researcher spends a few weeks/months going down a certain path or trying a certain analysis method or something and is unimpressed with the result. It’s not as if the conclusions are wrong per se. It’s just that the work didn’t pan out the way one hoped and is unimpressive. The result should still be published even if it might run afoul of some amateur who thinks every paper should end with……. E=mc^2!
Dinostratus wrote:
“You’ve read a paper you didn’t understand that made some conclusions which you decided aren’t that good….. So what? There are lots of papers that strike me the same way. In fact, I’ve published some. Sometimes a researcher spends a few weeks/months going down a certain path or trying a certain analysis method or something and is unimpressed with the result. It’s not as if the conclusions are wrong per se. It’s just that the work didn’t pan out the way one hoped and is unimpressive. The result should still be published even if it might run afoul of some amateur who thinks every paper should end with……. E=mc^2!”
Exactly right.
Dinostratus September 19, 2015 at 5:53 am
I certainly hope someone does.
Say what? “Linear” on my planet means that the input and the response graph in a straight line. Not sure what your meaning is.
Had you read my other links you might have noticed that Kiehl’s analysis was not entirely correct, and that when looked at properly the response is linear …
Generally, I use the term to mean a transformation that is “piecewise linear”, in other words, linear enough that we can treat it as linear in the region of interest. For example, the relationship between temperature and thermal radiation is actually T^4 … but in many climate analyses it is treated as though it were linear. I also use the term to mean “linear plus noise”, where the response doesn’t graph exactly on a straight line but it is clear that the underlying relationship has a basically linear form.
Any other terms you want explained, feel free to ask … I sometimes don’t use the most sciency terms because I’m writing for the educated layman and not the specialist, so I can see how you might get confused.
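As an aside, the "treat T^4 as linear in the region of interest" point is easy to quantify (a sketch; the ±10 K range about 288 K is my own illustrative choice):

```python
# Thermal radiation goes as T^4, yet over a narrow band of Earth-like
# temperatures a straight line (first-order Taylor expansion) fits it
# closely, which is all "piecewise linear" means here.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T0 = 288.0               # reference temperature, K

def radiation(T):
    return SIGMA * T ** 4

def linearized(T):
    """Tangent-line approximation about T0."""
    return radiation(T0) + 4.0 * SIGMA * T0 ** 3 * (T - T0)

# Worst relative error over +-10 K about the reference:
worst = max(abs(linearized(T0 + d) - radiation(T0 + d)) / radiation(T0 + d)
            for d in range(-10, 11))
print(worst)             # under 1%
```

Over that range the straight-line approximation is good to better than one percent, which is why many climate analyses get away with treating the T^4 relation as linear.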
Not from Hildebrand, and not from Hilfinger either … and since you haven’t either quoted or identified just what you are talking about, it appears you’re just making meaningless noises that are intended to impress the credulous.
It doesn’t matter either way. Since they are examining only a quarter of a dataset, they still need to use the Bonferroni correction, whether they are my “betters” or not …
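For reference, the Bonferroni adjustment itself is one line of arithmetic (a sketch; the 0.03 p-value below is a hypothetical illustration, not a number from the paper):

```python
# Examining four seasonal subsets (DJF, MAM, JJA, SON) and reporting the
# one that "works" means four chances to get lucky.  The Bonferroni
# correction divides the significance threshold by the number of looks.
alpha = 0.05
n_looks = 4                       # four seasons examined, one reported

corrected_alpha = alpha / n_looks
print(corrected_alpha)            # 0.0125

# Equivalently, inflate a reported p-value before judging it:
p_reported = 0.03                 # hypothetical raw p-value
p_adjusted = min(1.0, p_reported * n_looks)
print(p_adjusted)                 # no longer "significant" at the 0.05 level
```

A result that looks significant at the nominal 0.05 level can easily fail the corrected 0.0125 threshold once the other three "looks" are accounted for.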
So to sum up:
• You haven’t identified any problems with my CCF analysis, and
• You haven’t said there is anything wrong with my Fourier analysis, and
• You didn’t mention any difficulties with my Granger causality analysis, and
• You haven’t noted any issues with my use of red-noise to show problems with their work …
… so instead you are reduced to nit-picking about my terminology, misunderstanding when the Bonferroni correction needs to be used, and hurling various unsupported insults.
Classy, dino … real classy. Come back when you have a scientific objection, and we can discuss it. Until then, at least your fanboi Mike M. thinks you are wonderful, so there’s some consolation …
w.
“‘Linear’ on my planet means that the input and the response graph in a straight line.”
No.
No.
No.
No one gives a F what you mean on your planet. These are precise mathematical definitions. A linear response does NOT mean a straight line. It doesn’t. Your posts are just made up BS where you use your own words for misunderstood concepts and find fault with your betters.
I’d said:
You said:
Classy.

Google images for “linear relationship”
Notice a common thread about those images, Dino? Is it just a coincidence that the “input and the response graph in a straight line” in every one of them, just like I said? Here’s a definition from the web:
That’s what I said I meant by linear. Now, there assuredly may be other meanings, but that is the one I was using.
Finally, as I said above, it appears you can find nothing wrong with my scientific analysis, so you endlessly whine about my terminology … someday you might get back to the science. Let me know when that happens, OK?
w.
“it appears you can find nothing wrong with my scientific analysis”
Well you’re off by only two words this time. I can’t find your scientific analysis.
Thanks, Willis. Very incisive review.
Using modeled output as input I think guarantees inconsequential results.
There are 5 items that influence the NAO phase, Willis, not one.
-NAO:
AP INDEX LESS THAN 5
HIGH LATITUDE VOLCANIC ACTIVITY
QBO NEGATIVE
AMO IN WARM PHASE
LOWER THAN NORMAL ARCTIC SEA ICE
Reverse the items for a +NAO PHASE.
And you mix them in what proportion in your recipe? Does your recipe work going backwards?
This is a guide that, if you think about it, makes sense. Is it 100%? No it is not, but I think on balance it is correct.
The order of most important versus least important in my mind is as follows:
ap index 5 or less: low solar tends to affect ozone distribution in that the polar stratosphere warms more than the lower latitudes. Favors a -NAO.
qbo negative: when the ap index is less than 5, past data seems to equate to a very negative AO/NAO.
high latitude volcanic activity: because it tends to warm the polar stratosphere in contrast to lower latitudes.
amo warm phase: will give rise to more troughs and lower pressure in higher latitudes.
less arctic sea ice: similar to amo warm phase effects.
Salvatore, Pamela asked for numbers, not handwaving claims about “seems to equate” and “tends to effect” and the like …
w.
I may have used a more domestic phrase than I should have. But yes, your model please, in numbers. In a calculable algebraic expression.
Salvatore, as usual your comment is full of claims and extremely short on data …
w.
Willis. A half dozen more 11 year cycles for you.
http://notrickszone.com/2015/09/15/review-by-german-experts-show-that-even-the-11-year-solar-cycle-has-undeniable-impact-on-global-climate/
No thanks, J. I don’t do data dumps. I just looked at the first one and found it is only peripherally about the 11-year solar cycle, and there is no associated dataset. You’re just attempting to waste my time, and I don’t go on snipe hunts for any man.
If you will read through the papers and identify the one that you think is the strongest, and provide a link to the DATASET that they actually used, I’m happy to look at it. Most such papers use datasets that are not available, and without that, all that is left are their claims.
I can analyze data. I cannot analyze claims in the absence of data.
w.
Let me add that I am always highly suspicious of claims that the 11-year cycle can be found in some kind of proxy data from long ago … if that were so, why is it not evident in today’s actual climate data?
According to your citation, they’ve found evidence in tree rings from 1010 to 1110 AD … and foraminifera from 707 BC–AD 1979 … and lamination thicknesses during the Bølling-Allerød … and in an unknown “climate proxy record” during the Maunder Minimum …
Perhaps that impresses you, but if so, consider that the same phenomena that allegedly made the tree rings from 1010 to 1110 show an 11-year cycle still exist today … so why don’t we find the same variations in today’s tree rings all over the world?
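For concreteness, the test being proposed here is easy to sketch numerically. The following is a minimal illustration only, with entirely synthetic data (every number is invented; this is not real tree-ring data): build a 100-year annual series containing an 11-year sine plus noise, and check whether a plain periodogram recovers the period.

```python
import numpy as np

# Minimal sketch: can a plain periodogram pull an ~11-year cycle out of
# a noisy annual series? (Synthetic data; nothing here is from real proxies.)
rng = np.random.default_rng(0)
years = np.arange(1010, 1110)                    # 100 annual values
cycle = np.sin(2 * np.pi * years / 11.0)         # the putative solar signal
series = cycle + rng.normal(0.0, 1.0, years.size)

# Periodogram via the real FFT (sample spacing d = 1 year).
detrended = series - series.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)

# Strongest period, skipping the zero-frequency bin.
peak_period = 1.0 / freqs[1:][np.argmax(power[1:])]
```

With a signal this clean the peak lands near 11 years; the interesting question is how quickly that peak disappears as the noise grows, which is exactly why a cycle visible in one century of tree rings should also be visible in another.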
w.
Even the ~11-year cycle does not cooperate well with the solar cycle.
See “Anomalously low solar extreme-ultraviolet irradiance and thermospheric density during solar minimum”, and the failure of ionospheric total electron contents (TECs) as indicators of solar EUV changes during the last two solar minima.
You actually have to collect data near the edge of space to measure EUV irradiance, a task outside the scope of this 145-year study.
Willis, you say that they must have looked at the annual data and found it wanting, and at the other seasons and found them wanting as well. How do you know they did so? Is it not possible they started by looking at winter for theoretical reasons? You sound pretty sure you know what they did and why they did it, but why? I’ve seen a lot of teleconnection studies immediately jump to the winter data; I think everyone in this field just knows that is where the interesting things in the data are.
“Simulations by the NCAR Thermosphere-Ionosphere-Electrodynamics General Circulation Model are compared to thermospheric density measurements, yielding evidence that the primary cause of the low thermospheric density was the unusually low level of solar extreme-ultraviolet irradiance.”
Thermospheric density changes cause the formation of waves in the stratosphere that affect the pressure, in particular above the polar circles.
http://onlinelibrary.wiley.com/doi/10.1029/2010GL044468/abstract
Willis writes:
“However, it does mean that if such a causal relationship exists, it is likely to be extremely weak.”
I am not surprised, they should have been considering the solar wind coupling in the polar regions instead.
From the paper:
“Recently, analysis of long-term SLP and sea surface temperature observations suggest that the surface climate response to the 11-year solar cycle maximizes with a lag of a few years. These findings were supported by a large ensemble of short-term idealized coupled ocean-atmosphere model experiments, which indicate that the lagged response of the NAO arises from ocean–atmosphere coupling mechanisms. Atmospheric circulation changes associated with the NAO affect the underlying Atlantic Ocean by modulating surface air temperature, atmosphere-ocean heat fluxes, as well as mid-latitude wind stress. This induces a typical sea surface temperature tripolar pattern anomaly that can persist from one winter to the next and amplify the initial atmospheric solar signal over the subsequent years through positive feedbacks onto the atmosphere.”
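For what it’s worth, the “9–13-year band-pass filtered NAO index” of the paper’s Figure 2a is straightforward to emulate. Here is a minimal sketch of one way to do it (my own construction, using a crude spectral cut rather than whatever filter the authors actually used, and applied to a synthetic index, not the model output):

```python
import numpy as np

def bandpass_9_13(x, dt=1.0):
    """Crude spectral band-pass: keep only the Fourier components whose
    periods fall between 9 and 13 years, zeroing everything else."""
    X = np.fft.rfft(x - x.mean())
    f = np.fft.rfftfreq(x.size, d=dt)
    keep = (f >= 1.0 / 13.0) & (f <= 1.0 / 9.0)
    return np.fft.irfft(np.where(keep, X, 0.0), n=x.size)

# Synthetic stand-in for an NAO index: an 11-year oscillation buried in noise.
rng = np.random.default_rng(1)
t = np.arange(145)                                   # 145 years of annual values
naoi = np.sin(2 * np.pi * t / 11.0) + rng.normal(0.0, 1.0, t.size)
filtered = bandpass_9_13(naoi)
```

Note the caveat that goes with any such plot: after a 9–13-year band-pass, even pure noise comes out looking like a smooth decadal oscillation, so visual correspondence with the solar cycle is weak evidence by itself.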
The cold-season NAO became increasingly negative as the AMO warmed from the mid-1990s. That doesn’t strike me as a positive feedback:
http://www.cpc.ncep.noaa.gov/products/precip/CWlink/pna/season.JFM.nao.gif
http://downloads.bbc.co.uk/looknorthyorkslincs/ahlbeck_solar_activity.pdf
This makes my case Willis.
Look at figure 6 in the above pdf I sent. The data supports my thought that a negative QBO combined with very low solar activity equates to a negative AO, which is a very good proxy for the NAO.
Solar forcing synchronizes decadal North Atlantic climate variability is interesting, also.
Anyway, one paper citing it is “Role of the QBO in modulating the influence of the 11 year solar cycle on the atmosphere using constant forcings”, which doesn’t have a pay barrier when the EPDF is opened on my computer (I have been a paying customer recently, so others may not have the same experience, since it’s not marked for open access).
The polar stratospheric winds blow eastward during solar maximums and blow westward during solar minimums, affecting temperature, ozone level, and so on.
Mr. Eschenbach–
(If you read this at all), I appreciate, and am in fact stunned at, how, time after time, you take on all comers here in your posts. It gives me the image of a chess grandmaster playing an exhibition of simultaneous chess, but with higher stakes. I don’t know how you do it. I appreciate your communication skills, by the way, and what must be your courage in putting yourself “out there”.
Thanks, William, much appreciated. Regarding your question of how I do it, I do it the old-fashioned way. I put in lots of hard work and lots of time, and I re-read and edit my posts and comments extensively before publishing.
One recent change for me is that following my lifelong precept of “Retire Early … And Often”, I’m retired again, which gives me more time. But I’ve done the same thing for the last 550+ posts that I’ve put up at WUWT, through a variety of jobs, living both in the US and overseas. My problem is that I’m endlessly fascinated by the climate, and there is always so much to learn, that whatever spare time I have is totally consumed.
As to putting myself “out there”, I don’t like being shown to be wrong any more than the next man. Paradoxically, however, being shown wrong is the most important thing that can happen to me on the web—it can and does save me literally months of wasted work going down a wrong path.
Anyhow, thanks for your good thoughts. They do help to balance out the abuse that I take, abuse that I occasionally deserve but which mostly is just unpleasant, untrue mud-slinging.
Regards,
w.
In a very different approach to studying relationships between solar and terrestrial cycles, I have taken advantage of the exceptionally long period of low solar activity (SSN<10) between solar cycles 23-24 (www.ocean-sci-discuss.net/12/103/2015/ doi:10.5194/osd-12-103-2015: P E Binns, Atmosphere-ocean interactions in the Greenland Sea during solar cycles 23-24, 2002-2011). Unusually, the main parameter measured was the day-to-day variability in the Sea Surface Temperature ‘field’; I also treated the 3409 SST ‘fields’ as objects and classified them using Cluster Analysis. Results:
1) A statistically significant difference in day-to-day variability during the period with SSN<10 (using both parametric and non-parametric tests).
2) During the transition from summer to winter, there are systematic, inter-annual changes in variability, which relate to the level of solar activity.
3) The ‘topography’ of the late summer SST fields exhibit symmetry about the years of low solar activity.
Detailed examination of infra-red satellite images showed that the day-to-day variability of the SST field was strongly (but not exclusively) associated with the passage of weather systems. [Yes, this is semi-quantitative, unlike pressure-tracking algorithms, but you catch and record a lot more detail.] I found some correlation with the daily NAO index, but only in winter months and during the period of lowest solar activity (r=0.54).
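For readers unfamiliar with the r=0.54 figure quoted above, it is a plain Pearson correlation over the matched daily series. A minimal sketch of the computation follows; the data here are invented stand-ins (the study’s actual SST-variability and NAO numbers are not reproduced):

```python
import numpy as np

def pearson_r(a, b):
    """Plain Pearson correlation coefficient between two equal-length series."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Invented daily winter series (90 days) with a built-in partial relationship.
rng = np.random.default_rng(2)
daily_nao = rng.normal(0.0, 1.0, 90)
sst_variability = 0.6 * daily_nao + rng.normal(0.0, 0.8, 90)
r = pearson_r(daily_nao, sst_variability)
```

An r of about 0.54 over a winter of daily values means roughly 29% of the variance is shared, which is why the author describes it as "some correlation" rather than a strong one.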
A mechanism for this apparent relationship between solar activity and surface weather? There is a substantial mainstream literature (summarised in the paper) relating solar activity to the North Atlantic climate and the NAO. The proposed mechanism involves the solar ultra-violet band (apparently much stronger than previously thought) acting on the stratosphere and the effects propagating down into the troposphere.
The initial reaction to this paper was very positive and it was accepted at ‘Discussion’ level. Subsequent refereeing was not positive and stimulated a detailed technical exchange, only one comment of which was posted.
Mr Eschenbach,
First off, thank you for posting (and defending and, when warranted, correcting) your ideas here in public. I think that used to be called “science.”
I am not a cyclomaniac, but I think I know where a signal might hide: the local onset time of clouds in the tropics. If I understand your thermostat hypothesis correctly, it predicts that any trend that would tend to warm the surface would be counteracted by the clouds forming a bit earlier in the day.
As to where one could find a long enough dataset that includes cloud formation times, I’ve no idea. But someone here might.
Thanks again for the years of enlightening reading.
A W
Thanks, Andrew. See here and here for discussion of that very topic …
w.
Willis,
I see I engaged in excessive brevity. I am reasonably certain I’ve digested all your posts on the thermostat hypothesis. FWIW, I’ve not seen anything else that explains what data we’ve got any better.
My intent was to suggest targeting for your cyclomaniac Whack-a-Mole hammer. There is an implication of the thermostat hypothesis that I had missed: To the extent that there is a solar-cycle influence on temperature, it may show up _only_ in the tropical cloud onset time. It may be that the thermostat corrects for the putative solar influence so completely that only cloud formation time bears the imprint.
It seems to me that the data may well be in the buoy data you’ve used in the past to examine cloud vs temperature relationships (e.g., midnight temperature vs cloud formation time), but I’m not sure how far back it goes.
So, back to brevity: If there is an 11 year signal, I’d expect to see it in the local time of cloud formation.
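That prediction is at least testable in principle. As a toy illustration only (all numbers invented; no real buoy or cloud data), here is what “the 11-year signal shows up in cloud-onset time but not in temperature” would look like numerically:

```python
import numpy as np

# Toy illustration (all numbers invented): if the tropical thermostat fully
# compensates the solar-cycle forcing, an 11-year signal should appear in
# cloud-onset time while surface temperature stays flat.
rng = np.random.default_rng(3)
years = np.arange(1980, 2024)
solar = np.sin(2 * np.pi * (years - 1980) / 11.0)            # stand-in sunspot cycle
onset_minutes = -6.0 * solar + rng.normal(0.0, 3.0, years.size)  # clouds form earlier at solar max
temperature = rng.normal(0.0, 0.1, years.size)               # thermostat leaves no trace here

r_onset = float(np.corrcoef(solar, onset_minutes)[0, 1])     # strongly negative
r_temp = float(np.corrcoef(solar, temperature)[0, 1])        # near zero
```

Under this (assumed) setup the solar imprint is recoverable only from the onset-time series, which is the point: a thermostat that works perfectly would hide the cycle from the temperature record.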
Thanks again for the free ice cream, and enjoy this iteration of retirement,
A W