Guest Post by Willis Eschenbach
Anthony recently highlighted a new study which purports to find that the North Atlantic Oscillation (NAO) is synchronized to the fluctuations in solar activity. The study is entitled “Solar forcing synchronizes decadal North Atlantic climate variability”. The “North Atlantic Oscillation” (NAO) refers to the phenomenon that the temperatures (and hence the air pressures) of the northern and southern regions of the North Atlantic oscillate back and forth in opposition to each other, with first the northern part and then the southern part being warmer (lower pressure) and then cooler (higher pressure) than average. The relative swings are measured by an index called the North Atlantic Oscillation Index (NAOI). The authors’ contention is that the sun acts to synchronize the timing of these swings to the timing of the solar fluctuations.
Their money graph is their Figure 2:
Figure 1. Figure 2a from the study, showing the purported correspondence between solar variations (gray shaded areas at bottom) and the North Atlantic Oscillation Index (NAOI). Original Caption: (a) Time series of 9–13-year band-pass filtered NAO index for the NO_SOL [no solar input] (solid thin) and SOL [solar input] (solid thick) experiments, and the F10.7 cm solar radio flux (dashed black). Red and blue dots define the indices used for NAO-based composite differences at lag 0 (see the Methods section). For each solar cycle, maximum are marked by vertical solid lines.
From their figure, it is immediately apparent that they are NOT looking at the real world. They are not talking about the Earth. They are not discussing the actual North Atlantic Oscillation Index nor the actual f10.7 index. Instead, their figures are for ModelEarth exclusively. As the authors state but do not over-emphasize, neither the inputs (“F10.7”) to the computer model nor the outputs of the computer model (“Filtered NAOI”) are real—they are figments of either the modelers’ or the model’s imaginations, understandings, and misapprehensions …
The confusion is exacerbated by the all-too-frequent computer modelers’ misuse of the names of real observations (e.g. “NAOI”) to refer to what is not the NAOI at all, but is only the output of a climate model. Be clear that I am not accusing anyone of deception. I am saying that the usual terminology style of the modelers makes little distinction between real and modeled elements, with both often being called by the same name, and this mis-labeling does not further communication.
This brings me to my first objection to this study, which is not to the use of climate models per se. Such models have some uses. The problem is more subtle than that. The difficulty is that the outputs of climate models, including the model used in this study, are known to be linear or semi-linear transformations of the inputs to those climate models. See e.g. Kiehl's seminal work, Twentieth century climate model response and sensitivity, as well as my posts here and here.
As a result, we should not be surprised that if we include solar forcings as inputs to a climate model, we will find various echoes of the solar forcing in the model results … but anyone who thinks that these cyclical results necessarily mean something about the real world is sadly mistaken. All that such a result means is that climate models, despite their apparent complexity, function as semi-linear transformation machines that mechanically grind the input up and turn it into output, and that if you have cyclical input, you’ll be quite likely to get cyclical output … but only in ModelEarth. The real Earth is nowhere near that linear or that simple.
My second objection to their study is, why on earth would you use a climate model with made-up “solar forcing” to obtain modeled “Filtered NAOI” results when we have perfectly good observational data for both the solar variations and the NAO Index??? Why not start by analyzing the real Earth before moving on to ModelEarth? The Hurrell principal component NAOI observational dataset since 1899 is shown in Figure 2a. I’ve used the principal component NAO Index rather than the station index because the PC index is used by the authors of the study.
Figure 2a. Hurrell principal component based North Atlantic Oscillation Index. Red line shows the same data with a 9-13-year bandpass filter applied. DATA SOURCE
Here you can see the importance of using a longer record. Their results shown in Figure 1 above start in 1960, a time of relative strength in the 9-13-year band (red line above). But for the sixty years before that, there was little strength in the same 9-13-year band. This kind of appearance and disappearance of apparent cycles, which is quite common in climate datasets, indicates that they do not represent a real persisting underlying cycle.
Which brings me to my next objection: comparing a variable 11-year solar cycle to a 9-13-year bandpass filtered NAOI dataset seemed to me likely to produce patterns that look significant when they are not significant at all. In other words, from looking at the data I thought that similarly 9-13-year bandpassed red noise would show much the same type of pattern in the 9-13-year band.
To test this, I used simple “ARMA” red noise. ARMA stands for “Auto-Regressive, Moving Average”. I first calculated the lag-1 AR and MA components of the DJF NAOI data. These turn out to be AR ≈ 0.4, and MA ≈ – 0.2. This combination of a positive AR value and a negative MA value is quite common in climate datasets.
Then I generated random ARMA “pseudo-data” of the same length as the DJF NAOI data (116 years), and applied the 9-13-year bandpass filter to each pseudo-dataset. Figure 2b shows four typical random red-noise pseudo-data results:
As I suspected, red noise datasets of the same ARMA structure as the DJF NAOI data generally show a strong signal in the 9-13-year range. This signal typically varies in strength across the length of the pseudo-datasets. However, given that these are random red-noise datasets, it is obvious that such strong signals in the 9-13-year range are meaningless.
So the signal seen in the actual DJF NAOI data is by no means unusual … and in truth, well … I fear to admit that I've snuck the actual DJF NAOI in as the lower left panel in Figure 2b … bad, bad dog. But comparing that with the upper left panel of the same Figure illustrates my point quite clearly: the apparent signal in the 9-13-year range of the real data is most likely nothing but an artifact, because it is indistinguishable from the red-noise results.
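For readers who want to try this at home, the pseudo-data generation can be sketched in a few lines. This is a minimal Python sketch of an ARMA(1,1) red-noise generator using the AR and MA values estimated above; the function name and seed are my own choices, and the 9-13-year bandpass filtering step is omitted for brevity:

```python
import random

def arma_pseudodata(n, ar=0.4, ma=-0.2, seed=42):
    """Generate ARMA(1,1) red-noise pseudo-data of length n:

        x[t] = ar * x[t-1] + e[t] + ma * e[t-1]

    with Gaussian innovations e.  The default ar and ma values are the
    ones estimated from the DJF NAOI data; the seed is arbitrary."""
    rng = random.Random(seed)
    x, x_prev, e_prev = [], 0.0, 0.0
    for _ in range(n):
        e = rng.gauss(0.0, 1.0)
        x_t = ar * x_prev + e + ma * e_prev
        x.append(x_t)
        x_prev, e_prev = x_t, e
    return x

# One pseudo-dataset the same length as the 116-year DJF NAOI record
series = arma_pseudodata(116)
```

Running this repeatedly with different seeds, then applying the same bandpass filter, is all it takes to generate panels like those in Figure 2b.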
My next objection to the study is that they have used the “f10.7” solar index as a measure of the sun’s activity. This is the strength of the solar radio flux at the 10.7 cm wavelength, and it is a perfectly valid observational measure to use. However, in both phase and amplitude, the f10.7 index runs right in lock-step with the sunspot numbers. Here’s NASA’s view of a half-century of both datasets:
Figure 3. Monthly sunspot numbers (upper panel) and monthly f10.7 cm radio wave flux index (lower panel). SOURCE
As you can see, using one or the other makes no practical difference at the level of analysis done by the authors. The difficulty is that the f10.7 data is short, whereas we have good sunspot data much further back in time than we have f10.7 data … so why not use the sunspot data?
My next objection to the study is that it seems the authors haven’t heard of Bonferroni and his correction. If you flip a group of 8 coins once and they come up all heads, that’s very unusual. But if you throw the same group of 8 coins a hundred times, somewhere in there you’ll likely come up with eight heads.
In other words, how unusual something is depends on how many places you’ve looked for it. If you look long enough for even the rarest relationship, you’ll likely find it … but that does not mean that the find is statistically significant.
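The coin-flip arithmetic is easy to check. A minimal Python sketch:

```python
# Chance of all-heads on one throw of 8 fair coins
p_single = 0.5 ** 8                         # 1/256, about 0.004

# Chance of seeing all-heads at least once in 100 throws
p_at_least_once = 1 - (1 - p_single) ** 100

print(round(p_single, 4))         # 0.0039
print(round(p_at_least_once, 2))  # 0.32
```

So an outcome with odds of about 1 in 256 on a single try becomes roughly a one-in-three occurrence over a hundred tries.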
In this case, the problem is that they are only using the winter-time (DJF) value of the NAOI. To get to that point, however, they must have tried the annual NAOI, as well as the other seasons, and found them wanting. If the other NAOI results were statistically significant and thus interesting, they would have reported them … but they didn’t. This means that they’ve looked in five places to get their results: the annual data as well as the four seasons individually. And this in turn means that to claim significance for their find, they need to show something which is rarer than if they had just looked in one place.
The “Bonferroni correction” is a rough-and-ready way to calculate the effect of looking in more places or conducting more trials. The correction says that whatever p-value you consider significant, say 0.05, you need to divide that p-value by the number of trials to give the equivalent p-value needed for true significance. So if you have 5 trials, or five places you’ve looked, or five flips of 8 coins, at that point to claim statistical significance you need to find something significant at the 0.05 / 5 level, which is a p-value of less than 0.01 … and in climate, that’s a hard ask.
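In code form the correction is a single division; the sketch below uses the five looks described above (the variable names are mine):

```python
alpha = 0.05   # the usual significance threshold
n_looks = 5    # the annual data plus the four individual seasons

# Bonferroni-corrected threshold a single result must beat
corrected_alpha = alpha / n_looks   # 0.05 / 5 = 0.01
print(round(corrected_alpha, 2))    # 0.01
```

Any individual result with a p-value above that corrected threshold cannot honestly be called significant.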
So those are my objections to the way they’ve gone about trying to answer the question.
Let me move on from that to how I’d analyze the data. Here’s how I’d go about answering the same question, which was, is there a solar component to the DJF North Atlantic Oscillation?
We can investigate this in a few ways. One is by the use of “cross-correlation”. This looks at the correlation of the two datasets (solar fluctuations and NAO Index) at a variety of lags.
Figure 4. Cross-correlation, NAO index and sunspots. NAO index data source as above. Sunspot data source.
As you can see, the maximum short-lag positive correlation is with the NAO data lagging the sunspots by about 2-3 years. But the fact that the absolute correlation is largest with the NAO data leading the sunspots (negative values of lag) by two years is a huge red flag, because it is not possible that the NAO is influencing the sun. This indicates we’re not looking at a real causal relationship. Another problem is the small correlation values. The r^2 of the two-year-lagged data is only 0.03, and the p-value is 0.07 (not significant). And this is without accounting for the cyclical nature of the sunspot data, which will show alternating positive and negative correlations of the type shown above even with random “red-noise” data. Taken in combination, these indicate that there is very little relationship of any kind between the two datasets, causal or otherwise.
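For reference, the lagged cross-correlation of the kind plotted in Figure 4 is simple to compute directly. Here is a minimal Python sketch, standard library only, with function names of my own choosing:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

def cross_correlation(x, y, max_lag):
    """Correlation of x[t] with y[t + lag] for each lag in [-max_lag, max_lag].

    A positive lag means y lags x (e.g. the NAOI lagging the sunspots);
    a negative lag means y leads x."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            out[lag] = pearson(x[:len(x) - lag], y[lag:])
        else:
            out[lag] = pearson(x[-lag:], y[:len(y) + lag])
    return out
```

Called as `cross_correlation(sunspots, naoi, 10)`, this returns the curve shown in the figure: one correlation value per lag.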
Next, we can search for any relationship between the solar cycle and the DJF NAOI using Fourier analysis. To begin with, here is the periodogram of the annual sunspot data. As is my habit, I first calculate the periodogram of the full dataset. Then I divide the dataset in two, and calculate the periodograms of the two halves individually. This lets me see if the cycles are present in both halves of the data, to help establish if they are real or are only transient fluctuations. Here is that result.
As you can see, the three periodograms are quite similar, showing that we are looking at a real, persistent (albeit variable) cycle in the sunspot data. This is true even for the ~ 5-year cycle, as it shows up in all three analyses.
However, the situation is very different with the DJF NAOI data.
Unlike the sunspot data, the three NAOI periodograms are all very different. There are no cycles common to all three. We can also see the lack of strength in the 9-13-year region in the first half compared with the second half. All of this is another clear indication that there is no strong persistent cycle in the NAOI data in the 9-13-year range, whether of solar or any other origin. In other words, the DJF NAOI is NOT synchronized to the solar cycle as the authors claim.
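The split-halves periodogram check described above is straightforward to reproduce. A minimal Python sketch using a direct DFT (slow but dependency-free; the function names are mine):

```python
import cmath
import math

def periodogram(x):
    """Amplitude periodogram via a direct DFT.  Returns a dict mapping
    period (in time steps) to the amplitude of the cycle at that period."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    out = {}
    for k in range(1, n // 2):
        coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        out[n / k] = 2 * abs(coeff) / n  # amplitude of a sine at this period
    return out

def split_halves(x):
    """Periodograms of the full series and of each half.  A cycle that is
    real and persistent should show up in all three."""
    half = len(x) // 2
    return periodogram(x), periodogram(x[:half]), periodogram(x[half:])
```

Fed a genuine persistent cycle (say, a pure 11-year sine wave), all three periodograms peak at the same period, as the sunspot data does; fed the NAOI data, the three peaks wander all over the place.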
Finally, we can investigate their claim that the variations in solar input are driving the DJF NAOI into synchronicity by looking at what is called “Granger causality”. An occurrence “A” is said to “Granger-cause” occurrence “B” if we can predict B better by using the history of both A and B than we can predict B by using just the history of B alone. Here is the Granger test for the sunspots and the DJF NAOI:
> grangertest(Sunspots,DJFNAOI)
Granger causality test

Model 1: DJFNAOI ~ Lags(DJFNAOI, 1:1) + Lags(Sunspots, 1:1)
Model 2: DJFNAOI ~ Lags(DJFNAOI, 1:1)
  Res.Df Df      F Pr(>F)
1    112
2    113 -1 0.8749 0.3516
The Granger causality test looks at two models. One model (Model 2) tries to predict the DJF NAOI by looking just at the previous year’s DJF NAOI. The other model (Model 1) includes the previous year’s sunspot information as an additional independent variable, to see if the sunspot information helps to predict the DJF NAOI.
The result of the Granger test (p-value of 0.35) does not allow us to reject the null hypothesis, which is that there is no causal relationship between sunspots and the NAO Index. It shows that adding solar fluctuation data does not improve the predictability of the NAOI. And the same is true if we include more years of historical solar data as independent variables, e.g.:
> grangertest(Sunspots,DJFNAOI,order = 2)
Granger causality test

Model 1: DJFNAOI ~ Lags(DJFNAOI, 1:2) + Lags(Sunspots, 1:2)
Model 2: DJFNAOI ~ Lags(DJFNAOI, 1:2)
  Res.Df Df      F Pr(>F)
1    109
2    111 -2 0.4319 0.6504
This is even worse, with a p-value of 0.65. The solar fluctuation data simply doesn’t help in predicting the future NAOI, so again we cannot reject the null hypothesis that there is no causal relationship between solar fluctuations and the North Atlantic Oscillation.
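For readers who prefer to see the mechanics, the restricted-vs-full model comparison that grangertest performs can be sketched from scratch. This is my own minimal Python implementation, not the lmtest code; it returns only the F statistic, not the p-value, and the demo data at the bottom is synthetic:

```python
import random

def ols_rss(y, X):
    """Residual sum of squares of an OLS fit of y on X (X includes the
    intercept column).  Solves the normal equations (X'X) b = X'y by
    Gaussian elimination with partial pivoting."""
    n, k = len(y), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return sum((y[i] - sum(X[i][c] * beta[c] for c in range(k))) ** 2 for i in range(n))

def granger_f(y, x, order=1):
    """F statistic for 'x Granger-causes y' at the given lag order.

    Restricted model: y[t] ~ its own lags.  Full model: adds lags of x.
    A large F means the lags of x improve the prediction of y."""
    n = len(y)
    rows = range(order, n)
    yy = [y[t] for t in rows]
    X_r = [[1.0] + [y[t - j] for j in range(1, order + 1)] for t in rows]
    X_f = [row + [x[t - j] for j in range(1, order + 1)] for row, t in zip(X_r, rows)]
    rss_r, rss_f = ols_rss(yy, X_r), ols_rss(yy, X_f)
    df_full = len(yy) - (1 + 2 * order)
    return ((rss_r - rss_f) / order) / (rss_f / df_full)

# Demo: y is driven by the previous value of x, so x should Granger-cause y
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]
y = [0.0] + [0.8 * x[t - 1] + 0.1 * rng.gauss(0, 1) for t in range(1, 200)]
```

With this synthetic data, `granger_f(y, x)` comes out very large (x's history greatly improves the prediction of y) while `granger_f(x, y)` does not, which is exactly the asymmetry the test is designed to detect, and exactly what the sunspot/NAOI data fails to show.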
To summarize:
• If you use a cyclical forcing as input to a climate model, do not be surprised if you find evidence of that cycle in the model’s output … it is to be expected, but it doesn’t mean anything about the real world.
• The cross-correlation of a century’s worth of data shows that the relationship between the sunspots and the DJF NAOI is not statistically significant at any lag, and it does not indicate any causal relationship between solar fluctuations and the North Atlantic Oscillation.
• The periodogram of the NAOI does not reveal any consistent cycles, whether from solar fluctuations or any other source.
• The Granger causality test does not allow us to reject the null hypothesis that there is no causal relationship between solar fluctuations and the North Atlantic Oscillation.
• Red-noise pseudodata shows much the same strong signal in the 9-13-year range as is shown by the DJF NAOI data.
And finally … does all of this show that there is no causal relationship between solar fluctuations and the DJF NAO?
Nope. You can never do that. You can’t demonstrate that something doesn’t exist.
However, it does mean that if such a causal relationship exists, it is likely to be extremely weak.
Regards to all,
My Customary Request: If you disagree with someone, please quote the exact words that you object to. This lets us all understand the exact nature of your objections.