Guest Post by Willis Eschenbach
Over at the Hockey Schtick, I saw a post discussing a new study (paywalled here) of the solar wind as a possible amplifying mechanism for the sun’s effect on climate. It’s called “Effects on winter circulation of short and long term solar wind changes”, by Zhou, Tinsley, and Huang (hereinafter ZTH2014). To support their hypothesis that solar wind affects the North Atlantic Oscillation (NAO) and Arctic Oscillation (AO), they’ve stacked the records of times of low solar wind, aligned at the minimum in the wind speed. Well, not exactly the minimums of the wind speed, come to find out it’s the minimums of a most strange triangular filter of the wind speed. But only in the winter, not the summer. Well, not exactly the winter, but the five month period November-March. Then they sub-divided the stacks into times of “low volcanic activity” and “high volcanic activity”. Then they further subdivided them into times when the interplanetary magnetic field (IMF) points up, and times when the IMF points down … seems like waterboarding the data to me, but I was born yesterday, what do I know? Here’s their money graph:
Figure 1. Fig. 2 from ZTH2014. “Stacked” analysis aligned on the minima of the solar wind speed in the winter months. “SWS_MIN” means minimum solar wind speed.
Figure 1 shows the claim they are making, that on a daily level the North Atlantic Oscillation (NAO) and the Arctic Oscillation (AO) are affected by minima in the solar wind. Looking at the Hockey Schtick article, I realized I didn’t know much at all about the solar wind. I mean, I knew what most folks know, that the solar wind is the result of constantly varying high-speed ejection of a variety of charged particles from the sun. But I didn’t know how it changed over time, how fast it blew, what a solar wind gust or a gale looked like, nothing. So here’s what I found out.
As usual, I started by getting all the data. It took some digging, but I finally found the hourly data here. Of course, it’s in the form of a whole stack of individual files, one per year from 1963 to 2014 … so I had to write the code to download them all, and then extract the information I wanted. I ended up with 324,204 hourly observations of solar wind. I averaged them out day by day, to match the time intervals of the ZTH2014 study, and I ended up with the data shown in Figure 2.
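For anyone who wants to replicate that step, here’s a minimal Python sketch of the daily averaging. The yearly-file URL pattern and the fill value are my assumptions about the OMNI-2 layout, not something from the paper, so check them against the actual files:

```python
from collections import defaultdict
from datetime import date, timedelta
from statistics import mean

# Hypothetical layout: one hourly file per year, 1963-2014. The URL pattern
# below is an assumption about where the OMNI-2 files live.
# BASE = "https://spdf.gsfc.nasa.gov/pub/data/omni/low_res_omni/omni2_{year}.dat"
FILL = 9999.0  # assumed sentinel value for missing solar wind speed

def daily_average(hourly):
    """Collapse (date, speed_km_s) hourly records into daily means,
    skipping fill values, to match the daily resolution of ZTH2014."""
    by_day = defaultdict(list)
    for day, speed in hourly:
        if speed != FILL:
            by_day[day].append(speed)
    return {day: mean(speeds) for day, speeds in sorted(by_day.items())}

# Synthetic demonstration: two days of hourly data, half of day one missing.
d0 = date(1998, 11, 1)
hourly = ([(d0, 400.0)] * 12 + [(d0, FILL)] * 12
          + [(d0 + timedelta(days=1), 500.0)] * 24)
daily = daily_average(hourly)  # {1998-11-01: 400.0, 1998-11-02: 500.0}
```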
Figure 2. All daily average solar wind observations in the OMNI-2 dataset. The “winter” data, shown in blue, uses the same definition of “winter” as is used in ZTH2014, viz November through March (5 months).
This is a most interesting graph, 51 years of data from more than a dozen satellites. First off, we can see the usual ~11-year sunspot cycle in the data, with a swing of about 50-100 km/sec.
Next, the solar wind has a clear minimum speed of around 300 kilometres per second. For those interested, this is about a million kilometres per hour … and it rarely blows much slower than that, summer or winter.
Next, they say that they have no less than 887 days that were identified as SWS_MIN, or solar wind speed minimums. As there are 51 years in the dataset, this means that over a typical winter there are 17 identified solar wind speed minima … and they are stacking them up 800 deep or so.
What I think I’ll do next is to see how the solar wind speed varies over the course of the year. Hang on … OK, I just created that graph, and it turns out I didn’t learn a whole lot in the process …
Figure 3. Solar wind speed by day of year. Portion of the year shown in blue is the “winter” (NDJFM) as defined in the study.
There is very little sign of any kind of annual cycle, which makes perfect sense because the sun doesn’t run by earthly clocks … the sun doesn’t know much about “one year for earthlings”.
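For the curious, the day-of-year averaging behind Figure 3 is nothing fancy. Here’s a sketch, with synthetic data standing in for the real observations:

```python
import random
from collections import defaultdict
from statistics import mean

def doy_climatology(records):
    """Average speed by day of year (1..365), the calculation behind
    Figure 3. `records` is an iterable of (day_of_year, speed) pairs."""
    bins = defaultdict(list)
    for doy, speed in records:
        bins[doy].append(speed)
    return {doy: mean(v) for doy, v in sorted(bins.items())}

# Synthetic stand-in for the observations: noise around 430 km/s with no
# seasonal term at all. "No annual cycle" shows up as a flat climatology,
# with every day of the year averaging out to about the same speed.
random.seed(1)
records = [(doy, 430 + random.gauss(0, 80))
           for _ in range(200) for doy in range(1, 366)]
clim = doy_climatology(records)
```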
So … we’ve seen the 51-year record of the solar wind, and the lack of an annual cycle. Let me move on to their physical explanation for the purported solar wind/atmospheric pressure connection. From the paper:
The responses on the day to day time scale have been shown to involve both the relativistic electron flux (REF), precipitating from the radiation belts at subauroral latitudes, and stratospheric volcanic aerosols. A strong correlation between the REF and the SWS has been examined by Li et al (2001a,b). Tinsley et al. (1994, 2012) described a link between space weather and lower atmospheric dynamics through the global electric circuit. Minima in the SWS and deep minima in the REF are associated with the HCS crossings, as shown by Tinsley et al. (1994, Fig. 5).
The REF can penetrate down to upper stratospheric levels, and the Bremsstrahlung radiation that they produce can impact the electric conductivity down to lower stratospheric levels and change the stratospheric electrical column resistance. The consequent changes in the ionosphere-earth current density (Jz) that flows as the downward return current in the global electric circuit was considered to be the physical link to the tropospheric cloud and dynamical changes, especially when there is high stratosphere aerosol loading due to volcanic eruptions, which will increase the proportion of the stratospheric column resistance to that of the whole atmosphere column. Observations of minima in tropospheric potential gradient and Jz at HCS crossings have been reported by Reiter (1977), and Fischer and Muhleisen (1980) also observed such potential gradient minima.
Now, is their explanation possible?
Sure. Lots of things are possible. I have long held that the electromagnetic aspects of weather and climate were the unknown unknowns in the climate game. For example, to many scientists’ surprise, it was discovered within the last decade or so that the cloud nuclei, the seeds that cloud droplets form around, are not mainly dust or sea salt crystals as was once thought … much of the cloud nuclei are microbes of various kinds. Which led to a new question … how did they get up so high in the sky? Turns out that the electrical forces are what make thunderstorms able to loft these tiny creatures that high up into the atmosphere …
So I don’t have a problem with the idea that electromagnetic forces are way understudied in the climate system, and thus may play a larger role in climate than is immediately apparent.
The part I’m missing in their explanation, however, is the connection of the solar wind to the variations in pressure that make up the NAO.
Finally, the most important question … does their study hold water? In regards to this question, let me list my objections to their study. Note that these are not objections to the results, these are objections before we’ve gotten to their results. Here, in no particular order, are the problems that I have with the study.
• No Archiving Of Data As Used: First and foremost, they have not archived the 887 magical dates on which they claim that there are “solar wind minima”. Without that, there’s no way to determine if they’ve made any errors. As a result, to date it’s just advertising, not science.
• High number of “minima”: Next, the “winter” portion of the dataset contains 6,684 days with data. There are 887 days they call “solar wind speed minima” during the winter. That means that a “minimum” occurs every 6,684 days / 887 minima equals 7.5 days per minimum, about once a fricken’ week …
Once a week? Give me a break. I realized this was a problem as soon as I looked at Figure 1 at the head of this post. I thought, 887 wintertime “minima” in half a century? That’s about eighteen minima every winter …
• Divide and Conquer: This study employs a much-abused technique I call “Divide and Conquer”. It works like this: you look for a theorized effect. But you can’t find it in the data. So then you divide the data into two piles, say into winter and summer data. Then you look for the effect again.
But you still can’t find the theorized effect in the data. So then you sub-divide the data again, say into “volcanic” and “non-volcanic” data. Now you have four piles of data. But you still can’t find the effect. So then you divide the data into maxima and minima, that gives you eight piles of data. You ignore the maxima, presumably because they didn’t show the effect.
So then you divide the minimum data into 887 overlapping two-month-long chunks centered on some subset of all of the minima, and you average the chunks together … at which point you find something and declare that your study is a resounding success …
I’m sure you can see the problem with this kind of analysis. If you keep dividing, eventually you will find something. No surprise.
• Bad Statistics, No Cookies: However, if you insist on using the “divide and conquer” plan as they have done, each time you subdivide the data you need to adjust the threshold for statistical significance. In climate science, the usual level for assigning statistical significance is a “p-value” of 0.05. That’s one in twenty. But if you look in more and more places, to be significant, the p-value needs to be lower. How much lower? Glad you asked. Here’s a handy chart … for mathematicians, the p-value is calculated as
p-value = 1 – 10^( log(1-p) / n )
where “n” is the number of trials, “p” is the single-trial p-value (0.05 in this case) and log is logarithm base 10.
Figure 4. Change in the required p-value to be significant at the single-trial 0.05 level with a given number of trials.
In this case, they have the original data, and then the two halves split summer/winter. Then they have four quarters after they’ve sub-split by volcanic/non-volcanic. Then they’ve divided off the minimum values from the maximum values … already they’ve looked in fifteen different places for the claimed effect. So if they find something, in order for it to be significant it has to have a p-value of less than 0.004 … four in a thousand …
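For the mathematically inclined, the formula above (a form of the Šidák correction) and the threshold for fifteen looks can be computed directly. A quick sketch, with the simpler Bonferroni approximation alongside for comparison:

```python
def sidak(p_single, n):
    """Per-test threshold that keeps the overall false-alarm rate at
    p_single over n independent looks: 1 - (1 - p)^(1/n). This is the
    same quantity as the 1 - 10^(log10(1 - p)/n) form given above."""
    return 1 - (1 - p_single) ** (1 / n)

def bonferroni(p_single, n):
    """The simpler, slightly more conservative approximation p/n."""
    return p_single / n

# Fifteen places looked in (1 + 2 + 4 + 8), single-trial level 0.05:
threshold = sidak(0.05, 15)  # about 0.0034 ... "four in a thousand"
```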
• Length of “Winter”: Picking five months for “winter” gives every appearance of special pleading. I mean, the first thing you’d try for “winter” is December-January-February. Then maybe the six months between the equinoxes, October to March. Either of those might be OK, although you’d need to adjust the significance threshold as required … but using a five-month winter is just fine-tuning the results.
• Proportionality of Effect: One of the ways we determine if there is causal relationship between A and B is that there is some kind of proportionality, whether linear or non-linear, between the cause and the effect. In their case, they are claiming that variations in maximum solar wind speeds don’t affect the North Atlantic Oscillation, but variations in minimum solar wind speeds do affect the NAO … how is that supposed to work? It seems odd, particularly when the solar wind minima vary so much less than the solar wind maxima.
• Width of Stacked Data: To dig out the signal, they’ve “stacked” the data. This means that they have aligned a number of years of results based on the minima in the wind speed. This is shown in Figure 1. It’s a legitimate technique, but note that their stacks are 2 months in width. This means that each individual layer in the stack contains on average eight different minima (60 days divided by 7.5 days per minimum).
• Uneven Duplication of Stacked Data: Like the minima themselves, on average every day in the dataset appears in the full stacked data about eight times. However, the number of times that a given day appears in the total 887-layer stack is quite variable. Days during periods where a number of minima come in close succession will be over-represented in the stack, and vice versa.
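The unevenness is easy to demonstrate with a toy tally of how many stack windows each calendar day falls inside (a sketch, not their code):

```python
from collections import Counter

def stack_coverage(minima_days, half_width=30):
    """Tally how many stack windows (minimum day +/- half_width days)
    each calendar day falls inside. Clustered minima over-represent
    their neighbourhoods in the stacked average."""
    cover = Counter()
    for m in minima_days:
        for day in range(m - half_width, m + half_width + 1):
            cover[day] += 1
    return cover

# Three minima in close succession plus one isolated minimum (the day
# numbers are just indices into a hypothetical winter record):
cover = stack_coverage([100, 105, 110, 300])
# cover[105] == 3 (inside all three clustered windows), cover[300] == 1
```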
• Missing Data: The early years have a lot of missing data. Coverage of 365 days/year is not achieved until 1995. It’s not clear what effect this has on their calculation of “minima”.
• Calculation of Minima: The previous problems are bad enough. But here’s where they go totally off the rails. In the ZTH2014 paywalled study they say they use the method for calculating minima from a previous paywalled study. The method of determining the location and depth of the minima in that study turns out to be quite baroque, viz:
The minima and their relative depth are selected with reference to a sliding window of 13 days, with the preceding and following ‘shoulder’ values (ps) being the mean for days 1 through 3 and 11 through 13, and the deviation from the shoulders (pm) being the mean for days 6 through 8, so that the percentage deviation is: y = ((pm − ps)/ps) × 100%.
Dear heavens … do you see what they are doing? I get a thrill when I see a bizarre mathematical transformation like that, it’s like coming across some strange new primitive life form. That is an extremely crude method of fitting a sawtooth wave to the data, one that will function as a strange form of triangular filter. Since the centers of the two ends of the filter (points 1-3 and 11-13) fall on days 2 and 12, ten days apart, I thought that it would emphasize any cycles in the data with a period of around ten or eleven days, although from the description the frequency response of such an odd creature could only be guessed at. When I read that, I could hardly wait to see how their “minima finder” algorithm munges some real data. Before we get to the real data, however, here’s the bandpass of their triangular “minima” filter. It shows the amplitude of sinusoidal signals after they have been savaged by their method.
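Here’s the quoted procedure as code, per my reading of their description (day 1 of each window maps to index 0; any misreading of their text is mine):

```python
import math
from statistics import mean

def depth_percent(x):
    """Slide a 13-day window along the daily series x; for each window
    take the shoulders ps (mean of days 1-3 and 11-13) and the centre
    pm (mean of days 6-8), and return y = 100 * (pm - ps) / ps."""
    out = []
    for i in range(len(x) - 12):
        w = x[i:i + 13]
        ps = mean(w[0:3] + w[10:13])  # shoulders: days 1-3 and 11-13
        pm = mean(w[5:8])             # centre: days 6-8
        out.append(100.0 * (pm - ps) / ps)
    return out

# Sanity check: a pure 3-day cycle riding on a 400 km/s baseline is
# invisible to this filter, because every 3-day mean of the sinusoid is
# just the baseline, so pm and ps are equal and y is zero throughout.
x = [400 + 50 * math.sin(2 * math.pi * k / 3) for k in range(60)]
y = depth_percent(x)  # all values essentially zero
```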
Figure 5. Bandpass of their 13-point-wide filter. Values are given as a percentage of the maximum.
That is one strange bandpass filter. As I suspected, the maximum bandpass is at a period of 11 days … and it has a curious problem in the present application. Solar wind data has a clear ~27-28 day cycle, the result of the ~27-28 day rotation period of the sun. Unfortunately, the amplitude of this 28-day cycle is severely attenuated by their procedure, down to about a third of its original value. So their filter is minimizing a real cycle in the data, while artificially enlarging any random 11-day cycles. Most curiously, it totally wipes out all 3-day and 5-day cycles. On reflection, that falls straight out of the arithmetic: any three consecutive days of a 3-day cycle average to the same value, so the shoulder and center means are always identical; and each shoulder window sits exactly five days, one whole period, from the center window, so a 5-day cycle likewise leaves the two means equal.
Having seen all of that, without further ado, it’s time to look at real data. Here’s a typical winter from the OMNI-2 solar wind dataset. The time of the winter is 1998-1999, merely because it happens to be the first one I picked. This one has the full complement of 154 days of data, so it should have 154 days divided by 7.5 days per minimum equals about twenty minima. So I’ve included their bizarre “minima” calculations as well, scaled to the same mean and standard deviation as the solar wind data for easy comparison … hold your nose, here we go …
Figure 6. Daily average solar wind data (blue) and calculated “minima” (red). Minima calculated using the procedure quoted just above. The one gold, two tan, one blue, and two violet shaded areas show sections of particular interest. “Minima” are scaled to the mean and standard deviation of the solar wind speed data.
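(For reference, the scaling used for the red curve is just the usual linear mean-and-standard-deviation match; a sketch, not their code:)

```python
from statistics import mean, stdev

def rescale_like(src, ref):
    """Shift and stretch src so it has the same mean and standard
    deviation as ref, for plotting the two on a common axis."""
    m_s, s_s = mean(src), stdev(src)
    m_r, s_r = mean(ref), stdev(ref)
    return [(v - m_s) / s_s * s_r + m_r for v in src]

# Toy numbers, purely illustrative:
wind = [380.0, 420.0, 500.0, 310.0, 450.0]  # km/s
minima = [-5.0, 2.0, 7.0, -12.0, 3.0]       # percent depth
scaled = rescale_like(minima, wind)         # same mean and sd as `wind`
```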
Like I feared when I first read their description, this procedure does indeed do weird things to data … to start with, in the right hand tan shaded area is what was the lowest minimum speed of the entire winter (blue line), a day when the solar wind dropped below 300 kilometres per second, a relative calm spell … with the wind only blowing a mere million kilometres per hour or so …
But once their procedure gets through with it, the red line shows that what was the record minimum of the period is now about the tenth lowest minimum. And the two minima in the other tan area have suffered the same fate, reduced to insignificance.
The violet shaded areas show the opposite effect. In the right hand violet shaded area, there was no real minimum of any kind (blue line). But under the new regime (red line), it’s the fourth deepest minimum in the record. And it’s the same in the left hand violet shaded area. A significant false minimum has been created out of nothingness.
Next, the gold shaded area (center) shows what was a fairly minor player in the original data (blue line). But after their triangular filter works it over (red line), it becomes the deepest, most evident minimum of the winter.
Finally, the most outré part. See the blue shaded area? Their whizbang method actually shifts the dates of the two actual minima during that time (blue line) by two days each …
Like I said, it’s a most baroque method for picking “minima”, one that manufactures minima where none exist in the data, distorts the actual sizes of the minima, turns the deepest minimum into a minor player, and shifts the dates of some of the minima but not others, all the while minimizing the natural ~27-28 day cycle that actually exists in the data.
Conclusions? Other than the fact that looking at this study for four days now makes my head ache? Well, between all of the problems, that’s enough for me to say that the study is definitely not ready for publication. To recap, those problems were:
• No Archiving Of Data As Used
• High Number of “Minima”
• Uses a “Divide and Conquer” Method
• Bad Statistics
• Strange 5-month Length of “Winter”
• No Proportionality of Effect
• About Eight Minima in Each Layer of the Stacked Sections of Data
• Uneven Duplication of Stacked Data
• Missing Data
• Mondo Bizarro Method of Minima Calculations
Best thing about their study? I understand much, much more about the solar wind than I did a week ago.
Finally, people often read more into my statements than I intend. For example, when I say I can’t find an 11-year cycle in the 10Be data used as a proxy for cosmic rays, people interpret it as if I had said there are no cycles at all in climate data. Generally, I mean what I say, and not some generalization of my statement.
So please note that in this case, I am NOT saying that solar wind has no effect on the climate. It may well have such an effect, although the lack of any 11-year signal in temperature datasets argues strongly against it.
What I am saying is that the manifold flaws and problems in the study of Zhou, Tinsley, and Huang 2014 mean that they are a long ways from demonstrating that such a purported effect actually exists.
My best wishes to each of you,
w.
The Usual Request: If you disagree with something I’ve said, please quote the exact words you disagree with. That avoids lots of misunderstanding.
Data: The press release at the Hockey Schtick post is here, my thanks for the discussion of the solar wind study. I have collated the OMNI-2 data into a single CSV file containing the daily average solar wind speed data called OMNI-2 Solar Wind.csv
Willis, would it be fair to say some of these people are out of their league! Or meddling where they should not. Or looking for the accolades.
Glad to see that once again the science is “settled”, like the weather. Thanks for the read.
I remember a wonderful statement by an older geology professor to his younger colleague: “Most studies are pure crapola!”
His younger colleague was incredulous, saying something like: “You mean more than half?”
To which the older professor replied: “Oooooh, no, more like 80-90%. Remember, a “C” grade is average, and a “C” grade student becomes a “C” grade scientist who will publish a “C” grade study which will be CRAP. Science is hard. Even an “A” student will still produce a crap study if he or she isn’t careful.”
The young professor said: “You sound really cynical.”
The old professor laughed, saying: “It’s called experience. Talk to me in a couple decades and see how cynical you’ve become!”
Comparison of solar wind ‘changes’ effect on the geomagnetic field. (Look at the magnitude of the changes of Ap, which is how the solar wind speed change affects the geomagnetic field.) Ap events and the time between Ap events is the variable to plot, not solar wind speed (blue graph). I will provide an overview of the mechanism in the next couple of comments.
On May 16, 2005 the average of the three hour ap indices was 105.4.
On May 17, 2014 the average of the three hour ap indices was 5.4.
2005 May 15 Vs 2014 May 16
http://www.solen.info/solar/old_reports/2005/may/20050516.html
Solar flux measured at 20h UTC on 2.8 GHz was 103.0. The planetary A index was 105 (STAR Ap – based on the mean of three hour interval ap indices: 105.4).
Three hour interval K indices: 55984445 (planetary), 56973345 (Boulder).
http://www.solen.info/solar/
Last major update issued on May 17, 2014 at 06:20 UTC.
Recent activity
The geomagnetic field was quiet on May 16. Solar wind speed at SOHO ranged between 336 and 429 km/s.
Solar flux at 20h UTC on 2.8 GHz was 138.7 (decreasing 30.6 over the last solar rotation). The 90 day 10.7 flux at 1 AU was 149.3. The Potsdam WDC planetary A index was 5 (STAR Ap – based on the mean of three hour interval ap indices: 5.4). Three hour interval K indices: 22112221 (planetary), 12112311 (Boulder).
http://www.google.ca/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCgQFjAA&url=http%3A%2F%2Fsait.oat.ts.astro.it%2FMSAIt760405%2FPDF%2F2005MmSAI..76..969G.pdf&ei=vyJ3U4maH8_woATknYCwBg&usg=AFQjCNE1HIIaQdO213fgDBS9nT2fvY3-Rg&bvm=bv.66917471,d.cGU&cad=rja
Once again about global warming and solar activity
We show that the index commonly used for quantifying long-term changes in solar activity, the sunspot number, accounts for only one part of solar activity and using this index leads to the underestimation of the role of solar activity in the global warming in the recent decades. A more suitable index is the geomagnetic activity which reflects all solar activity, and it is highly correlated to global temperature variations in the whole period for which we have data.
The real terrestrial impact of the different solar drivers depends not only on the average geoeffectiveness of a single event but also on the number of events. Figure 5 presents the yearly number of CHs, CMEs and MCs in the period 1992-2002. On the descending phase of the sunspot cycle, the greatest part of high speed solar wind streams affecting the Earth comes from coronal holes (Figure 5), in this period their speed is higher than the speed of the solar wind originating from other regions, and their geoeffectiveness is the highest. Therefore, when speaking about the influence of solar activity on the Earth, we cannot neglect the contribution of the solar wind originating from coronal holes. However, these open magnetic field regions are not connected in any way to sunspots, so their contribution is totally neglected when we use the sunspot number as a measure of solar activity.
Plain ‘common garden’ solar wind calculations are most likely a waste of time.
If there is an effect it is via the CME and solar flare records. These tend to coincide with sunspot cycles but are shifted towards the falling side.
The effect can be observed via the Ap index, but again that is an even more controversial matter.
If we take the so-called Gleissberg cycle of about ~78 years, and ‘cherry pick’ one from the early 1880s to 1960, then the available Ap monthly data can be broken into three long sections. Using a 15-year moving average, then by a simple ‘fiddle’ with the data, as is clearly shown in the graph, a reasonable (if one is not too allergic to the ‘fiddle’) agreement with land temperature data can be obtained.
It would require a lot of explanation, sadly not currently available, but at least a possibility of a link exists.
Willis
It seems to me that there are quite a few scientists who are trying to enhance their own innate pattern recognition skills with partially digested signal analysis to find what they want to find. It’s a confirmation bias positive feedback.
We all have a tendency to find confirmation of that which we already believe, which is why there are studies like this all of the time.
There are people who blame peer review, but my view is that if scientists were paid as much to check other people’s work and publish the results, yes there would be more bun-fights but there would also be fewer, better scientific papers.
That said, using a guillotine on the data just focussing on 5 months of the year and then a bizarre filter which does not maximize the signal being sought means that this paper should have been sent back to the authors.
Experimental data is a precious limited resource, and a competent scientist doesn’t throw away any data without good cause.
Dear Willis, an excellent peer review of the solar wind article. There seems to be a lot of “wind” in their methodology. thanks.
This paper is a significant advancement in helping us determine the things that don’t make a difference.
Hey Willis, you have my grateful and welcome thanks for creating that OMNI daily solar wind speed file. I have been looking at different daily files of solar outputs and could only find the hourly ones of the solar wind, and I have been tediously going through them working out daily averages…now you have saved me tons of work. Great! Thanks again!
Divide and conquer was used rather aggressively to produce one of the “97%” studies. Nuff said 🙂
http://wattsupwiththat.com/2012/07/18/about-that-overwhelming-98-number-of-scientists-consensus/
At one time early in the essay a five-month period of November through May is referred to, later corrected to November through March; just a typo to fix. Aside from that, interesting and well-written. Thanks for the article!
““Effects on *winter* circulation of short and long term solar wind changes”, by Zhou, Tinsley, and Huang.” ….. “Well, not exactly the winter, but the five month period November-May.”
Since they specified that their months of *winter* are: Nov-May, I guess they were taking measurements from somewhere in the Northern Hemisphere. .. Anyway, just like there is no *annual signature in the data, there should not be any difference (from the Sun’s point of view) between the winter and summer of the Earth’s Northern Hemisphere. … Makes me wonder if they would get similar results from somewhere in the *Southern Hemisphere !!
(maybe I am being a little sarcastic!)
holts7 says:
May 17, 2014 at 2:33 am
You’re welcome, holts, my pleasure.
w.
Peter Yates says:
May 17, 2014 at 3:19 am
Thanks, Peter. I thought that as well, then I realized that they are looking at the effect of solar wind on the NAO, which if it existed presumably might be different winter/summer …
w.
Peter Yates says:
May 17, 2014 at 3:19 am
Anyway, just like there is no *annual signature in the data, there should not be any difference (from the Sun’s point of view) between the winter and summer of the Earth’s Northern Hemisphere. …
Yes, there is, see Svalgaard’s papers on the subject, in mean time here is what the Ap index show:
http://www.vukcevic.talktalk.net/Ap-Bz.htm
When I first saw the title, I thought you were going to comment on this paper (which thankfully is not paywalled). Scott et al. claim to have found a 120-day solar wind cycle centered around a very sharp fluctuation (down then up) in the solar wind speed, and linked that to the frequency of lightning strikes in the 40 days following the fluctuation. I know zippo about solar physics so I have no idea whether there is anything known which exhibits a 120-day cycle, but their “money graph” sure looks convincing. Can you say after your look at solar wind data whether the 120-day cycle is real?
One thing I’ve learned reading WUWT and Climate Audit discussing the ways raw data is manipulated is don’t trust anyone’s money graph until you’ve read all the fine print. In studies like this one, researchers’ “raw data” resembles real data the way Cheese Whiz resembles cheese.
Thanks for another fine read.
” Turns out that the electrical forces are what make thunderstorms able to loft these tiny creatures that high up into the atmosphere …”
Just one more reason why I find it easy to believe that it does occasionally rain fish or frogs. 😉 Even so, those claims still sound so apocryphal!
“p-value = 1 – 10^( log(1-p) / n )”
I have but a layman’s understanding of high level statistics. But can I ask a simple question to see if I understand the substance of that equation?
Example: trial A gives you p=0.05; trial B gives you p=0.05; a meta analysis will necessarily give you p<0.05. Is that – more or less – what you're saying? If so, then I understand the concept.
BTW I see an obvious correlation in the right half of figure 1. If sub-dividing data can make the data fit an arbitrary hypothesis, then why isn't the left half equally correlated? I do follow most of what you're saying, and you aren't making any definite claims, but to my untrained eyes, it seems that there is a case to be made.
Thanks Willis, this was really interesting. I feel I learned a lot about statistics and what NOT to do. Sometimes the best learning comes from looking at bad examples. Hopefully the authors see this analysis and get back to you?
Gee if I wanted to publish something of real discovery, I would get it reviewed here first. Well done Willis I hope leif aint grinding his teeth in [jealousy] (joke). LOL
Willis,
In your figure 3, the average solar wind speed seems to take a jump up from about 415 km/s to about 425 km/s (values are hard to pick off your graph) on the first of the year. Is the Sun making whoopee on New Years eve, or is that an artifact of your filter?
Willis says:
“This is a most interesting graph, 51 years of data from more than a dozen satellites. First off, we can see the usual ~11-year sunspot cycle in the data, with a swing of about 50-100 km/sec.”
To be honest, if the dates were not there, I’m not sure from the velocity data alone that you would be able to say where the sunspot cycles are:
http://snag.gy/uIZON.jpg
Daily velocity has a swing of ~ +/- 250km/s.
“for mathematicians, the p-value is calculated as
p-value = 1 – 10^( log(1-p) / n )”
For non-mathematicians, you can match the results in Willis’ Figure 4 to within 2% with the somewhat easier formula p = 0.05/n (the Bonferroni correction).
n Bonferroni Willis error
1 0.0500 0.0500 0.000
2 0.0250 0.0253 0.013
3 0.0167 0.0170 0.017
4 0.0125 0.0127 0.019
5 0.0100 0.0102 0.021
6 0.0083 0.0085 0.021
7 0.0071 0.0073 0.022
8 0.0063 0.0064 0.023
9 0.0056 0.0057 0.023
10 0.0050 0.0051 0.023
11 0.0045 0.0047 0.023
12 0.0042 0.0043 0.024
13 0.0038 0.0039 0.024
14 0.0036 0.0037 0.024
15 0.0033 0.0034 0.024
The increase in pressure over the South Pole, as a result of blocking of the polar vortex in the stratosphere, caused an increase in temperature and a decrease in ice growth.
The magnetic field of the solar wind reduces the cosmic ionizing radiation in the ozone layer.
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat_a_f/gif_files/gfs_t70_sh_f00.gif
http://www.cpc.ncep.noaa.gov/products/intraseasonal/temp50anim.gif
http://www.cpc.ncep.noaa.gov/products/intraseasonal/z200anim.gif
http://arctic.atmos.uiuc.edu/cryosphere/antarctic.sea.ice.interactive.html
I couldn’t finish reading your post, Willis, after reading this.
“”””Next, the solar wind has a clear minimum speed of around 300 kilometres per second. For those interested, this is about a million kilometres per hour … and it rarely blows much slower than that, summer or winter.””””
Try this.
Use the solar wind speed of solar cycle 24 and you will find that it REGULARLY dropped below 300 km/s during this cycle’s minimum and beyond.
See Ulrich’s post at 6:03 am.
The Earth’s magnetosphere undergoes physical changes between fast and slow solar wind. The configuration is spending more time in its slower speed state during cycle 24. Less blow back, less time spent recovering, less rigid? (Dr. S.). The angle at which solar wind and plasma interact, from the magnetosphere on down, changes … being somewhat more collapsed and relaxed … less flexed …
more fun stuff
Even the north magnetic pole slowed down; that ought to tell us something about the change in rigidity and angle of solar [wind] interactions.