Dr. Leif Svalgaard sent this to me via email saying “Anthony, here is a short note I just submitted to arXiv. You are welcome to make of it what you want, if anything”. I choose to publish it without comment for our readers to consider.
Up to Nine Millennia of Multimessenger Solar Activity
Leif Svalgaard, Stanford University
Abstract
A nine-millennia reconstruction of decadal sunspot numbers derived from 10Be and 14C terrestrial archives for 6755 BC to 1885 AD has been extended to the present using several other messengers (Observed Sunspot Number, Group Number, range of the diurnal variation of the geomagnetic field, and the InterDiurnal Variation of the geomagnetic Ring Current) and scaled to the modern SILSO Version 2 sunspot number. We find that there has been no secular uptick of activity over the last three hundred years and that recent activity has not been out of the ordinary. There is a sharp 87.6-year peak in the power spectrum, but no significant power at the Hallstatt 2300-year period. The reconciliation of the cosmogenic record with the modern sunspot record could be an important step toward providing a vetted solar activity record for use in climate research.
Introduction
Wu et al. (2018) (hereafter WEA) present a multi-proxy reconstruction of solar activity over the last 9000 years, using all available long-span datasets of 10Be and 14C messengers in terrestrial archives. Cosmogenic isotopes are produced by cosmic rays in the Earth’s atmosphere and their measured production/depositional flux reflects changes in the cosmic ray flux in the past. The cosmic ray flux is modulated by solar magnetic activity, which can be quantified in terms of the heliospheric modulation potential characterizing the energy spectrum of Galactic Cosmic Rays reaching the top of the atmosphere at a given time. The WEA reconstruction is given as decadal averages centered on the midpoint of each decade and runs from 6755.5 BC to 1885.5 AD. The reason for stopping in 1885 was that the (Suess 1955) effect of extensive fossil fuel burning makes it problematic to use 14C data after the mid-19th century; in addition, radiocarbon data cannot be used after the 1950s because of nuclear explosions that led to massive production of 14C. The modulation potential series is not a stable proxy for solar activity since the modulation potential is a relative index whose absolute value is model dependent (e.g. Herbst et al. 2017). Therefore, WEA converted the reconstructed modulation potential to a more practical and certainly more widely used index: the sunspot number, its current version designated SN (version 2, Clette et al. 2014). The conversion was done via the open solar magnetic flux following an ‘established’ procedure (e.g. Usoskin et al. 2003, 2007). As the ‘procedure’ was developed for version 1 of the sunspot number, the newer version 2 data were scaled down by a factor of 0.6 for the calibration, in spite of the so-called k-factor (the 0.6) not being constant over time (Clette & Lefèvre 2016). It seems a step backwards to cling to the obsolete version 1 of the sunspot number scale, so we undo the spurious down-scaling of version 2. 
We shall not here quibble about details of the conversion procedure except to note that one would expect (even require) that the SN-reconstruction should match the actual observed SN-series for the time of overlap. WEA suggest that their reconstructed values be multiplied by 1.667 to place them on the SN V2-scale. Figure 1 shows that this is not enough. A factor of 2.0 seems to be necessary to match the two scales, likely meaning that the WEA calibration is too low by about 20%.
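One way to check such a scale factor numerically is a least-squares regression through the origin over the period of overlap. The sketch below uses made-up numbers (not the actual WEA or SILSO values) purely to illustrate the computation.

```python
# Sketch: estimating the factor that best maps one series onto another by a
# regression through the origin. The decadal values here are invented.
import numpy as np

def scale_factor(reconstructed, observed):
    """Least-squares k minimizing ||k*reconstructed - observed||^2."""
    r = np.asarray(reconstructed, dtype=float)
    o = np.asarray(observed, dtype=float)
    return float(np.dot(r, o) / np.dot(r, r))

# Toy overlap where the observed series is exactly twice the reconstruction:
recon = np.array([20.0, 45.0, 60.0, 35.0, 50.0])
obs = 2.0 * recon
print(scale_factor(recon, obs))  # 2.0
```

Fitting through the origin is appropriate here because both series should vanish together at zero activity.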

More Messengers
When comparing two series it can be difficult to decide which one is too low or too high. It could simply be wrong. Luckily, there are several other messengers directly pertaining to solar activity: the independently derived sunspot Group Number, GN (Svalgaard & Schatten 2016) back to 1610, the range of the diurnal variation of the geomagnetic field, rY (Svalgaard 2016; Loomis 1873) good back to the 1810s, and the InterDiurnal Variation of the geomagnetic Ring Current, IDV (Svalgaard & Cliver 2005, 2010; Svalgaard 2014; Cliver & Herbst 2018; Owens et al. 2016; Bartels 1932) back to the 1830s. Decadal means for these are given in Table 1 together with the (linear) regression equations to convert them to the SN V2 scale. Applying the conversions we can now plot the messengers all on the same scale, Figure 2.

In scaling rY and IDV we first constructed a composite of SN V2 and GN* (on the SN V2 scale).

IDV*(V2) = 18.71 × IDV(nT) – 91.27. Column 12 gives the average of columns 7-11, with its standard deviation in column 13, based on the number of values, N, in column 14, going into the average. The table can also be found in the Excel file (see below) associated with this article.
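For readers who want to apply the table's conversions themselves, the IDV regression quoted above works out as follows; the sample input of 10 nT is invented for illustration only.

```python
# Applying the linear regression quoted in the text:
# IDV*(V2) = 18.71 * IDV(nT) - 91.27. The 10 nT input is a made-up example.
def idv_to_sn_v2(idv_nt):
    """Convert an InterDiurnal Variation index value (nT) to the SN V2 scale."""
    return 18.71 * idv_nt - 91.27

print(round(idv_to_sn_v2(10.0), 2))  # 95.83
```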
We can now put our Multimessenger reconstruction in the context of solar activity over the last millennium, Figure 3. It is encouraging that our reconstruction matches the WEA reconstruction very well (R2 = 0.87) for their time of overlap, illustrating the power of the Multimessenger approach in reconciling various time series. We note that a ~100-year quasi-wave is clearly seen by eye over the last three centuries (only) and also that there has not been any significant secular change (e.g. an often claimed increase) over the same time interval, the lack of which had already been established (e.g. Clette et al. 2014).

This convergence of the recent cosmogenic and solar activity records (see also Muscheler et al. 2016) lends credence to the admissibility of making a leap of faith back to the beginning of the WEA reconstruction nine millennia ago, Figure 4, even if we have to admit that it is not clear if the very long-period variations are of solar origin. On the other hand, it seems clear that recent activity has not been extraordinary (Berggren et al. 2009).

The combined time series from 6755 BC to 2015 AD is available as an Excel file at
https://leif.org/research/Nine-Millennia-SN.xls
Periodic Activity?
When you have 8770 years (878 data points) of data, the urge to look for cycles is overwhelming. Figure 5 shows the magnitude of the FFT of the full sunspot number time series (combining the WEA and Multimessenger series). Although there are better and more powerful methods (e.g. wavelets), any real periodic activity would show up in the FFT spectrum. We computed the FFT for the entire series and also for three subsets: the first half, the second half, and the middle half, in order to see if periods (‘cycles’) would be persistent and coincident in all of them. Three long-term cycles are often assumed to exist (e.g. Damon & Sonett, 1991): the ~2300-year Hallstatt (or Bray) Cycle, the 208-year de Vries (or Suess) Cycle, and the 88-year Gleissberg Cycle. Figure 5 shows that the Hallstatt Cycle (found in climate records) is not significant in the solar record. There does seem to be power at periods between 200 and 240 years, but the power is perhaps too broadly distributed to qualify as a strong periodicity, although there is a narrow peak at half the period (104 years), a variation also visible by eye in Figure 3. With lots of peaks between 250 and 1200 years it is no surprise that some of them just coincide around 350 years. On the other hand, the 87.6-year Gleissberg peak is sharp and prevalent in the whole series and in all three sub-intervals.
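The FFT check described here can be sketched in a few lines; a synthetic decadal series with a built-in 88-year cycle stands in for the actual reconstruction (which is available in the Excel file linked above).

```python
# Power spectrum of a decadally sampled series, illustrating how an ~88-year
# cycle shows up in the FFT. The signal is synthetic, not the real data.
import numpy as np

dt, n = 10.0, 878                        # decadal sampling, number of points
t = np.arange(n) * dt
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * t / 88.0) + 0.3 * rng.standard_normal(n)

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(n, d=dt)         # cycles per year

peak = 1.0 / freqs[1:][np.argmax(power[1:])]  # skip the zero-frequency bin
print(round(peak, 1))  # 87.8: the FFT bin nearest the injected 88-year period
```

Note that the recovered period is 87.8 rather than 88.0 years simply because the FFT can only report periods of 8780/k years for integer k; the relevance of this bin spacing to the quoted 87.6-year peak is discussed in the comments below.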

Conclusion
The Wu et al. (2018) reconstruction of the sunspot number since 6755 BC combined with modern Multimessenger proxies covering the 19th century until today goes a long way to reconcile the cosmogenic solar activity record with recent assessments of long-term solar activity.
References
Bartels, J. 1932, Terr. Magn. Atmos. Electr., 37, 1-52.
Berggren, A.-M., Beer, J., Possnert, G., et al. 2009, Geophys. Res. Lett., 36(11), L11801.
Clette, F., Svalgaard, L., Vaquero, J. M., et al. 2014, Space Sci. Rev., 186, 35-103.
Clette, F., Lefèvre, L. 2016, Solar Phys., 291(9-10), 2629-2651.
Cliver, E. W., Herbst, K. 2018, Space Sci. Rev., 214(2), Id 56.
Damon, P. E., Sonett, C. P. 1991, in The sun in time, Univ. Arizona Press, 360-388.
Herbst, K., Muscheler, R., Heber, B. 2017, J. Geophys. Res., 122(1), 23-34.
Loomis, E. 1873, Amer. J. Sci. Ser. 3, 5, 245-260.
Muscheler, R., Adolphi, F., Herbst, K., et al. 2016, Solar Phys., 291(9-10), 3025-3043.
Owens, M. J., Cliver, E. W., McCracken, K. G., et al. 2016, J. Geophys. Res., 121(7), 6048-6063.
Suess, H. E. 1955, Science, 122 (3166), 415-417.
Svalgaard, L. 2014, Ann. Geophysicae, 32(6), 633-641.
Svalgaard, L. 2016, Solar Phys., 291(9-10), 2981-3010.
Svalgaard, L., Cliver, E. W. 2005, J. Geophys. Res., 110(12), A12103.
Svalgaard, L., Cliver, E. W. 2010, J. Geophys. Res., 115(9), A09111.
Svalgaard, L., Schatten, K. H. 2016, Solar Phys., 291(9-10), 2653-2684.
Usoskin, I. G., Solanki, S. K., Kovaltsov, G. A. 2007, A&A, 471, 301.
Usoskin, I. G., Solanki, S. K., Schüssler, M., et al. 2003, Phys. Rev. Lett., 91, 211101.
Wu, C. J., Usoskin, I. G., Krivova, N., et al. 2018, A&A, 615, A93.
You mean, stable main sequence stars are stable?
small typo
https://leif.org/research/Nine-Millennia-SN.xls.
I see 440y of data there, not 9ka. Wrong file?
Interesting stuff, hope to see the whole dataset.
I’m curious why you chose to resample at 10y. Just because it’s a round number?
Averaging is a good way to reduce random noise, but it is not good at removing a systematic signal. This seems an odd choice for resampling data where there is known to be a circa 11y periodicity. I will comment further when I have studied the methodology more closely.
Column A is the decade. It goes from 6755 BC to 2015 AD, for 8770 years and 879 data points.
The sampling was not mine, but was performed by Wu et al.; it was dictated by the resolution of the early cosmic-ray data.
Sorry, not reading clearly, I was confused by the layout. Long data fine.
I wouldn’t know where to start. Looking forward to reading some expert analysis.
HotScot
I can relate. As an instrumentation engineer, data is what I do, but solar science must be the most arcane discipline in existence.
To begin with could someone explain how figure 4 was created?
Solar Science is weird.
The black bit is from Wu et al. (2018), until the red bit, which is Leif’s “multimessenger” reconstruction for AD 1615 to 2015.
Wu, C. J., Usoskin I. G., Krivova, N., et al. 2018, A&A, 615, A93.
http://cc.oulu.fi/~usoskin/personal/aa31892-17.pdf
Abstract
Aims. The solar activity in the past millennia can only be reconstructed from cosmogenic radionuclide proxy records in terrestrial archives. However, because of the diversity of the proxy archives, it is difficult to build a homogeneous reconstruction. All previous studies were based on individual, sometimes statistically averaged, proxy datasets. Here we aim to provide a new consistent multi-proxy reconstruction of the solar activity over the last 9000 yr, using all available long-span datasets of 10Be and 14C in terrestrial archives.
Methods. A new method, based on a Bayesian approach, was applied for the first time to solar activity reconstruction. A Monte Carlo search (using the χ^2 statistic) for the most probable value of the modulation potential was performed to match data from different datasets for a given time. This provides a straightforward estimate of the related uncertainties. We used six 10Be series of different lengths (from 500–10 000 yr) from Greenland and Antarctica, and the global 14C production series. The 10Be series were resampled to match wiggles related to the grand minima in the 14C reference dataset. The stability of the long data series was tested.
Results. The Greenland Ice-core Project (GRIP) and the Antarctic EDML (EPICA Dronning Maud Land) 10Be series diverge from each other during the second half of the Holocene, while the 14C series lies in between them. A likely reason for the discrepancy is the insufficiently precise beryllium transport and deposition model for Greenland, which leads to an undercorrection of the GRIP series for the geomagnetic shielding effect. A slow 6–7 millennia variability with lows at ca. 5500 BC and 1500 AD in the long-term evolution of solar activity is found. Two components of solar activity can be statistically distinguished: the main component, corresponding to the “normal” moderate level, and a component corresponding to grand minima. A possible existence of a component representing grand maxima is indicated, but it cannot be separated from the main component in a statistically significant manner.
Conclusions. A new consistent reconstruction of solar activity over the last nine millennia is presented with the most probable values of decadal sunspot numbers and their realistic uncertainties. Independent components of solar activity corresponding to the main moderate activity and the grand-minimum state are identified; they may be related to different operation modes of the dynamo.
John Tillman, Thanks for the response. The references are useful.
But what I was really looking for was an assessment as to why splicing two completely different proxies together is acceptable and how they made the two join up at the same point.
M,
Beg your pardon.
Best to let Leif explain.
But there is overlap between radionuclide proxies and observations in the 17th, 18th and 19th centuries, so splicing in this case might not be bogus, as with the HS.
It always astounds me that scientists will take a cloud of data points and claim to find correlations and trends and stuff.
With apologies to Mr. Spock:
The correlation shown in the graph is the correlation between the cloud and the straight line, not the cross-correlation of the two data sets.
The fitted line (the scaling factor) will be somewhat under-estimated due to regression dilution, but the data is not too spread out. This could be constrained by flipping the axes and fitting the other way around. How close is the fit to 0.5?
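The regression-dilution point is easy to demonstrate on synthetic data (all values below are made up; the true slope is set to 0.5): noise on the x-variable pulls the ordinary fit below the true slope, and fitting with the axes swapped brackets it from above.

```python
# Regression dilution on synthetic data with a true slope of 0.5: the fit of
# y on noisy x is biased low, the axis-flipped fit is biased high.
import numpy as np

rng = np.random.default_rng(42)
x_true = np.linspace(0, 100, 1000)
x_noisy = x_true + rng.normal(0, 10, x_true.size)   # noise on the regressor
y = 0.5 * x_true + rng.normal(0, 10, x_true.size)   # noise on the response

slope_xy = np.polyfit(x_noisy, y, 1)[0]        # biased below 0.5
slope_yx = 1.0 / np.polyfit(y, x_noisy, 1)[0]  # biased above 0.5
print(slope_xy < 0.5 < slope_yx)  # True: the two fits bracket the true slope
```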
Javier and other cycle advocates will have an interesting time with this study. I can eyeball sort of a connection with sunspot numbers and temperature for the period around 1 to 1800, but nowhere else.
Does seem to show MWP, LIA and recent optimum could be solar.
However, that 300y period was consistently higher than the previous 300y period, yet similar to the 300y period before that.
OHC could easily be expected to buffer global changes and respond on a multi-centennial scale. Activity ramped up in the mid-1700s and remained high. This would be consistent with a multi-century warming as a result of increased solar activity.
The lower activity between 1250 and 1700 would seem consistent with the hypothesis that LIA was of solar origin.
Here is the SN Multimessenger with a 100-year triple running mean filter. The progressive increase of the last 3 centuries and the LIA timing seem to match the proxy quite closely.
[MODS] Perhaps you could promote this to a visible image.
> The reason for stopping in 1885 was that the (Suess 1955) effect of extensive fossil fuel burning makes it problematic to use 14C data after the mid-19th century; in addition, radiocarbon data cannot be used after the 1950s because of nuclear explosions that led to massive production of 14C.
It is always a pleasure when real scientists do the right thing. No attempts to “adjust” for the known disruption like some do with Urban Heat island effects. No attempts to graft two data sets like the “hockey stick.” Open acknowledgement of the data matrix being sparse in places.
If the rest of the field were as professional as this paper the understanding of climate would be decades ahead.
Why not just tack on real life data to the end of the proxy chart for the remaining years.
LOL
Ja. Ja. I seem to remember that Leif was sceptical about Gleissberg 87 years.
I figured it out myself.
Click on my name to figure it out for yourself.
Except that the 88-year cycle has not been visible the last 400 years.
And as with Tom Halla’s “but nowhere else” (above), that’s inconvenient.
Last 400 years? Or roughly, not seen since the Maunder Minimum?
What do you think changed?
I don’t believe that the 88-year cycle appearing in most of 9000 years of data is a statistical fluke. What changed?
++PLS
Leif
Exactly when is the 15th century minimum?
The 1690’s were of course a famously cold interval, but the 1430’s may have been just as bad, though there are far fewer sources. I remember Lamb telling how he tried to figure out just which winter in the 1430’s it was that was so extremely cold, since the records seemed to be conflicting, until he realized they all were!
In Leif’s Fig 3, the Spörer Minimum starts around AD 1430 rather than the 1460 previously derived from 14C.
So the new reconstruction passes that test.
Although I’ve also seen AD 1420 for its onset.
Leif’s latest shows the Oort and Wolf Minima about where previously recovered.
My latest figures for the past GB cycle put it at 86.5 yrs.
Admittedly it is not 100% constant, as it depends a bit on a number of factors.
It would be helpful to have a really dumbed-down summary of the key take-aways for the nonscientific community.
I’m not sure it could be dumbed down enough for me …..
Leif, many thanks for sharing this here. I greatly admire your accomplishments and have learned much from your website.
Ditto, in spades. This is really good stuff. We are witnessing real progress being made.
This is excellent. Thanks very much for the article and for posting it here.
Probably a silly question from a mining engineer/geologist: looking at the Fourier analysis, are the peaks at multiples of a single Gleissberg cycle supportive of the peaks near 200 y, etc.?
Thank you Dr.
Figure 2 shows a mid-century Modern Optimum around 1950, after which solar activity, despite a bump in the 1990s, is on the decline, corresponding nicely to the rapid mode of atmospheric circulation since the well-known 1970s climatic shift (see https://hacenearezkifr.files.wordpress.com/2018/05/leroux-1993c.pdf). Figure 3 shows the solar activity coming out of the LIA on an uptick, despite the absence of the large, unprecedented modern grand maximum claimed by some and abundantly debunked by Leif, but still a clear, undeniable uptick, compatible with the known 0.8 C warming. The minimum is centered around 1500 AD. The Medieval Warm Period shows up nicely before that.
In Figure 4, the Roman Warm Period is clearly visible as is the HCO corresponding to favorable conditions leading to the emergence of civilization in Mesopotamia and Egypt.
Only the man-made global temperature algorithms that popped up during the post-2000 Adjustocene do not fit this work. It could be the deleterious effects of the evil CO2 gas, because doubting the Global Temperature output would be heresy… or simply that the observed climatic changes that are compatible with the solar data invalidated the claim that CO2 rules the world.
So thank you Dr. for this great reconstruction.
… is a strawman argument that isn’t at all true, because of the most recent activity:
The most recent 9 cycles had the highest v2 sunspot number average:
1712 - SC 5: 78.7, 18.4% less than SC 15-23
SC 6-14: 71.7, 22.4% less than SC 15-23
SC 15-23: 95.1
The 1900s had the highest v2 century average:
1700-1799: 76.2, 15.3% less than the 1900s
1800-1899: 72.7, 19.2% less than the 1900s
1900-1999: 90.0
One of the big reasons why CAGW and the ensuing nonsense has been bad for all of scientific inquiry.
Hi doc, long time no see.
Congratulations on the paper.
I would edit out the following lines:
“When you have 8770 years (878 data points) of data, the urge to look for cycles is overwhelming. ”
and
“On the other hand, the 87.6-year Gleissberg peak is sharp and prevalent in the whole series and in all three sub-intervals.”
‘87.6’ (10×87.6 is far too close to your 878 sampling points) is most likely an artefact of the Fourier mince-machine
First half 5×88.5
So is second half 5×87.7
Yes, that was a bit of a red flag for me too, though 10 is a binary division. I see no mention of a taper window being applied, so artifacts will be strong.
I will try to do some alternative spectral analysis and see what comes out.
That’s what PR and validation are all about.
I have done a chirp-z analysis on the autocorrelation function of SN using different window functions and find a clear peak at 87.8 years.
(Spectral analysis of the autocorrelation produces the power spectrum rather than a simple Fourier analysis.)
Kaiser-Bessel windows, extended-cosine windows, and no windowing at all clearly show this peak at the same value. It appears real and strong.
At longer periods: strongest at 207 years, also 355 a.
Longer periodicities at circa 500 a and 1600-1700 a.
Taking the first and last 500 data points, thus breaking the sub-multiple of 878, produces the same peak. It is only about half as strong in the later data but still stands out from the noise, and is very close to the same period.
My main worry with this is the way Wu et al. resampled the data at 10-year intervals and whether this has introduced aliasing with the circa 11y Hale cycle. That could produce aliasing of the order of centuries.
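For what it's worth, the folding arithmetic behind that worry is simple: a signal near 11 years sampled every 10 years aliases to roughly a century.

```python
# Aliasing arithmetic for the concern above: an ~11-year cycle sampled at
# 10-year intervals folds to an apparent period of about a century.
f_signal = 1.0 / 11.0   # ~11-year solar cycle, in cycles per year
f_sample = 1.0 / 10.0   # decadal sampling rate
f_alias = abs(f_signal - f_sample)
print(round(1.0 / f_alias))  # 110: apparent period in years
```

Whether decadal averaging (rather than point sampling) suppresses this in practice depends on how Wu et al. binned the data, which the arithmetic alone cannot settle.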
Does anyone have a link or the title of that paper?
Leif that is some really nice useful work for understanding the climate.
Imagine a straight line at 94 v2 SSN across Figures 3 & 4. Activity above that line means a warmer climate, and below it a colder climate.
What strikes me is the irregularity of the sunspot number time series, with significant long low periods, but with somewhat similar rates of increase and decrease throughout, indicating sporadic yet consistent solar processes.
Similar slope rates like in Figure 4 are also evident in ice core data.
Long-term solar irregularity prevents us from realistically being able to forecast climate change beyond a single solar cycle. All anyone can do beyond that is create what-if scenarios.
Leif, the link to the Excel sheet was incorrect. You had it as
https://leif.org/research/Nine-Millennia-SN-xls
when it is
https://leif.org/research/Nine-Millennia-SN.xls
I’ve updated the head post with the correct address. Many thanks for including the data.
w.
It seems that there should be some relevance to the claim that the work is supposed to disprove earlier claims that solar activity had increased in recent times.
The dots do not connect themselves (for me at least). Is the point intended to be that recent warming is not to be explained by increased solar activity?
I apologize if this question demonstrates my ignorance.
Leif, I agree with Vukcevic above. The “87.6” Fourier result is absolutely an artefact of the Fourier analysis.
Here is the CEEMD analysis of your data …
And here is the periodogram of the same data …
In neither case is there an 87.6-year cycle. It’s one of the problems with standard Fourier analysis: it only samples the signal at a relatively small number of points, so things get aliased to the nearest point.
w.
Willis,
According to the standard definition, a periodogram is nothing more than the absolute value squared of the Fourier transform of the signal. Hence the periodogram and the Fourier transform should show the same information, unless you care about the phase of the frequency components. Hence either you or Dr. Svalgaard have made an error in calculating the Fourier spectrum.
Percy, I don’t use the FFT (Fast Fourier Transform) that Dr. Svalgaard uses. I use a different type of Fourier transform. I invented it myself, and I called it the “Slow Fourier Transform”.
I later found out here thanks to Tamino that it was first used in the 1980’s. It’s called a “Date-Compensated Discrete Fourier Transform”, or DCDFT (Ferraz-Mello, S. 1981, Astron. J., 86, 619).
(Sometimes people ask me if it bothers me when I invent something and then find out later that someone invented it before me. Quite the opposite—it proves two things. First, it proves that my method is a correct one, and second, it proves that I actually understand what I’m doing, or I couldn’t have invented it … but I digress).
In any case, a regular FFT only samples the signal at a small number of periods. The DCDFT samples at as many periods as you’d like. As a result, it is not subject to the kind of error that caught Dr. Svalgaard above.
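A minimal sketch of that idea (evaluating power at arbitrary trial periods by least-squares fitting a sinusoid at each one) might look like this; it is a simplification for illustration, not Ferraz-Mello's actual DCDFT algorithm.

```python
# Evaluating spectral power on an arbitrary period grid by least-squares
# fitting a sine/cosine pair at each trial period. Synthetic decadal data
# with a pure 87.6-year signal; illustrative only, not the actual DCDFT.
import numpy as np

def power_at_period(t, y, period):
    """Squared amplitude of the best-fit sinusoid at the given period."""
    w = 2 * np.pi / period
    design = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef[0] ** 2 + coef[1] ** 2

t = np.arange(0, 8780, 10.0)               # decadal sampling
y = np.sin(2 * np.pi * t / 87.6)           # pure 87.6-year signal
trials = np.arange(80.0, 96.0, 0.1)        # far finer than the FFT bin grid
best = trials[np.argmax([power_at_period(t, y, p) for p in trials])]
print(round(best, 1))  # 87.6
```

Because the trial grid is as fine as you like, the recovered period is not forced onto the 8780/k bin structure of an FFT of the same record.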
Best regards,
w.
The Gleissberg Cycle is just another name for centennial solar minima, quoted as between 80 and 130 years. The long term astronomical mean is 107.9 years.
Ulric, I’m sorry, but anyone claiming that they know the length of the putative “Gleissberg cycle” to the nearest tenth of a year is blowing smoke …
w.
Also, Ulric, when we look at the long reconstruction, the so-called “Gleissberg Cycle” disappears … the analysis is below. Very little power around 100 years
w.
The following is power density (which comes from the transform of the auto-correlation, not straight Fourier analysis).
There is a clear and strong peak at 87.8y in the full dataset. It’s about half as strong in the later 5ka of data but still there and still centered on exactly the same frequency.
It can be seen on a higher-resolution plot to be a well-defined peak; if your analysis only has 80, 90, 100, you will not see how broad/fine the peak is.
There seems to be a consistent feature at 150 years too.
Willis, Ulric didn’t claim to “know the length of the putative “Gleissberg cycle” to the nearest tenth of a year.”
He just presents the numeric value of a computation.
Willis,
In the linked article you claim that your results are identical to those obtained from the fast Fourier transform. If that is indeed the case, then you should get identical results to Dr. Svalgaard if that is the method that he uses. So again the question would be why your results and his differ so significantly. Incidentally, if I use Matlab’s inbuilt periodogram function with a sampling window of length 512, I also find a strong periodicity at around 90 years, which does not appear in your periodograms but does in Dr. Svalgaard’s.
Thanks, Percy. Not sure what claim you are referring to.
The difference is that the FFT is strongly dependent on the sampling window length. It breaks the signal down into just a few orthogonal signals. It simply cannot detect anything in between those signals. So it will break a signal of length 512 down as follows (plus shorter periods, of course):
> round(512 / c(2:12), 0)
[1] 256 171 128 102 85 73 64 57 51 47 43
Note that you get better resolution at the shorter periods, and bad resolution at longer periods. So yes, you will indeed get a result at “around 90 years”, in fact at 85 years… but that is a result of the length of the signal.
If you want to get better resolution, particularly at longer periods, you need to pad your data with zeros … lots of zeros.
Finally, I window the signal using a Hamming filter … otherwise you get artifacts.
In fact, it is this lack of resolution, particularly at longer periods, that led me to invent the method I use. Yes, I could have just padded my data with zeros, but I wanted something that was equally spaced in period rather than equally spaced in frequency as FFT is …
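To make the resolution point concrete, here is a small synthetic demonstration of the zero-padding fix (a complex exponential is used so the spectrum has a single clean peak; this is an illustration, not Willis's actual code).

```python
# An FFT of a 512-sample record can only report periods of 512/k samples, so
# a true 90-sample period lands in the 85.3-sample bin; zero-padding
# interpolates the spectrum onto a much finer grid.
import numpy as np

n = 512
t = np.arange(n)
y = np.exp(2j * np.pi * t / 90.0)   # true period: 90 samples

def peak_period(y, nfft):
    spec = np.abs(np.fft.fft(y, n=nfft))[1 : nfft // 2]  # positive freqs, skip DC
    k = np.argmax(spec) + 1
    return nfft / k

print(round(peak_period(y, n), 1))       # 85.3: nearest unpadded bin (512/6)
print(round(peak_period(y, n * 16), 1))  # 90.0: padding recovers the true period
```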
Hope this helps,
w.
Willis,
In your original article you state:
“An alert reader, Michael Gordon, pointed out that I was doing a type Fast Fourier Transform (FFT) … and provided a link to Nick Stokes’ R code to verify that indeed, my results are identical to the periodogram of his Fast Fourier Transform. So, it turns out that what I’ve invented can be best described as the “Slow Fourier Transform”, since it does exactly what the FFT does”.
So again, if your method gives exactly the same answer as a normal Fourier transform, then your result and Dr. Svalgaard’s should agree. But they don’t, and hence either your method is different, in which case you are not calculating the periodogram of the data, or else one of you has made an error in the calculations.
107.9 years is the long term mean for the synodic cycles that order centennial solar minima.

What’s this?
What jumps out visibly is the LIA.
Actually, the LIA was already well established in 1600, but the sunspots were similar to those in the 20th century …
w.
It’s more the three “hard cores” of the LIA, 1430-1440, 1690-1700 and 1800-1820, that jump out visibly.
Yup. The depths of the three solar minima mentioned below, with the coldest decade of the 1690s associated with the low point of the Maunder.
John, how do you know that the “coldest decade” of the LIA was the 1690s? AFAIK, not many thermometers back then, and proxies are notoriously unreliable …
w.
I think the “1690s” is a little too precise. Dates of Frost Fairs in London indicate that, assuming (for argument’s sake and nothing more) that the frequency of Frost Fairs reflects a general level of coolness, the period from 1650 through 1700 was very cold in Britain with six Frost Fairs, and the last quarter of the 18th century coming second(?) with four. If the LIA followed RIA patterns, the 18th C pulse might have been the coldest. And, since the Delaware River was frozen over at least one winter during that same 18th C span, it seems possible that the LIA “coolness” really did extend beyond Europe. Sadly, as Willis notes – not many thermometers around.
Willis,
There were thermometers in the 1690s, although Fahrenheit’s scale wasn’t adopted as standard until 1714.
The CET, which starts in 1659, shows the 1690s as the coldest decade. But the coldest winter of the Maunder Minimum was the Great Frost of 1708-09.
John Tillman October 28, 2018 at 6:14 pm
Yes … which makes any claims of prior temperatures questionable. See below.
The problem is that the CET is a patchwork of records, particularly in the earlier years. As a result, temporal comparisons must be viewed with suspicion. Please don’t think that this is just my opinion … here’s Manley, the developer of the CET, on the subject:
Note that ALL readings before 1720 are from “extrapolation from the results of readings of highly imperfect instruments in uncertain exposures at a considerable distance, generally in south-east England”. In other words, he guessed at the CET mostly from a patchwork of readings around London, far from Central England. And some of them are just guessed at from “observations of wind and weather” …
Here’s more from Manley:
Temperatures estimated from diary entries, from “snowfall frequency”, from temperatures in Holland … here’s more on how the early “observations” are really estimations based on things happening at great remove (emphasis mine):
And although later observations are better, from beginning to end the CET is an average of varying numbers of stations in varying locations … and when you have that, you can’t compare different times in the record, because they are records from different thermometers in different areas, sometimes estimated from anecdotes of things like snowfall and personal diaries, and sometimes not even from Central England at all.
As a result, you simply cannot say that the 1690s were the “coldest decade” … the records we have don’t support that.
w.
Manley Paper
The 1690s were also the coldest decade of the Maunder in Scotland:
http://notrickszone.com/2018/10/07/scotland-800-year-reconstruction-shows-temperatures-were-as-warm-or-warmer-in-the-past/comment-page-1/
And in New England, although 1701-10 was also cold there:
https://muse.jhu.edu/article/566819/summary
The coldest decade early in the LIA was probably the 1430s, during the Spörer Minimum:
https://www.sciencedaily.com/releases/2016/12/161201115357.htm
From the year 1658. Let’s just say that the waterways between the Danish isles don’t freeze over like that anymore.
In the middle of December the weather shifted, turning into the coldest winter in memory. The seawater between the islands froze, making a ship-borne assault impossible. Engineer Erik Dahlberg was dispatched by the king to ascertain whether the ice would support the weight of the Swedish cavalry and artillery. Dahlberg reported that a crossing over the ice was feasible.
Early in the morning of 30 January 1658, the army was lined up to cross the Little Belt to reach Funen. The army consisted of about 9,000 cavalrymen and 3,000 foot soldiers. The ice warped under the weight of the soldiers; on occasions water reached up to the men’s knees. Close to the shore of Funen a skirmish broke out with about 3,000 Danish defenders, but these were brushed aside quickly and the army was safe on Funen.
Now investigations were made to find the best way over the ice that covered the Great Belt in order to reach Zealand. Again Erik Dahlberg led the investigation, and he advised taking the longer route via Langeland and Lolland rather than the more direct route across the Belt. The night of 5 February the king set off with the cavalry across the ice and safely reached Lolland later in the day. The infantry and the artillery followed the next day. Thus, on 8 February, the Swedish host was safely on Zealand, and on 15 February, after forced marches, it reached the outskirts of Copenhagen. The Danes, who had thought the Swedes would start their offensive in the spring at the earliest, panicked and yielded. Negotiations were started and on 26 February the Treaty of Roskilde was signed by the two parties.
https://en.wikipedia.org/wiki/March_Across_the_Belts
Troed,
Thanks.
While less well attested, the crossing of the Rhine at Mainz into Gaul by a barbarian horde on 31 December 406 is believed by many historians to have taken place over ice. The mixed lot of Vandals, Alans and Suebi then destroyed Roman cities, leading to the collapse of imperial power in northern Gaul.
This was early in the Dark Ages Cool Period, which followed the Roman Warm Period.
The worst year of this period was arguably AD 540, “when tree rings suggest greatly retarded growth, the sun appeared dimmed for more than a year, temperatures dropped in Ireland, Great Britain, Siberia, North and South America, fruit didn’t ripen, and snow fell in the summer in southern Europe (Baillie in Singer and Avery, 2007). Towards the end of the period, in AD 800, the Black Sea froze and in 829 the Nile River froze (Oliver, 1973)”.
(Courtesy of Dr. Don J. Easterbrook; edited)
The LIA is distinguished by a series of solar minima, with sunspot recoveries between them. As with centennial-scale warm periods, cool periods experience counter-trend cycles.
Some assign the Wolf Minimum to the Medieval WP as a counter-trend cycle, because it was followed by another pronounced warm cycle. But others date the LIA from its onset.
In any case, the LIA is marked by at least the Spörer, Maunder and Dalton Minima. Coming out of the solar lows can produce pronounced warming cycles, as during the early 18th century, during recovery from the Maunder.
Another note. Here are the actual yearly sunspots, the Gaussian average of the yearly sunspots, and the Svalgaard reconstruction.
You can see that the reconstruction error increases the further back in time that you go …
w.
The Holocene Climatic Optimum and Little Ice Age jump right out…

How does the Southern Hemisphere compare to the NH?
“How does the Southern Hemisphere compare to the NH?”
Most notable difference is no contiguous MWP ….
“this record reveals an extended cold period (1594–1677) in both hemispheres but no globally coherent warm phase during the pre-industrial (1000–1850) era. The current (post-1974) warm phase is the only period of the past millennium where both hemispheres are likely to have experienced contemporaneous warm extremes.”
http://sci-hub.tw/10.1038/NCLIMATE2174
Anthony,
Study after study has found a globally coherent Medieval WP.
https://wattsupwiththat.com/2016/01/09/evidence-of-the-medieval-warm-period-in-australia-new-zealand-and-oceania/
Same goes for South America.
https://www.thegwpf.com/medieval-warm-period-and-little-ice-age-in-south-america/
https://www.clim-past.net/9/307/2013/
https://www.sciencedirect.com/science/article/abs/pii/S0277379112001631
Besides of course the many sites all over the NH showing the same signal.
Thanks David.
Not really. The climatic optimum was earlier in the Holocene, partly beyond this reconstruction.
IMO, it’s visible in Fig. 4. It appears to end abruptly some 5 Ka, with a nose-dive in solar activity.
I’m trying to figure out how Leif got less than zero spots….LOL
Thanks for another useful and thought provoking article.
My only disappointment is that Leif has apparently succumbed to the new childish NASA trend of referring to observations as “messengers”, presumably referring to the new field of “Multi Messenger Astronomy”. Obviously use of the new buzz word and latest jargon ensures that the papers will be published and favourably regarded by the new generation of trendy graduates.
Why the need for this dumbing down? What was wrong with multiple “signals” or “frequencies”, etc.? I’m sure Leif has the intellect and experience to think up a more appropriate term.
So they can blame the messenger when the global warming goes pear-shaped.
The fun continues. Here are the CEEMD and periodogram analyses of the long Svalgaard reconstruction, 6755 BCE to present.
and
Once again, no putative 87.6-year “Gleissberg Cycle”. In fact, there is very little evidence of any strong cycles.
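For anyone who wants to poke at the spectral question themselves, here is a minimal sketch of a brute-force periodogram in plain Python. The data are synthetic (an assumed 87.6-year sine sampled every 10 years, standing in for the decadal reconstruction), so this only illustrates the mechanics, not the result:

```python
import math

def periodogram(values, dt, periods):
    """Naive Fourier periodogram: power of an evenly spaced series
    at each trial period (dt = sample spacing in years)."""
    n = len(values)
    mean = sum(values) / n
    centered = [v - mean for v in values]
    power = {}
    for p in periods:
        w = 2.0 * math.pi * dt / p  # angular step per sample
        c = sum(v * math.cos(w * i) for i, v in enumerate(centered))
        s = sum(v * math.sin(w * i) for i, v in enumerate(centered))
        power[p] = (c * c + s * s) / n
    return power

# Synthetic stand-in for a decadal sunspot series: constant background
# plus an 87.6-year oscillation, sampled every 10 years for 9000 years.
dt = 10.0
series = [50 + 20 * math.sin(2 * math.pi * (i * dt) / 87.6)
          for i in range(900)]

pw = periodogram(series, dt, list(range(30, 301, 2)))
best = max(pw, key=pw.get)
print(best)  # the trial period nearest 87.6 years carries the most power
```

Whether such a peak in the real series clears a significance threshold against red noise is the actual point of contention; the periodogram alone does not settle it.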
w.
I don’t know why people are obsessed with cycles. Solar evolution is linear, not cyclical, and each iteration is unique. Sunspots are but one element of an iteration, so unless one can reconstruct the other factors of each iteration process, you are not going to get any clear picture at all.
Besides, the longer we cling to this joke of a standard model (which is not actually a model, but a patchwork of science and gibberish) we won’t understand it either.
Same for earth’s climate evolution, which is why all that Paleo bunk from the Pages community is meaningless even if it was (it isn’t) accurate.
Indeed Mark, cycles start to sound like epicycles when people are obsessed with finding them. Willis, though, I’d like you to reiterate why the visible peak is not a putative 88-year cycle.
Artefact, why exactly? What is the limit of significance? Eyeballing doesn’t help here.
Maybe the obsession with cycles has something to do with the regularity of Earth’s planetary motions – from the daily spin on its axis to the annual tour around the sun?
Many humans seem to have a strong desire to find a reason for everything, & also love to predict what will happen next.
The obsession with cycles is because it offers the possibility of prediction. Saying “I have a chaotic system that I do not understand” does not get us far. That is not to say stable cycles do exist, but it is the reason for what you call an obsession with cycles.
Willis,
Below 1000 years, your CEEMD and Periodogram both show distinct peaks at roughly:
89 (Gleissberg), 104 (CEEMD only), 130 (periodogram only), 150, 210 (de Vries), 230 (periodogram only), 355, 440, 505, 700, and 900-970 (Eddy) years. There is even a hint of a peak at 2300 years (Hallstatt).
This result agrees reasonably well with periodicities that Abreu et al (2012) found in their spectral analysis of their solar modulation potential. They found periodicities at:
88 (Gleissberg), 104, 130, 150, 209, 232, 351, 445, and 505 years.
Their wavelet analysis shows that these periods are not stationary (i.e. they fade in and out with time). For example, the 208-year period is prominent during periods associated with grand solar minima, which occur roughly once every 2000-3000 years.
I would say that Leif’s data and your analysis confirms the earlier results of Abreu et al. (2012).
Power density spectrum from chirp-z analysis of the auto-correlation function gives a fairly strong, consistent and well-defined peak at 87.8 years.
Though I am not sure whether anything else is strong enough to suggest cycles in a physically causal sense. Everything has “cycles” in a periodic analysis.
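A bare-bones version of the autocorrelation approach can be sketched in plain Python. The input here is a noisy synthetic ~87.8-year sine with decadal sampling (illustrative only, not the actual reconstruction), and the period estimate is read off the first off-zero maximum of the ACF:

```python
import math
import random

def autocorr(x, max_lag):
    """Normalized autocorrelation function of a series, lags 0..max_lag."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    var = sum(v * v for v in xc)
    return [sum(xc[i] * xc[i + k] for i in range(n - k)) / var
            for k in range(max_lag + 1)]

# Noisy ~87.8-year oscillation sampled every 10 years for 9000 years.
random.seed(1)
dt = 10.0
x = [math.sin(2 * math.pi * i * dt / 87.8) + random.gauss(0, 0.3)
     for i in range(900)]

ac = autocorr(x, 30)
# The first local maximum of the ACF away from lag 0 sits near one period.
peak_lag = max(range(5, 15), key=lambda k: ac[k])
print(peak_lag * dt)  # ~90 years: 87.8 rounded to the 10-year sample grid
```

A chirp-z or zero-padded spectrum refines the estimate below the 10-year sample spacing, which is presumably how a figure as precise as 87.8 years was obtained.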
That was sort of my conclusion as well.
On the 22-year Hale ‘cycle’:
Even the 11-year Schwabe cycle is not a ‘cycle’ but an ‘eruption’. Around the statistical solar minimum two eruptions run in parallel: one at low latitudes and one at high to mid-latitudes. The length of an eruption is typically 17 or so years. Each eruption leaves behind the debris from which the next eruption will develop. No cycles here.
Slides 14 & 15 show the two parallel eruptions very nicely:
https://leif.org/research/The-Mysterious-Polar-Fields.pdf
Enjoying your work, especially figure 8.8 for those cycling enthusiasts.
The length of the event has little to do with the periodicity; I find this line of argument confused. If I have a Hall-effect pickup on the crank of my car engine, the pulse will be very short, but that tells me nothing about the repetition cycle, which gives the RPM. If the Hall pulse were 2/3 of the repeat interval, the RPM would still be the same, and it would still be a cycle.
It is interesting to point out the nature and duration of the eruption but this does not deny the repetitive nature of these events.
A lot of the problem is how the cycle lengths are determined from poorly defined minima.
There is not one peak in a Schwabe cycle but two. Sometimes they are barely distinguishable, sometimes clearly separated. A better characterisation of the cycle would help understand and measure it. The cross-over of two opposing tails is not a very useful criterion for Schwabe cycle length.
Greg,
I can’t help but feel that you guys are just re-inventing the wheel.
Go and have a look at Abreu et al. 2012 (Ref: Abreu, J.A., Beer, J., Ferriz-Mas, A., McCracken, K.G. and Steinhilber, F., 2012, Is there a planetary influence on solar activity?, Astron. & Astrophys., 548, A88).
These authors do a full wavelet analysis [which I believe is the best way to investigate this data] and they clearly show that most of the periods that you find in your analysis are indeed present; however, they are non-stationary, i.e. they fade in and out over time. Some of the cycles fade in and out in a systematic way, e.g. the 210-year de Vries cycle is only present during periods of grand solar minima.
Ruling out cyclical behavior simply because it is non-stationary is to me, the hallmark of a scientific hack.
Greg,
I know that you are fully aware of most of the following. I am just thinking out loud in order to clarify my thoughts on this topic.
Long-term non-stationary cycles can be produced if you have fundamental (short-term) cyclical forcing terms with variable intensity. The varying intensity of the fundamental cycle may be intrinsic (e.g. the cyclic forcing may randomly/periodically synchronize itself with a natural cycle within the system under study) or it can vary in intensity because the fundamental forcing cycle interacts with other fundamental forcing cycles that could be present.
Here is a bit of speculation that I think will make my point.
First, let’s be very conservative and limit our analysis to cycles with periods that are less than roughly 1/20th of the data length, i.e. less than ~500 years. Using this limit means that we can get a reasonable handle on a non-stationary 500-year cycle if it is present for about 50 % of the length of the data record.
Second, Willis’ CEEMD and Periodogram show the following distinct peaks, with periods of 89, 104, 130, 150, 210, 230, 355, 440, and 505 years.
[please bear with me in the following analysis – hopefully, it will make some sense at the end]
1. Imagine you have a 239-year cycle that starts at -31 years.
2. Step forward by 59.75 years from the starting point at -31 years.
3. End the pattern after four steps of 59.75 years (= 208.0 years).
4. Go back to step 1.
Start —> -31.0 years
step 1 —> 28.75 years
step 2 —> 88.50 years
step 3 —> 148.25 years
step 4 —> 208.0 years
Start —> 177.0 years (= 208.0 – 31.0 years)
Step 1 —> 236.75 years
Step 2 —> 296.50 years
Step 3 —> 356.25 years
Step 4 —> 416.00 years
Start —> 385.0 years (= 416.0 – 31.0 years)
Step 1 —> 444.75 years
Step 2 —> 504.50 years etc. etc.
Note that this 208-year cycle could account for the 89 years, 104 years (= 208.0/2 years) 150 years, 210 years, 230 years, 355 years, 440 years, and 505 years cycles found by Willis.
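The stepping arithmetic above is easy to check mechanically. Here is a short Python sketch of the scheme as described (three sequences starting at -31.0, 177.0 and 385.0 years, four steps of 59.75 years each); the ±8-year matching tolerance is my own choice:

```python
def step_sequence(start, step=59.75, nsteps=4):
    """One pass of the scheme: start plus nsteps steps of 59.75 years."""
    return [round(start + k * step, 2) for k in range(nsteps + 1)]

# Each new sequence starts 31.0 years before the end of the previous one.
starts = [-31.0]
for _ in range(2):
    starts.append(step_sequence(starts[-1])[-1] - 31.0)

values = sorted(v for s in starts for v in step_sequence(s) if v > 0)

peaks = [89, 104, 130, 150, 210, 230, 355, 440, 505]  # quoted peak list
matches = [p for p in peaks if any(abs(p - v) <= 8 for v in values)]
print(matches)  # 104 and 130 are NOT produced by the stepping itself
```

So the stepping reproduces 89, 150, 210, 230, 355, 440 and 505 years, while 104 only appears via the separate 208/2 argument and 130 is not matched at all.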
I know that many here will be ROTFL at this point, but I have claimed that the level of solar activity is modulated by the gravitational torques of (primarily) Jupiter acting upon the tidal bulges near the base of the convective layers of the Sun that are produced by the periodic alignments of the Earth/Moon system with Venus. The hypothesis is called the Venus-Earth-Moon-Jupiter Tidal Torquing model.
This simple model would explain:
a) the 239-year period – which is associated with the transit cycle of Earth and Venus.
and
b) the 31.0, 28.75 and 59.75 (= 28.75 + 31.0) year periods – which are associated with the alignments of the Perigean Full/New Moon tidal cycles with the Earth’s orbital period (which produces the seasons).
A lot of this is explained in my most recent publication at:
Ian Robert George Wilson and Nikolay S Sidorenkov, A Luni-Solar Connection to Weather and Climate I: Centennial Times Scales, J Earth Sci Clim Change 2018, 9:2
Hi Ian. I don’t follow what your “steps” are about here.
J-V seems a reasonable thing to investigate, JVE at a stretch; with JVEM I suspect you’re getting into over-fitting, and any result is spurious coincidence (aka numerology).
One thing that has struck me as odd is that the various basic lunar periods are close to the solar equatorial rotation, though that has to be Sun -> Moon influence or common cause if anything. I doubt the moon is doing much to the Sun.
One thing I have never managed to fathom is that the mean of the frequencies of the ~18.6-year lunar nodal precession (which paces the Saros eclipse cycle) and the ~8.85-year lunar apsidal precession is very, very close to the orbital frequency of Jupiter. This suggests to me that there is some kind of orbital connection between Jupiter and lunar precession, but I have never managed to untangle what is going on.
Greg,
A whole section of my previous post has been deleted for some reason. The section missing should read:
Second, Willis’ CEEMD and Periodogram show the following distinct peaks with periods
89, 104, 130, 150, 210, 230, 355, 440, and 505.
[please bear with me in the following analysis – hopefully, it will make some sense at the end]
1. Imagine you have a 239-year cycle that starts at -31 years.
2. Step forward by 59.75 years from the starting point at -31 years.
3. End the pattern after four steps of 59.75 years (= 208.0 years)
4. Go back to step 1.
Start —> -31.0 years
step 1 —> 28.75 years
step 2 —> 88.50 years
step 3 —> 148.25 years
step 4 —> 208.0 years
Start —> 177.0 years (= 208.0 – 31.0 years)
Step 1 —> 236.75 years
Step 2 —> 296.50 years
Step 3 —> 356.25 years
Step 4 —> 416.00 years
Start —> 385.0 years (= 416.0 – 31.0 years)
Step 1 —> 444.75 years
Step 2 —> 504.50 years etc. etc.
——————
I have great admiration for your no-nonsense scientific reasoning. It is like a breath of fresh air at times. However, could you read the following paper by Frank Stefani et al. 2018 about the Tayler-Spruit model of the Solar Dynamo. It is based upon my VEJ tidal-torquing model for driving solar activity.
On the Synchronizability of Tayler–Spruit and Babcock–Leighton Type Dynamos
Stefani, F., Giesecke, A., Weber, N. et al. Sol Phys (2018) 293: 12. https://doi.org/10.1007/s11207-017-1232-y
ABSTRACT
The solar cycle appears to be remarkably synchronized with the gravitational torques exerted by the tidally dominant planets Venus, Earth and Jupiter. Recently, a possible synchronization mechanism was proposed that relies on the intrinsic helicity oscillation of the current-driven Tayler instability which can be stoked by tidal-like perturbations with a period of 11.07 years. Inserted into a simple α – Ω dynamo model these resonantly excited helicity oscillations led to a 22.14 years dynamo cycle. Here, we assess various alternative mechanisms of synchronization. Specifically we study a simple time-delay model of Babcock–Leighton type dynamos and ask whether periodic changes of either the minimal amplitude for rising toroidal flux tubes or the Ω effect could eventually lead to synchronization. In contrast to the easy and robust synchronizability of Tayler–Spruit dynamo models, our answer for those Babcock–Leighton type models is less propitious.
I guess “proxy” although accurate, is out of vogue?
Based on your evidence I would conclude that Solar activity is a complex non-linear phenomenon. Predicting it might be impossible.
Connecting space weather to weather on Earth is a scientific question that could be more interesting. How often will a solar storm hit the Earth, and what are the consequences?
People here still go looking for cycles in chaos. It is a pointless and meaningless exercise.
One assumes the sun’s interior is a boiling turbulent mass of plasma and that it is intensely chaotic.
Prediction of chaotic systems cannot be done with any degree of accuracy, and they do not display cyclic behaviour except by accident, so to speak.
Fact Check: True.
w.
Incompletely or insufficiently characterized and unwieldy.
There may be observable or inferred, even reproducible, cycles in limited frames of reference; but, generally, we have to acknowledge that accuracy is inversely proportional to the product of time and space offsets, forward and backward, and all around. We have difficulty predicting a human life, other than conception, death, and several intermediate milestones. We have barely made near-observation at the edge of our solar system.
In chaos theory, you look for strange attractors rather than cycles. For example, the Lorenz attractor is derived from the Lorenz equations for atmospheric convection. Applying this to the sun, one would formulate “solar equations” and look for a “solar attractor.” If the equations obey some conservation laws, there are corresponding mathematical symmetries to be discovered; this is Noether’s theorem. It is how particle physicists built the Standard Model, by searching for mathematical symmetries.
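To make the strange-attractor point concrete, here is a minimal Lorenz-system integration in plain Python (forward Euler, so only qualitative). Two runs starting a billionth apart end up macroscopically different, which is the practical obstacle to long-range prediction of any chaotic system, solar or otherwise:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)  # perturbed by one part in a billion
for _ in range(5000):       # integrate 50 time units
    a, b = lorenz_step(a), lorenz_step(b)

separation = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
print(separation)  # vastly larger than the initial 1e-9 offset
```

Both trajectories stay bounded on the attractor; it is only their mutual distance that explodes, so the equations can be known exactly while the forecast is still lost.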
My guess is that the Navier-Stokes equations combined with Fermi-Dirac statistics could describe the plasma flow in the sun. Sometimes there’s a subtle connection between seemingly unrelated fields. For example, I’m applying Bose-Einstein statistics to black holes; I think it could explain or modify Hawking radiation.
As I have stated many times (I did this dance with Nick Stokes, who was playing with them): if you make radiative transfer the main driver, then you have quantum chaos, and strange attractors are not relevant. Can you people read?
https://en.wikipedia.org/wiki/Quantum_chaos
So if you want to introduce orbital periods or oscillations into solar energy, can you please at least refer to current theories, not deceased classical physics, which we already know is wrong. If you want to use gravitational energy, which obeys classical laws, then fine: you can use classical theory. But you cannot mix and match. Choose the energy, which determines which theory covers it, and apply it.
Magnetohydrodynamics is a classical theory. Fermi-Dirac statistics approaches the classical Maxwell-Boltzmann statistics in the limit of high temperature and low particle density. It’s chaos theory in classical physics. Once we understand this and apply it to solar physics, then you can be more ambitious and try reformulating it in terms of quantum chaos.
However, we do successfully forecast and model weather, which means the underlying processes are not opaque or beyond useful projection, despite chaos. Precise sampling is of course the key to projecting for a few days, at a level of detail that is clearly usable.
Weather forecasting is like drawing a straight line with a French Curve. One can do so for a short interval, but. . .
Other than agreeing with Leo Smith, I must ask a question:
“One assumes the sun’s interior is a boiling turbulent mass of plasma”…
Does not boiling imply the requirement that there be condensed matter and an attendant phase change?
One can see that the prolonged solar minima are still present, and with them the climate correlations. One can see the LIA corresponding to the low solar activity in this latest revision.
This keeps the basic premises intact, but what really matters is where solar activity goes from here. I maintain that the sun entered a period of inactivity post-2005, as opposed to the very active period prior to that time.
This is not the first nor will it be the last revision.