Guest post by Alec Rawls
Study of the sun-climate link was energized in 1991 by Friis-Christensen and Lassen, who showed a strong correlation between solar-cycle length and global temperature:
This evidence that much of 20th century warming might be explained by solar activity was a thorn in the side of the newly powerful CO2 alarmists, who blamed recent warming on human burning of fossil fuels. That may be why Lassen and Thejll were quick to offer an update as soon as the 1997-98 El Nino made it look as if temperatures were suddenly skyrocketing:
The rapid temperature rise recently seems to call for a quantitative revisit of the solar activity-air temperature association …
We conclude that since around 1990 the type of Solar forcing that is described by the solar cycle length model no longer dominates the long-term variation of the Northern hemisphere land air temperature.
In other words, there was now too much warming to account for by solar cycle length, so some other factor, such as CO2, had to be driving the most recent warming. Of course everyone knew that the 1998 warming had actually been caused by ocean oscillations. Even lay people knew it. (El Nino storm tracks were all over the news for six months here in California.)
When Lassen was writing his update in mid-’99, temperatures had already dropped back to 1990 levels. His 8-year update was outdated before it was published. Twelve years later, the 2010 El Nino year shows the same average temperature as the ’98 El Nino year, and if post-El Nino temperatures continue to fall off the way they did in ’99, we’ll be back to 1990 temperatures by mid-2011. Isn’t it about time Friis-Christensen, Lassen and Thejll issued another update? Do they still think there has been too much recent warming to be accounted for by solar activity?
The most important update may be the discovery that, where Lassen and his colleagues found a correlation between the length of a solar-cycle and temperatures over that cycle, others have been finding a much stronger correlation to temperatures over the next cycle (reported at WUWT this summer by David Archibald).
This further correlation has the advantage of allowing us to make projections. As Archibald deciphers Solheim’s Norwegian:
since the period length of previous cycle (no 23) is at least 3 years longer than for cycle no 22, the temperature is expected to decrease by 0.6 – 1.8 degrees over the following 10-12 years.
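(A rough consistency check on those numbers, my arithmetic rather than Solheim’s: the relation has the form ΔT ≈ −k · ΔL, where ΔL is the increase in the length of the previous cycle and k is the regression sensitivity. With ΔL ≈ 3 years and k of roughly 0.2–0.6 °C per year of cycle length, the projected cooling is 0.6–1.8 °C, matching the quoted range.)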
Check out this alarming graphic from Stephen Strum of Frontier Weather Inc:
The snowed-in Danes might like to see these projections before they bet the rest of their climate eggs on a dangerous war against CO2.
From sins of omission to sins of commission
In 2007, solar scientist Mike Lockwood told the press about some findings he and Claus Frohlich had just published:
In 1985, the Sun did a U-turn in every respect. It no longer went in the right direction to contribute to global warming. We think it’s almost completely conclusive proof that the Sun does not account for the recent increases in global warming.
Actually, solar cycle 22, which began in 1986, was one of the most intense on record (part of the 20th century “grand maximum” that was the most active sun of the last 11 thousand years), and by almost every measure it was more intense than solar cycle 21. It had about the same sunspot numbers as cycle 21 (Hathaway 2006):
Cycle 22 ran more solar flux than cycle 21 (via Nir Shaviv):
Cycle 22 was shorter than cycle 21 (from Joseph D’Aleo):
Perhaps most important is solar activity as measured (inversely) by the cosmic ray flux (which many think is the mechanism by which solar activity drives climate). Here cycle 22 is THE most intense in the 60-year record, stronger even than cycle 19, the sunspot number king. From the Astronomical Society of Australia:
Some “U-turn in every respect.”
If Lockwood and Frohlich simply wanted to argue that the peak of the modern maximum of solar activity was between solar cycles 21 and 22 it would be unobjectionable. What difference does it make exactly when the peak was reached? But this is exactly where their real misdirection comes in. They claim that the peak of solar activity marks the point where any solar-climate effect should move from a warming to a cooling direction. Here is the abstract from their 2007 Royal Society article:
Abstract There is considerable evidence for solar influence on the Earth’s pre-industrial climate and the Sun may well have been a factor in post-industrial climate change in the first half of the last century. Here we show that over the past 20 years, all the trends in the Sun that could have had an influence on the Earth’s climate have been in the opposite direction to that required to explain the observed rise in global mean temperatures.
In order to assert the need for some other explanation for recent warming (CO2), they are claiming that near-peak levels of solar activity cannot have a warming effect once they are past the peak of the trend—that it is not the level of solar activity that causes warming or cooling, but the change in the level—which is absurd.
Ken Gregory has the most precise answer to this foolishness. His “climate smoothing” graphic shows how the temperature of a heat sink actually responds to a fall-off in forcing:
“Note that the temperature continues to rise for several years after the Sun’s forcing starts to decrease.”
Gregory’s numbers here are arbitrary. It could be many years before a fall-off in forcing causes temperatures to start falling. In the case of solar cycle 22—where, if solar forcing was actually past its peak, it had only fallen off a tiny bit—the only way temperature would not keep rising over the whole solar cycle is if global temperature had already equilibrated to peak solar forcing, which Lockwood and Frohlich make no argument for.
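To make the point concrete, here is a minimal sketch of a first-order heat-sink response (illustrative numbers only; this is not Gregory's model, and the parameters are invented): the forcing ramps up for thirty years, then declines slowly past its peak, yet the simulated temperature keeps rising until the forcing falls back below the level the system has so far warmed to.

    # Minimal sketch of a first-order "heat sink" response (illustrative numbers only).
    # The forcing F rises for 30 years, then declines slowly; the temperature T keeps
    # rising after the forcing peaks, until F falls back below lambda * T.
    import numpy as np

    C = 8.0      # effective heat capacity (sets the lag), arbitrary
    lam = 1.0    # feedback parameter, arbitrary
    dt = 0.01    # time step, years

    t = np.arange(0, 60, dt)
    F = np.where(t < 30, t / 30.0, 1.0 - 0.01 * (t - 30))   # ramp up, then slow decline

    T = np.zeros_like(t)
    for i in range(1, len(t)):
        T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C

    print("forcing peaks at year %.1f" % t[np.argmax(F)])      # year 30
    print("temperature peaks at year %.1f" % t[np.argmax(T)])  # roughly a decade later

With these made-up numbers the temperature does not peak until roughly a decade after the forcing does, which is exactly the lag Gregory's graphic illustrates.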
The obvious interpretation of the data is that we never did reach equilibrium temperatures, allowing grand maximum levels of solar activity to continue to warm the planet until the sun suddenly went quiet. Now there’s an update for Lockwood and Frohlich. How about telling the public when solar activity really did do a U-turn (October 2005)?
Usoskin, Benestad, and a host of other solar scientists also mistakenly assume that temperature is driven by trend instead of level
Maybe it is because so much of the evidence for a sun-climate link comes from correlation studies, which look for contemporaneous changes in solar activity and temperature. Surely the scientists who are doing these studies all understand that there is no possible mechanism by which the rate of change in solar activity can itself drive temperature. If temperature changes when solar activity changes, it is because the new LEVEL of solar activity has a warming or cooling effect.
Still, a remarkable number of these scientists say things like this (from Usoskin et al. 2005):
The long term trends in solar data and in northern hemisphere temperatures have a correlation coefficient of about 0.7–0.8 at a 94%–98% confidence level. …
… Note that the most recent warming, since around 1975, has not been considered in the above correlations. During these last 30 years the total solar irradiance, solar UV irradiance and cosmic ray flux has not shown any significant secular trend, so that at least this most recent warming episode must have another source.
Set aside the other problems with Usoskin’s study. (The temperature record he compared his solar data to is Michael Mann’s “hockey stick.”) How can he claim overwhelming evidence for a sun-climate link, while simultaneously insisting that steady peak levels of solar activity can’t create warming? If steady peak levels coincide with warming, it supposedly means the sun-climate link is now broken, so warming must be due to some other cause, like CO2.
It is hard to believe that scientists could make such a basic mistake, and Usoskin et al. certainly have powerful incentive to play dumb: to pretend that their correlation studies are finding physical mechanisms by which it is changes in the level of solar activity, rather than the levels themselves, that drive temperature. Just elide this important little nuance and presto, modern warming gets misattributed to CO2, allowing these researchers to stay on the good side of the CO2 alarmists who control their funding. Still, the old adage is often right: never attribute to bad motives what can just as well be explained by simple error.
And of course there can be both.
RealClimate exchange on trend vs. level confusion
Finally we arrive at the beginning, for me anyway. I first came across trend-level confusion five years ago at RealClimate. Rasmus Benestad was claiming that, because post-1960s levels of Galactic Cosmic Radiation have not been trending downwards, GCR cannot be the cause of post-’60s warming.
But solar activity has been well above historical norms since the ’40s. It doesn’t matter what the trend is. The solar wind is up. According to the GCR-cloud theory, that blows away the GCR, which blows away the clouds, creating warming. The solar wind doesn’t have to KEEP going up. It is the LEVEL that matters, not the trend. Holy cow. Benestad was looking at the wrong derivative (one instead of zero).
A few months later I took an opportunity to state my rebuttal as politely as possible, which elicited a response from Gavin Schmidt. Here is our 2005 exchange:
Me: Nice post, but the conclusion: “… solar activity has not increased since the 1950s and is therefore unlikely to be able to explain the recent warming,” would seem to be a non-sequitur.
What matters is not the trend in solar activity but the level. It does not have to KEEP going up to be a possible cause of warming. It just has to be high, and it has been since the forties.
Presumably you are looking at the modest drop in temperature in the fifties and sixties as inconsistent with a simple solar warming explanation, but it doesn’t have to be simple. Earth has heat sinks that could lead to measured effects being delayed, and other forcings may also be involved. The best evidence for causality would seem to be the long term correlations between solar activity and temperature change. Despite the differences between the different proxies for solar activity, isn’t the overall picture one of long term correlation to temperature?
[Response: You are correct in that you would expect a lag, however, the response to an increase to a steady level of forcing is a lagged increase in temperature and then an asymptotic relaxation to the eventual equilibrium. This is not what is seen. In fact, the rate of temperature increase is rising, and that is only compatible with a continuing increase in the forcing, i.e. from greenhouse gases. – gavin]
Gavin admits here that it’s the level of solar activity, not the trend in solar activity, that drives temperature. He’s just assuming that grand maximum levels of solar forcing should have brought the planet close to equilibrium temperature before post-’80s warming hit, but that assumption is completely unwarranted. If solar activity is driving climate (the hypothetical that Schmidt is analyzing), we know that it can push temperatures a lot higher than they are today. Surely Gavin knows about the Viking settlement of Greenland.
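The disagreement is easier to see in the standard zero-dimensional energy-balance form (a textbook sketch, not a formula from the exchange itself):

    C · dT/dt = F(t) − λ · T(t)

Temperature rises whenever the forcing level F exceeds λT, the level the system has so far equilibrated to; the trend in F does not appear at all. Under a steady F the warming rate decays toward zero as T approaches the equilibrium F/λ, which is the "asymptotic relaxation" Schmidt describes, so the whole argument turns on whether the climate was anywhere near that equilibrium when solar activity plateaued.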
The rapid warming in the late ’90s could easily have been caused by the monster solar cycle 22, and there is no reason to think that another big cycle wouldn’t have brought more of the same. Two or three more cycle 22s and we might have been hauling out the longships, which would be great. No one has ever suggested that natural warming is anything but benign. Natural cooling bad, natural warming good. But alas, a longer grand maximum was not to be.
Gavin’s admission that it is level not trend that drives temperature change is important because ALL of the alarmist solar scientists are making the trend-level mistake. If they would admit that the correct framework is to look at the level of forcing and the lapse to equilibrium then they would be forced to look at the actual mechanisms of forcing and equilibration, instead of ignoring key forcings on the pretense that steady peak levels of forcing cannot cause warming.
That’s the big update that all of our solar scientists need to make. They need to stop tolerating this crazy charade that allows the CO2 alarmists to ignore the impact of decades of grand maximum solar activity and misattribute the resulting warming to fossil fuel burning. It is a scientific fraud of the most disastrous proportions, giving the eco-lunatics the excuse they need to unplug the modern world.

oneuniverse says:
January 10, 2011 at 3:06 pm
I’ll have a look tomorrow
I’m travelling, so have not had time to look at anything. In the next couple of days I may catch up. Short comment: the solar cycle maxima are not important for this, only the minima.
oneuniverse says:
January 10, 2011 at 3:06 pm
I’ll have a look tomorrow
The calibration of the GCR flux is itself in a flux with conflicting views. My main argument is that since solar activity is back down to the level of 1900s, the GCR flux should be as well. Unless you can come up with viable mechanism that would make it different.
Leif: Short comment: the solar cycle maxima are not important for this, only the minima.
Higher sunspot numbers tend to accompany higher HMF. Since the S&C argument (quoted above) concerns HMF over 1933-51, surely low & high (all) values in the period should be considered. The sunspots predict an increasing HMF in that period, in agreement with the Neher data. Your own HMF reconstruction has increasing maxima during 1930-1950 (with strongest maximum of the series to follow).
Leif: My main argument is that since solar activity is back down to the level of 1900s, the GCR flux should be as well.
A disagreement of McCracken’s reconstruction with your HMF reconstruction from geomagnetic data isn’t evidence that the Neher data is wrong. They’re not even reconstructing the same thing (cosmogenic isotope production in the atmosphere vs HMF).
Leif: Unless you can come up with viable mechanism that would make it different.
Unless you can come up with a reason why the Neher ionisation measurements, which agree with the independent LPI measurements in their 13-year overlap, are not valid, then the Neher data should stand.
oneuniverse says:
January 13, 2011 at 1:22 pm
A disagreement of McCracken’s reconstruction with your HMF reconstruction from geomagnetic data isn’t evidence that the Neher data is wrong. They’re not even reconstructing the same thing (cosmogenic isotope production in the atmosphere vs HMF).
The Neher data have no meaning in themselves. Only when they are used to extend the modern time series back is the Neher data of value. McCracken saw this clearly and reconstructed HMF. The HMF is a measure of solar activity, so we get a time series of that constructed from 10Be. This is what we are after, as the original topic was whether recent activity is at a 10,000 year high. Who cares what the isotope production was if you cannot connect that to solar activity.
Leif: The calibration of the GCR flux is itself in a flux with conflicting views.
Measurements from ionisation chambers were being carried out world-wide (and hence by many different experimenters) by 1935, and were used to find the ‘equatorial dip’. If there were serious calibration problems, I expect that they would’ve been detected.
oneuniverse says:
January 13, 2011 at 1:22 pm
Unless you can come up with a reason why the Neher ionisation measurements, which agree with the independent LPI measurements in their 13-year overlap, are not valid, then the Neher data should stand.
There are many reasons, all centered on comparison with other solar indices which do not show any discontinuity around 1948. Figure 15 of http://www.leif.org/research/The%20Open%20Flux%20Has%20Been%20Constant%20Since%201840s%20(SHINE2007).pdf illustrates this well. It shows the solar cycle variation of cosmic rays since the 1930s. At all minima the intensity reverts to about the same level [reflecting the lack of solar modulation at minimum], except for the minima 1933 and 1945, where the intensity is about 10-15% higher, on par with the solar cycle variation itself. As all other indicators show those two minima not to be abnormal, the suspicion falls on the ion chamber calibration. You are welcome to believe that only the GCRs had this discontinuity and none of the other solar indicators had, but to me that is special pleading and is anyway unexplained. So, my statement is a bit of uniformitarianism [same laws as now governing the past].
Leif: The HMF is a measure of solar activity, so we get a time series of that constructed from 10Be.
10Be is primarily, in its role as a space proxy, a record of cosmic ray interactions in the atmosphere.
Leif: Who cares what the isotope production was if you cannot connect that solar activity.
The solar signal is present in the 10Be and 14C records, as a modulator of GCR flux in the local solar environment. At the same time, secular changes of the GCR flux in the LIS cannot be ruled out (nor can secular variations of solar activity) – the cosmogenic record indicates a greater range than experienced in the 20th c. This introduces an uncertainty into historical reconstructions of solar (not GCR) activity derived from these proxies. They are more faithful trackers of GCR activity in the atmosphere.
oneuniverse says:
January 13, 2011 at 2:44 pm
This introduces an uncertainty into historical reconstructions of solar (not GCR) activity derived from these proxies. They are more faithful trackers of GCR activity in the atmosphere.
For the question of whether recent solar activity is the highest in the past 10,000 years, the GCR activity in the atmosphere [and what 10Be measures is also deposition, i.e. climate dependent] is, I agree, so fraught with uncertainty as a tracker of solar activity that we cannot attach much significance to it and, consequently, should not claim that it is in strong support of the ‘highest-ever’ claim.
Even so, the 10-15% discontinuity is unexplained and at variance with current understanding [that may be wrong, of course] and hence must stand as an anomaly, invalidating the now anomalous record as a reliable indicator of even GCR activity.
oneuniverse says:
January 13, 2011 at 2:10 pm
Measurements from ionisation chambers were being carried out world-wide (and hence by many different experimenters) by 1935, and were used to find the ‘equatorial dip’. If there were serious calibration problems, I expect that they would’ve been detected.
It was well-known at the time that the measurements were not ‘absolute’ measurements, but that each counter had its own [essentially unknown] calibration. Neher’s balloon data were supposed to provide the absolute values on which to base the calibration of all those uncalibrated ion chambers. Clearly, this hope has not been fulfilled.
Leif: As all other indicators show those two minima not to be abnormal, the suspicion falls on the ion chamber calibration. You are welcome to believe that only the GCRs had this discontinuity and none of the other solar indicators had, but to me that is special pleading and is anyway unexplained.
GCR flux is a measure of GCR flux – it’s not “special pleading” for me to accept the measurements as they have been written up, without you providing a substantial reason why not, which you haven’t so far. There’s evidence that by 1935 these instruments had no serious calibration problems (see the 1936 review in Physical Review) – the global ‘equatorial dip’ result still stands.
You are basing your argument on your S&C reconstructions of HMF (and I think one or two others). Most reconstructions of solar activity, e.g. Lockwood (see for example 2002), Solanki, Muscheler, Lean, Bard, Hoyt, Fligge and co-authors, all show secular increases during 1931-1955, in agreement with the Neher data.
Agreed, this was a known problem at the time, affecting comparison between groups, which is why relative measurements were considered more reliable (if less useful).
However, the secular change during 1933-1957 (as I should’ve written in my last post, not 1931-1955) would have been recorded in the relative Neher record, so your criticism doesn’t find its stated mark.
By the way, absolute measurements were not “essentially unknown”, as you put it (particle beams were available for calibration) , but had uncertainties.
oneuniverse says:
January 13, 2011 at 3:41 pm
There’s evidence that by 1935 these instruments had no serious calibration problems (see the 1936 review in Physical Review) – the global ‘equatorial dip’ result still stands.
The ion chambers did not measure the absolute flux, thus were not calibrated at all. So in a sense you are correct that there were no calibration problems, because there was no calibration.
Most reconstructions of solar activity eg. Lockwood (see for example 2002), Solanki, Muscheler, Lean, Bard, Hoyt, Fligge and co-authors, all show secular increases during 1931-1955, in agreement with the Neher data.
All of these have been superseded by newer reconstructions. It is only in the last 4 years that the community has finally figured out how to do the reconstruction correctly. ‘Secular increase’ over 24 years hardly qualifies as ‘secular’. And in any event, there is now general acceptance that the HMF and solar activity now are where they were 107 years ago, so for all we know GCR intensity should also be, and the recent attempts to use Neher’s data would indicate that it should not be. Solar activity and HMF 1933 and 1945 were higher than now, so GCR intensity should not be 10-15% higher than now as Neher would indicate.
Leif: The ion chambers did not measure the absolute flux, thus were not calibrated at all. So in a sense you are correct that there were no calibration problems, because there was no calibration.
As I said, there was calibration, with uncertainties, but good enough to detect the equatorial gap around South America (which would be undetectable with the zero absolute calibration you’re suggesting).
oneuniverse says:
January 13, 2011 at 4:10 pm
However, the secular change during 1933-1957 (as I should’ve written in my last post, not 1931-1955) would have been recorded in the relative Neher record, so your criticism doesn’t find its stated mark.
There was no secular change at all 1930s-1950s, neither in solar activity nor in GCRs. The minima 1933, 1945, 1954 had very similar HMF, SSN, CaII values. Neher’s data shows a discontinuity around 1949.
By the way, absolute measurements were not “essentially unknown”, as you put it (particle beams were available for calibration) , but had uncertainties.
Which were larger than the variation between minima. ‘essentially’ expressed that.
Leif: All of these have been superseded by newer reconstructions. It is only in the last 4 years that the community has finally figured out to do the reconstruction correctly.
I’ll wait to hear the admission (or at least see its results in the literature).
As it stands, their reconstructions are in disagreement with yours in this regard. For a very recent reconstruction please see Krivova, Vieira and Solanki 2010 – the large 1930’s-1950’s secular rise is still present.
Leif: There was no secular change at all 1930s-1950s, neither in solar activity nor in GCRs. The minima 1933, 1945, 1954 had very similar HMF, SSN, CaII values.
By my reckoning above, most records (10Be, 14C, ionisation chamber data, all in good agreement in tracking the increases and decreases) show the secular change.
re: considering the minima only, I wrote earlier :
Higher sunspot numbers tend to accompany higher HMF. Since the S&C argument (quoted above) concerns HMF over 1933-51, surely low & high (all) values in the period should be considered. The sunspots predict an increasing HMF in that period, in agreement with the Neher data. [ and multiple 10Be and 14C etc.]
Leif: Which were larger than the variation between minima. ‘essentially’ expressed that.
Besides the point: the secular trend is still recordable with good relative calibration – please address this, the relevant, point.
oneuniverse says:
January 13, 2011 at 4:26 pm
As I said, there was calibration, with uncertainties, but good enough to detect the equatorial gap around South America (which would be undetectable with the zero absolute calibration you’re suggesting).
You don’t need absolute data for this. Example: place several thermometers [using different scales: Celsius, Fahrenheit, Réaumur, etc.] at different places and run them in parallel. You can still detect the seasons by relative comparisons. And I don’t say ‘zero’, just enough that the noise is too high for the trend.
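A toy version of that thermometer example (made-up numbers, just to illustrate the relative-comparison point; this sketch is not from the thread itself): three instruments on different, mutually uncalibrated scales still show the same seasonal cycle once each record is compared against its own mean and spread.

    # Toy illustration: three thermometers on different, uncalibrated scales still
    # show the same seasonal cycle when each record is compared to its own mean
    # and spread (relative comparison only, no absolute calibration).
    import numpy as np

    days = np.arange(3 * 365)
    true_c = 10 + 12 * np.sin(2 * np.pi * days / 365)   # "true" temperature, Celsius

    records = {
        "celsius":    true_c,
        "fahrenheit": true_c * 9 / 5 + 32,
        "reaumur":    true_c * 4 / 5,
    }

    ref = (true_c - true_c.mean()) / true_c.std()
    for name, series in records.items():
        rel = (series - series.mean()) / series.std()    # strip unknown offset and scale
        print(name, "correlation with reference: %.3f" % np.corrcoef(rel, ref)[0, 1])

All three normalized records line up perfectly in this toy case; the question being argued in the thread is whether the noise and drift in the real ion chambers were small enough for that kind of relative record to show a secular trend.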
Leif: You can still detect the seasons by relative comparisons.
Yes, and run the observations for long enough and you can detect secular changes.
Leif: And I don’t say ‘zero’. just enough that the noise is too high for the trend.
Johnson 1956, speaking of the Neher series up to that point (which covers the period we’re discussing) : “The accuracy of the intercomparison was quite good, being somewhat better than one percent.” (Johnson calls the absolute calibration “considerably more uncertain”.)
An error of < 1% is good enough for the Neher series to be suitable to detect a secular trend such as the one recorded.
oneuniverse says:
January 13, 2011 at 4:41 pm
I’ll wait to hear the admission (or at least see its results in the literature).
To be submitted to J. Geophys. Res.
Centennial changes in the heliospheric field and open solar flux: the consensus view from geomagnetic data and cosmogenic isotopes and its implications
M. Lockwood and M.J. Owens
Abstract. Svalgaard and Cliver [2010] recently reported a consensus between the various reconstructions of the heliospheric field over recent centuries. This is a significant development because, individually, each has uncertainties introduced by instrument calibration drifts, limited numbers of observatories, and the strength of the correlations employed. However, taken collectively, a consistent picture is emerging. We here show that this consensus extends to more data sets and methods than they report, including that used by Lockwood et al. (1999), once a misunderstanding about that reconstruction is clarified.
For a very recent reconstruction please see Krivova, Vieira and Solanki 2010 – the large 1930′s-1950′s secular rise is still present.
They still use obsolete data and superseded conclusions.
oneuniverse says:
January 13, 2011 at 4:50 pm
“There was no secular change at all 1930s-1950s, neither in solar activity nor in GCRs. The minima 1933, 1945, 1954 had very similar HMF, SSN, CaII values.”
By my reckoning above, most records (10Be, 14C, ionisation chamber data, all in good agreement in tracking the increases and decreases) show the secular change.
One more time: there is no secular change, but a jump [discontinuity] ~1949 in the Neher data reconstruction.
Besides the point: the secular trend is still recordable with good relative calibration – please address this, the relevant, point.
The point is that there is no secular change between the minima. The maxima do not count as far as calibration is concerned, and you ignore [or do not grasp] that whatever went up has come down, except the [inverse] GCR flux.
oneuniverse says:
January 13, 2011 at 4:41 pm
I’ll wait to hear the admission (or at least see its results in the literature).
M. Lockwood and M.J. Owens: “Svalgaard and Cliver [2010] recently reported a consensus between the various reconstructions of the heliospheric field over recent centuries. This is a significant development”
The consensus series was reached thus:
http://www.leif.org/research/AGU%20Fall%202008%20SH24A-01.pdf
oneuniverse says:
January 13, 2011 at 5:14 pm
An error of < 1% is good enough for the Neher series to be suitable to detect a secular trend such as the one recorded.
There is no secular trend: solar indices at minima 1933, 1945, 1954 are all the same. And the ion chamber data are not even good enough to show the typical solar cycle variation. Look at the last figure [slide 13] of http://www.leif.org/research/AGU%20Fall%202008%20SH24A-01.pdf
The big dots show the Neher data used.
But you are still missing the point: any upward trends before 1950 have been erased by downwards trends since, so the GCR levels before 1948 should be the same as those after 1948. If they are not [e.g. Neher] they have a problem that you need to understand and explain in order to still believe in the data. You did not provide any such explanation. And I suggest there simply isn’t any, as none is needed. That the sole dot before 1948 is 10-15% too high compared to all the other indicators could have any number of explanations.
To be submitted to J. Geophys. Res. [..] M. Lockwood and M.J. Owens
Abstract.[..] We here show that this consensus extends to more data sets and methods than they report, including that used by Lockwood et al. (1999), once a misunderstanding about that reconstruction is clarified.
I’ll have to see what Lockwood and Owens say – the abstract doesn’t indicate whose misunderstanding is clarified, for a start.
The maxima do not count as far as calibration is concerned
I wasn’t calibrating. I was noting that the sunspots (taken as an average) increase over 1930-1955 (the final maximum is the highest in the 300+ yr sunspot record). This is indicative of increasing HMF and decreasing GCR flux over the period, as recorded by Neher (and the 10Be and 14C records – see Fig. 8 of Muscheler 2007, showing 10Be concentrations – with no possibly erroneous and certainly uncertainty-introducing mappings to HMF B and back to GCR for the comparison). Neher is measuring atmospheric GCR flux, like 10Be and 14C concentrations, so this is a good comparison.
HMF 1933 and 1945 were higher than now, so GCR intensity should not be 10-15% higher than now as Neher would indicate.
This doesn’t make sense – 2009-2010 had very weak HMF, possibly the weakest in 100 years, with GCR record highs, while 1933 and 1945 were periods of increasing HMF, leading to an unsurpassed peak in the ’50s.
See “Record-Setting Cosmic-Ray Intensities In 2009 And 2010”, Mewaldt et al. 2010: “In the energy interval from ~70 to ~450 MeV nucleon⁻¹, near the peak in the near-Earth cosmic-ray spectrum, the measured intensities of major species from C to Fe were each 20%-26% greater in late 2009 than in the 1997-1998 minimum and previous solar minima of the space age (1957-1997).”
You earlier said of Mewaldt’s statement of record-breaking CR’s in 2009-2010 : “Mewaldt refers to low-energy cosmic rays which show a much larger modulation.”.
Yes, they’re lower energy, by at least an order of magnitude, than those measured by earth neutron monitors, but the Neher ionisation chambers appear to be measuring energies at ~340 MeV (Johnston 1956), which is in the range measured in Mewaldt et al. 2010, and so the percentage swing can be compared.
Leif: They still [Vieira and Solanki 2010] use obsolete data and superseded conclusions.
Which of the data do you consider obsolete? From the paper:
“Following Vieira and Solanki [2010], the modelled total magnetic flux is confronted with the measurements carried out at the Mt. Wilson Solar Observatory (MWO), National Solar Observatory Kitt Peak (KP NSO) and Wilcox Solar Observatory (WSO) over cycles 20–23 [Arge et al., 2002; Wang et al., 2005]. The calculated open magnetic flux is compared to the reconstruction by Lockwood et al. [2009] since 1904. Following Krivova et al. [2007], we also require the computed TSI variations to match the PMOD composite of space-based measurements since 1978 [Fröhlich, 2005, 2008, version d41 62 0906].
“Here we have also added 2 new records to constrain the model further. These are (i) the facular contribution to the TSI variations over 1978–2003, computed by Wenzler [2005] with the SATIRE-S model from KP NSO magnetograms and continuum images, and (ii) the solar irradiance flux integrated over wavelengths 220–240 nm over the period 1947–2006 as reconstructed by Krivova et al. [2009a] and Krivova et al. [2010] using solar F10.7 cm radio flux (before 1974) and KP NSO as well as MDI magnetograms and continuum images (after 1974). The two new sets serve, firstly, to provide further constraints on the model and the values of the free parameters. Secondly, they ensure that not only the total (integrated over all wavelengths) irradiance is reproduced correctly but also its spectral distribution.”
One or two items unreplied to, will try to revisit tomorrow, as it’s a bit late here, goodnight.
Leif: Look at the last figure [slide 13] [..] The big dots show the Neher data used. [..] That the sole dot before 1948 is 10-15% too high compared to all the other indicators could have any number of explanations.
Why is there only one dot before 1948 in your plot? Neher took measurements from 1933 onwards, as visible in other dotted plots I’ve seen of the Neher data.
Beer et al. 2008, citing Neher 1971, disagrees with you about the existence of calibration problems of the Neher data: “Carefully intercalibrated ionization chambers were flown on balloons between 1933 and 1969 and the long-term changes in sensitivity were estimated to be <1% [1].”
oneuniverse says:
January 13, 2011 at 9:35 pm
Why is there only one dot before 1948 in your plot? Neher took measurements from 1933 onwards, as visible in other dotted plots I’ve seen of the Neher data.
The big dots are the values that were actually used in McCracken’s splicing together of the ion chamber and neutron monitor records. This is non-trivial because of the factor of ten in energy. I got this particular plot from Ken McCracken himself. I don’t know if it has been published somewhere.
The issue of long-term trend is complex and must be approached carefully. Here is such a careful analysis:
http://www.srl.utu.fi/AuxDOC/kocharov/ICRC2009/pdf/icrc1554.pdf
“Using this comprehensive set of neutron monitor data we conclude that the alternating peak/plateau behaviour of subsequent cosmic-ray maxima is well-established, and that its different rigidity dependence for the overall modulation can be understood in terms of drift effects. Furthermore, the cosmic-ray levels have returned to nearly the same levels as during the previous two qA<0 solar minima, despite the fact that there are large differences in the heliospheric magnetic field and its tilt angle. We have demonstrated that this can be understood in terms of standard modulation theory”
What this means is that if we know the HMF B and the tilt angle A we can calculate the cosmic ray flux. Both B and A are known since 1926 and since the values of B and A at minima in 1933 and 1945 were similar to the values in 1986 and 1996, the GCR flux cannot have been 10-15% higher for the earlier period.
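For reference, the first-order description usually meant by "standard modulation theory" is the force-field approximation (a textbook sketch; the linked analysis also treats drift effects, which go beyond it). The near-Earth spectrum follows from the local interstellar spectrum J_LIS through a single modulation potential φ, which rises with solar activity and is commonly parameterized using quantities such as the HMF strength and the tilt angle:

    J(E) = J_LIS(E + Φ) · E(E + 2E₀) / [(E + Φ)(E + Φ + 2E₀)],   with Φ = (Ze/A)·φ

where E is the kinetic energy per nucleon and E₀ ≈ 938 MeV is the proton rest energy. In this framing, two minima with similar B and tilt angle imply similar φ, and hence similar GCR intensity, which is the consistency being demanded of the pre-1948 record.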
“They still [Vieira and Solanki 2010] use obsolete data and superseded conclusions.”
Which of the data do you consider obsolete?
The Solanki 2000-2002 model is faulty as we point out in section 2 of http://www.leif.org/research/Comment%20on%20McCracken.pdf
and the newer models are based directly on the Group Sunspot Number, which is also at variance with newer analyses.
I’ll have to see what Lockwood and Owen say – the abstract doesn’t indicate whose misunderstanding is clarified, for a start.
I don’t know what ‘for a start’ means. They strongly endorse the consensus and point out that it is even broader than we claim. The misunderstanding is a curious one: in ALL their papers the Lockwood group claim that they regressed the geomagnetic data against the magnitude of the HMF B. This makes sense, because B can be unambiguously determined, while Br depends on the averaging interval and the sampling frequency. They tell us that they misstated this [in ALL the papers] and that they actually regressed against the radial component Br of B, and that they in ‘their mind’ thought that they said that in ALL their published papers, while in fact they did not. [BTW, it makes no big difference which one of Br and B you use]. We had quite a long private discussion about this, until finally Lockwood send us this email:
“my god I did!!!! I apologize unreservedly, wholesomely and totally That’s incredibly stupid of me to have written that in the later papers because it GENUINELY isn’t what we did. We really did do an end to end fit to Br[…] Its not your fault at all it’s mine. I am so sorry”.
This is the misunderstanding that is cleared up. But in the end it doesn’t matter as the results are the same within error bars.
To avoid any more strawman digressions perhaps the issue can be described this way:
Analysis of the latest GCR data confirms that the theory used to interpret the data is good and that we have a good understanding of this. Therefore we must demand that the data before ~1948 must be consistent with that. Adopting Neher would violate the consistency. It often happens that data [direct observations] reanalyzed with later understanding turn out to require a different calibration.
Leif: The big dots are the values that were actually used in McCracken’s splicing together of the ion chamber and neutron monitor records.
So that’s an argument against McCracken’s use of the Neher data, rather than the data itself.
The Solanki 2000-2002 model is faulty as we point out in section 2 of http://www.leif.org/research/Comment%20on%20McCracken.pdf
and the newer models are based directly on the Group Sunspot Number that is also at variance with newer analyses.
They don’t use the Solanki 2000-2002 model:
“It [the model] is based on the SATIRE-T (Spectral And Total Irradiance REconstructions for the Telescope era) model developed by Krivova et al. [2007], which is modified and updated here to take into account the latest observational data and theoretical results. These include: the new model of the evolution of solar total and open magnetic flux by Vieira and Solanki [2010], the updated reconstruction of the heliospheric magnetic flux by Lockwood et al. [2009], the reconstructed solar UV irradiance since 1947 [Krivova et al., 2009a, 2010] and the facular contribution to the TSI variations since 1974 [Wenzler, 2005]. Spectral irradiance below 270 nm is calculated following Krivova et al. [2006] and Krivova et al. [2009a].”
Note that they use an array of modern observational data (as quoted earlier), not just sunspot data.
Leif: They [Lockwood ea] tell us that they misstated this [in ALL the papers] and that they actually regressed against the radial component Br of B” … “[BTW, it makes no big difference which one of Br and B you use].” … “This is the misunderstanding that is cleared up.
You’ve apparently spotted an error in the Lockwood analyses (which apparently makes no big difference). It doesn’t count against the Neher data. I look forward to reading the paper.
Leif: To avoid any more strawman digressions perhaps the issue can be described this way:
Analysis of the latest GCR data confirms that the theory used to interpret the data is good and that we have a good understanding of this. Therefore we must demand that the data before ~1948 must be consistent with that. Adopting Neher would violate the consistency. It often happens that data [direct observations] reanalyzed with later understanding turn out to require a different calibration.
Strawmen? You can stop coming up with them whenever you want. I certainly haven’t raised any.
For interest and comparison, please point out some examples. It would be interesting to see what kind of evidence has been sufficient to instigate recalibrations.
In general, observations trump models. Neher 1971 (and earlier, Johnston 1956, for the series available at the time) found that changes in sensitivity were <1%. The series shows both the 1930-1950’s increase, and the flattening out after the 1950’s. FWIW, the measurements agree, where they overlap, with the Forbush measurements and the LPI measurements, as well as the 10Be and 14C records.
You are trying to use derivations of HMF to compare with measurements of GCR. You assume that the pre-solar-modulated GCR flux is constant, and you assume that your model is good enough to rule out other unknowns – from this, you "demand" that GCR measurements (Neher) must match your predictions based on geomagnetic data.
The Neher measurements are precisely the kind of measurements one would use to check your assumption that the pre-modulated GCR flux is constant (or that GCRs vary only with solar/terrestrial activity). The measurements appear to disagree with your assumption, providing evidence against your hypothesis, yet you’re arguing that the Neher data should be thrown out because it disagrees with conclusions based on your assumption. I hope you can see the error in your logic.