Do solar scientists STILL think that recent warming is too large to explain by solar activity?

 

Guest post by Alec Rawls

Study of the sun-climate link was energized in 1991 by Friis-Christensen and Lassen, who showed a strong correlation between solar-cycle length and global temperature:

This evidence that much of 20th century warming might be explained by solar activity was a thorn in the side of the newly powerful CO2 alarmists, who blamed recent warming on human burning of fossil fuels. That may be why Lassen and Thejll were quick to offer an update as soon as the 1997-98 El Nino made it look as if temperatures were suddenly skyrocketing:

The rapid temperature rise recently seems to call for a quantitative revisit of the solar activity-air temperature association …

We conclude that since around 1990 the type of Solar forcing that is described by the solar cycle length model no longer dominates the long-term variation of the Northern hemisphere land air temperature.

In other words, there was now too much warming to account for by solar cycle length, so some other factor, such as CO2, had to be driving the most recent warming. Of course everyone knew that the 1998 warming had actually been caused by ocean oscillations. Even lay people knew it. (El Nino storm tracks were all the news for six months here in California.)

When Lassen was writing his update in mid-’99, temperatures had already dropped back to 1990 levels. His eight-year update was outdated before it was published. Twelve years later, the 2010 El Nino year shows the same average temperature as the ’98 El Nino year, and if post-El Nino temperatures continue to fall off the way they did in ’99, we’ll be back to 1990 temperatures by mid-2011. Isn’t it about time Friis-Christensen, Lassen and Thejll issued another update? Do they still think there has been too much recent warming to be accounted for by solar activity?

The most important update may be the discovery that, where Lassen and his colleagues found a correlation between the length of a solar cycle and temperatures over that same cycle, others have been finding a much stronger correlation with temperatures over the following cycle (reported at WUWT this summer by David Archibald).

This lagged correlation has the advantage of allowing us to make projections. As Archibald deciphers Solheim’s Norwegian:

since the period length of previous cycle (no 23) is at least 3 years longer than for cycle no 22, the temperature is expected to decrease by 0.6 – 1.8 degrees over the following 10-12 years.
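To make the pairing concrete, here is a minimal numerical sketch (with invented numbers, not measured cycle data) of the difference between the same-cycle pairing and the one-cycle-lag pairing:

```python
import numpy as np

# Invented illustrative values, NOT measured data: the length (years) of
# eight successive solar cycles and the mean temperature anomaly (K)
# during each cycle. The temperatures are constructed to track
# (inversely) the length of the PREVIOUS cycle.
length = np.array([11.5, 10.2, 9.8, 11.9, 10.5, 9.7, 10.1, 11.3])
temp   = np.array([-0.05, -0.11, 0.04, 0.06, -0.13, 0.01, 0.07, 0.03])

# Same-cycle pairing (Friis-Christensen and Lassen style):
r_same = np.corrcoef(length, temp)[0, 1]

# One-cycle-lag pairing (Archibald/Solheim style): length of cycle n
# against temperature during cycle n+1.
r_lag = np.corrcoef(length[:-1], temp[1:])[0, 1]

print(f"same-cycle r = {r_same:+.2f}, lagged r = {r_lag:+.2f}")
```

With these made-up numbers the lagged pairing comes out far stronger (and strongly negative, since longer cycles mean a weaker sun); whether the real data behave this way is exactly what the studies above are arguing over.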

Check out this alarming graphic from Stephen Strum of Frontier Weather Inc:

Lagged solar cycle length and temp, Stephen Strum, Frontier Weather Inc.

The snowed-in Danes might like to see these projections before they bet the rest of their climate eggs on a dangerous war against CO2.

From sins of omission to sins of commission

In 2007, solar scientist Mike Lockwood told the press about some findings he and Claus Frohlich had just published:

In 1985, the Sun did a U-turn in every respect. It no longer went in the right direction to contribute to global warming. We think it’s almost completely conclusive proof that the Sun does not account for the recent increases in global warming.

Actually, solar cycle 22, which began in 1986, was one of the most intense on record (part of the 20th century “grand maximum” that was the most active sun of the last 11 thousand years), and by almost every measure it was more intense than solar cycle 21. It had about the same sunspot numbers as cycle 21 (Hathaway 2006):

Sunspot prediction, NASA-Hathaway, 2006

Cycle 22 ran more solar flux than cycle 21 (via Nir Shaviv):

Cycle 22 was shorter than cycle 21 (from Joseph D’Aleo):

Solar cycle length, from Joseph D'Aleo

Perhaps most important is solar activity as measured (inversely) by the cosmic ray flux, which many think is the mechanism by which solar activity drives climate. Here cycle 22 is THE most intense in the 60-year record, stronger even than cycle 19, the sunspot number king. From the Astronomical Society of Australia:

Neutron counts, Climax, Colorado, with sunspots, Univ. of Chicago

Some “U-turn in every respect.”

If Lockwood and Frohlich simply wanted to argue that the peak of the modern maximum of solar activity was between solar cycles 21 and 22 it would be unobjectionable. What difference does it make exactly when the peak was reached? But this is exactly where their real misdirection comes in. They claim that the peak of solar activity marks the point where any solar-climate effect should move from a warming to a cooling direction. Here is the abstract from their 2007 Royal Society article:

Abstract There is considerable evidence for solar influence on the Earth’s pre-industrial climate and the Sun may well have been a factor in post-industrial climate change in the first half of the last century. Here we show that over the past 20 years, all the trends in the Sun that could have had an influence on the Earth’s climate have been in the opposite direction to that required to explain the observed rise in global mean temperatures.

In order to assert the need for some other explanation for recent warming (CO2), they are claiming that near-peak levels of solar activity cannot have a warming effect once they are past the peak of the trend—that it is not the level of solar activity that causes warming or cooling, but the change in the level—which is absurd.

Ken Gregory has the most precise answer to this foolishness. His “climate smoothing” graphic shows how the temperature of a heat sink actually responds to a fall-off in forcing:

Gregory, climate smoothing, contra-Lockwood

“Note that the temperature continues to rise for several years after the Sun’s forcing starts to decrease.”

Gregory’s numbers here are arbitrary. It could be many years before a fall-off in forcing causes temperatures to start falling. In the case of solar cycle 22—where, if solar forcing was actually past its peak, it had only fallen off a tiny bit—the only way temperature would not keep rising over the whole solar cycle is if global temperature had already equilibrated to peak solar forcing, which Lockwood and Frohlich make no argument for.
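Gregory’s point can be reproduced with a one-box heat-sink model, dT/dt = (F - T)/tau, where tau is an assumed equilibration time constant. Every number below is an illustrative assumption, not a fitted value:

```python
import numpy as np

tau = 30.0                      # assumed equilibration time constant (years)
dt = 0.1                        # time step (years)
t = np.arange(0.0, 200.0, dt)

# Forcing expressed in equilibrium-temperature units: a linear ramp to a
# peak at year 100, then a slight decline, loosely mimicking a grand
# maximum that has just passed its peak.
F = np.where(t < 100.0, t / 100.0, 1.0 - 0.002 * (t - 100.0))

# One-box heat sink: temperature relaxes toward the forcing level.
T = np.zeros_like(t)
for i in range(1, len(t)):
    T[i] = T[i-1] + dt * (F[i-1] - T[i-1]) / tau

peak = int(np.argmax(F))        # index of the forcing peak (year 100)
# Warming continues after the forcing peak, because T is still below the
# equilibrium level that the near-peak forcing supports.
assert T[peak + 100] > T[peak]  # still warming 10 years past the peak
```

The temperature peak lags the forcing peak because warming continues whenever the forcing level sits above the current temperature, regardless of the sign of the forcing trend.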

The obvious interpretation of the data is that we never did reach equilibrium temperatures, allowing grand maximum levels of solar activity to continue to warm the planet until the sun suddenly went quiet. Now there’s an update for Lockwood and Frohlich. How about telling the public when solar activity really did do a “U-turn” (October 2005)?

Usoskin, Benestad, and a host of other solar scientists also mistakenly assume that temperature is driven by trend instead of level

Maybe it is because so much of the evidence for a sun-climate link comes from correlation studies, which look for contemporaneous changes in solar activity and temperature. Surely the scientists who are doing these studies all understand that there is no possible mechanism by which the rate of change in solar activity can itself drive temperature. If temperature changes when solar activity changes, it is because the new LEVEL of solar activity has a warming or cooling effect.

Still, a remarkable number of these scientists say things like this (from Usoskin et al. 2005):

The long term trends in solar data and in northern hemisphere temperatures have a correlation coefficient of about 0.7–0.8 at a 94–98% confidence level. …

… Note that the most recent warming, since around 1975, has not been considered in the above correlations. During these last 30 years the total solar irradiance, solar UV irradiance and cosmic ray flux has not shown any significant secular trend, so that at least this most recent warming episode must have another source.

Set aside the other problems with Usoskin’s study. (The temperature record he compared his solar data to is Michael Mann’s “hockey stick.”) How can he claim overwhelming evidence for a sun-climate link, while simultaneously insisting that steady peak levels of solar activity can’t create warming? If steady peak levels coincide with warming, it supposedly means the sun-climate link is now broken, so warming must be due to some other cause, like CO2.
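For what a claim like “0.7–0.8 at a 94–98% confidence level” means mechanically, here is a sketch using eight made-up decadal values (not Usoskin’s actual series): the correlation coefficient comes from the paired values, and the confidence level from a significance test that depends heavily on how few independent points a heavily smoothed series contains.

```python
import numpy as np
from scipy.stats import pearsonr

# Eight invented decadal averages standing in for a solar proxy and a
# temperature reconstruction -- NOT the actual Usoskin/Mann series.
solar = np.array([0.1, 0.4, 0.2, 0.7, 0.5, 0.9, 0.6, 1.0])
temp  = np.array([0.2, 0.3, 0.5, 0.4, 0.3, 0.8, 0.5, 0.7])

r, p = pearsonr(solar, temp)
print(f"r = {r:.2f}, confidence = {(1 - p) * 100:.0f}%")
# With only 8 points, even r near 0.8 yields confidence in the 90s
# rather than 99.9+%: heavily smoothed series have few degrees of freedom.
```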

It is hard to believe that scientists could make such a basic mistake, and Usoskin et al. certainly have a powerful incentive to play dumb: to pretend that their correlation studies are finding physical mechanisms by which it is changes in the level of solar activity, rather than the levels themselves, that drive temperature. Just elide this important little nuance and presto, modern warming gets misattributed to CO2, allowing these researchers to stay on the good side of the CO2 alarmists who control their funding. Still, the old adage is often right: never attribute to bad motives what can just as well be explained by simple error.

And of course there can be both.

RealClimate exchange on trend vs. level confusion

Finally we arrive at the beginning, for me anyway. I first came across trend-level confusion five years ago at RealClimate. Rasmus Benestad was claiming that, because post-1960s levels of Galactic Cosmic Radiation have not been trending downwards, GCR cannot be the cause of post-’60s warming.

But solar activity has been well above historical norms since the ’40s. It doesn’t matter what the trend is. The solar wind is up. According to the GCR-cloud theory, that blows away the GCR, which blows away the clouds, creating warming. The solar wind doesn’t have to KEEP going up. It is the LEVEL that matters, not the trend. Holy cow. Benestad was looking at the wrong derivative (one instead of zero).
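One way to see the mistake is to feed a simple one-box heat-sink model (dT/dt = (F - T)/tau) two hypothetical forcing histories: one high but flat (zero trend) and one rising but low. The numbers are illustrative assumptions only:

```python
import numpy as np

def integrate(F, tau=30.0, dt=0.1):
    """Euler-step a one-box heat sink: dT/dt = (F - T) / tau."""
    T = np.zeros_like(F)
    for i in range(1, len(F)):
        T[i] = T[i-1] + dt * (F[i-1] - T[i-1]) / tau
    return T

t = np.arange(0.0, 60.0, 0.1)            # sixty years
flat_high  = np.full_like(t, 1.0)        # high level, zero trend
rising_low = 0.005 * t                   # positive trend, low level

T_flat   = integrate(flat_high)
T_rising = integrate(rising_low)

# The zero-trend, high-level forcing produces far more warming: it is
# the level that drives temperature, and warming continues until the
# temperature reaches the equilibrium that level supports.
print(T_flat[-1], T_rising[-1])
```

In this toy run the flat high forcing warms several times more than the rising low forcing over the same sixty years, which is the whole point: a zero trend in the forcing does not mean zero warming.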

A few months later I took an opportunity to state my rebuttal as politely as possible, which elicited a response from Gavin Schmidt. Here is our 2005 exchange:

Me: Nice post, but the conclusion: “… solar activity has not increased since the 1950s and is therefore unlikely to be able to explain the recent warming,” would seem to be a non-sequitur.

What matters is not the trend in solar activity but the level. It does not have to KEEP going up to be a possible cause of warming. It just has to be high, and it has been since the forties.

Presumably you are looking at the modest drop in temperature in the fifties and sixties as inconsistent with a simple solar warming explanation, but it doesn’t have to be simple. Earth has heat sinks that could lead to measured effects being delayed, and other forcings may also be involved. The best evidence for causality would seem to be the long term correlations between solar activity and temperature change. Despite the differences between the different proxies for solar activity, isn’t the overall picture one of long term correlation to temperature?

[Response: You are correct in that you would expect a lag, however, the response to an increase to a steady level of forcing is a lagged increase in temperature and then an asymptotic relaxation to the eventual equilibrium. This is not what is seen. In fact, the rate of temperature increase is rising, and that is only compatible with a continuing increase in the forcing, i.e. from greenhouse gases. – gavin]

Gavin admits here that it’s the level of solar activity, not the trend in solar activity, that drives temperature. He’s just assuming that grand maximum levels of solar forcing should have brought the planet close to equilibrium temperature before post-’80s warming hit, but that assumption is completely unwarranted. If solar activity is driving climate (the hypothetical that Schmidt is analyzing), we know that it can push temperatures a lot higher than they are today. Surely Gavin knows about the Viking settlement of Greenland.

The rapid warming in the late ’90s could easily have been caused by the monster solar cycle 22, and there is no reason to think that another big cycle wouldn’t have brought more of the same. Two or three more cycle 22s and we might have been hauling out the longships, which would be great. No one has ever suggested that natural warming is anything but benign. Natural cooling bad, natural warming good. But alas, a longer grand maximum was not to be.

Gavin’s admission that it is level not trend that drives temperature change is important because ALL of the alarmist solar scientists are making the trend-level mistake. If they would admit that the correct framework is to look at the level of forcing and the lapse to equilibrium then they would be forced to look at the actual mechanisms of forcing and equilibration, instead of ignoring key forcings on the pretense that steady peak levels of forcing cannot cause warming.

That’s the big update that all of our solar scientists need to make. They need to stop tolerating this crazy charade that allows the CO2 alarmists to ignore the impact of decades of grand maximum solar activity and misattribute the resulting warming to fossil fuel burning. It is a scientific fraud of the most disastrous proportions, giving the eco-lunatics the excuse they need to unplug the modern world.

342 Comments
Leif Svalgaard
January 14, 2011 1:34 pm

oneuniverse says:
January 14, 2011 at 7:24 am
So that’s an argument against McCracken’s use of the Neher data, rather than the data itself.
The use of the data is what is important for the issue at hand, not the data themselves.
They don’t use the Solanki 2000-2002 model
Vieira & Solanki [ http://arxiv.org/abs/0911.4396 ], section 2.1:
“The model presented here is an extension of the one presented by Solanki et al. (2002), which is itself an extension of the work of Solanki et al. (2000).”
Note that they use an array of modern observational data (as quoted earlier), not just sunspot data.
Matching the modern data has no bearing on the old trend. The long-term trend is determined by the Group Sunspot Number:
“[14] The flux emergence rates of AR, ε_act, and ER, ε_eph, which are the main inputs to the model, are calculated from the historical group sunspot number, Rg [Hoyt and Schatten, 1993].”
You’ve apparently spotted an error in the Lockwood analyses (which apparently makes no big difference). It doesn’t count against the Neher data. I look forward to reading the paper.
Neher was used to ‘calibrate’ the 10Be data to show that the modern Sun is much more active than before ~1948. See Figure 5 of http://www.leif.org/EOS/2006JA012119.pdf
The consensus determination of HMF B shows that this is not the case.
For interest and comparison, please point out some examples. It would be interesting to see what kind of evidence has been sufficient to instigate recalibrations.
Science is replete with examples. Consult any book on the history of science. For an obvious example, take Hubble’s determination of the Hubble Expansion Parameter [modern calibration 10 times lower].
In general, observations trump models.
No observation makes sense without a model under which to interpret the observation. To go from the raw 10Be count to anything at all is governed by a set of models, thus making the conclusion dependent on parameters and assumptions of the model.
You assume that the pre-solar-modulated GCR flux is constant
No, the people who try to reconstruct solar activity from the GCR flux are making that assumption. I would be perfectly happy to drop any such assumption and then dropping GCRs altogether as a useful proxy, meaning that there is no support at all for the contention that modern activity is the highest in 10,000 years.

oneuniverse
January 14, 2011 3:10 pm

Leif: The use of the data is what is important for the issue at hand, not the data themselves.
You were arguing against the Neher data itself previously.
Leif: “The model presented here is an extension of the one presented by Solanki et al. (2002), which is itself an extension of the work of Solanki et al. (2000).”
Yes, it’s the improved 2007 model (responding in part to criticisms) – it’s not accurate to call it the Solanki ea 2000/2002 model, as you did.
Leif: Matching the modern data has no bearing on the old trend. The long-term trend is determined by the Group Sunspot Number.
According to Hathaway ea 2002, it’s better to use the group number for reconstructions: “We conclude that the Group numbers are most useful for extending the sunspot cycle data further back in time and thereby adding more cycles and improving the statistics. However, the Zürich numbers are slightly more useful for characterizing the on-going levels of solar activity.”
BTW, Krivova ea 2007 use both Group and Zurich sunspot numbers. The use of either gives increasing solar activity since 1700 (of different magnitude), and they both have matching sustained increases in the first half of the 20th C, followed by a plateau in the 2nd half (see fig. 13a).
Leif: Neher was used to ‘calibrate’ the 10Be data to show that the modern Sun is much more active than before ~1948. See Figure 5 of http://www.leif.org/EOS/2006JA012119.pdf
The consensus determination of HMF B shows that this is not the case.

I refer you to my earlier responses. The Neher data is an independent dataset measuring ionisation activity in the atmosphere. A proxy reconstruction of HMF that is used to predict (assuming a constant GCR flux) a different GCR flux to the measurements doesn’t mean the Neher data is wrong. (Of course, this doesn’t mean that the McCracken reconstruction is correct, either.)
Leif: Science is replete with examples. Consult any book on the history of science. For an obvious example, take Hubble’s determination of the Hubble Expansion Parameter [modern calibration 10 times lower].
I was hoping for an analogous example (historical, unrepeatable measurements which are recalibrated later). Never mind, it was just curiosity – each case is decided on its own merits anyway. I’m sure examples exist – I guess I was wondering about the use of the word “often”.
Leif: No observation makes sense without a model under which to interpret the observation. To go from the raw 10Be count to anything at all is governed by a set of models, thus making the conclusion dependent on parameters and assumptions of the model.
The comment was made with respect to the Neher ionisation data, not 10Be. The model of ionisation behaviour inside an ionisation chamber is far better understood and accepted than a model used to describe GCR flux from HMF from geomagnetic measurements.
Leif: No, the people who try to reconstruct solar activity from the GCR flux are making that assumption [that pre-solar-modulated GCR flux is constant]. I would be perfectly happy to drop any such assumption and then dropping GCRs altogether as a useful proxy, meaning that there is no support at all for the contention that modern activity is the highest in 10,000 years.
Yes you do – you make the assumption when you say eg. “Both B and A are known since 1926 and since the values of B and A at minima in 1933 and 1945 were similar to the values in 1986 and 1996, the GCR flux cannot have been 10-15% higher for the earlier period.”, or when you present
re: no support for the ‘10,000’ years contention
If you changed the conclusion to ‘highest levels of GCR flux in the last 8,000 years’ (or 6,000, according to another study), you would be on firmer ground, as it cuts out some of the uncertainties.

oneuniverse
January 14, 2011 3:21 pm

Correction: I wrote “Leif: ‘The model presented here […]’”; it should be “Leif, quoting Vieira & Solanki 2009: […]”
What do you think of the following, Leif – a changing floor = secular changes, no?
Lockwood ea 2007: “McCracken (2007) proposes that the concept of floors in B may indeed be valid, but notes that since 1428 there must have been at least four upward steps in such a floor to reach present-day values, the floor value for 1428–1528 being less than a 10th of today’s value. If the minimum B does change in discrete steps, as opposed to continuously, the reasons for this are not yet understood.”

oneuniverse
January 14, 2011 3:24 pm

Lockwood ea 2009, not 2007.

Leif Svalgaard
January 14, 2011 4:44 pm

oneuniverse says:
January 14, 2011 at 3:10 pm
You were arguing against the Neher data itself previously.
I was arguing that the Neher data is not an absolute measurement of the flux at the 3GeV typical for the neutron monitors.
Yes, it’s the improved 2007 model (responding in part to critcisms)- it’s not accurate to call it the Solanki ea 2000/2002 model, as you did.
Nit-picking. The 2007 [and later] model is based on the same assumptions and data and is not substantially different. It has the same flaws.
According to Hathaway ea 2002, it’s better to use the group number for reconstructions
This is under the assumption that the GSN is correctly calibrated, which we can show it is not, e.g. http://www.leif.org/research/Updating%20the%20Historical%20Sunspot%20Record.pdf
BTW, Krivova ea 2007 use both Group and Zurich sunspot numbers. The use of either gives increasing solar activity since 1700 (of different magnitude), and they both have matching sustained increases in the first half of the 20th C, followed by a plateau in the 2nd half (see fig. 13a).
Since the GSN and the ZSN agree 1880-1945, no wonder they both match the increase in the first half of the 20th C.
A proxy reconstruction of HMF that is used to predict (assuming a constant GCR flux) a different GCR flux to the measurements doesn’t mean the Neher data is wrong. (Of course, this doesn’t mean that the McCracken reconstruction is correct, either).
The HMF is reconstructed using well-understood processes from geomagnetic data [independent of GCRs]. Any reconstruction using GCRs must match the HMF. If it does not, the reconstruction based on GCR is invalid. The GCRs reconstruction of solar activity [of which HMF is a well-understood proxy] is based on four pillars: 1) the 10Be counts themselves, 2) the assumption of constant GCR background, 3) the correctness of the splicing together of two disparate datasets, which 4) relies on a difficult and uncertain Neher calibration extrapolated to 3GeV. Pick any combination of problems you wish, the net result is that the GCR record cannot be used to say anything quantitatively about solar activity on century-millennium time scales.
I was hoping for an analogous example (historical, unrepeatable measurements which are recalibrated later).
I’m amazed that your knowledge of the history of science is so scant that you can’t come up with many examples on your own. Here are a few more: The SSN itself [both GSN and ZSN] has been recalibrated [ZSN 4 times, GSN, 1 time]. TSI changes with each new calibration and comparison. The geological time scale. Meteorological datasets are ‘reanalyzed’, etc, etc.
The comment was made with respect to the Neher ionisation data, not 10Be. The model of ionisation behaviour inside an ionisation chamber is far better understood and accepted than a model used to describe GCR flux from HMF from geomagnetic measurements.
See comment above. The [geomagnetically] reconstructed HMF is well understood and shows that recent solar activity is not extraordinary. You may be correct that trying to construct solar activity from GCR is so uncertain that it is useless as support for long-term changes in solar activity. I can live with that. On the other hand, it is important to try to identify the errors and failed assumptions that make GCRs useless as a proxy for solar activity. An obvious place to look would be at the point where two disparate datasets have been joined, which is what we suggest.
Yes you do
My ‘no’ was directed at the implication that I was the only one making that assumption. Everybody will have to make that assumption at this time. If we do not, we have admitted that GCRs cannot say anything quantitatively about solar activity. I could live with that.
If you changed the conclusion to ‘highest levels of GCR flux in the last 8,000 years’ (or 6,000, according to another study), you would be on firmer ground, as it cuts out some of the uncertainties.
The 10,000 years was just a ‘large number’. It makes no difference to me if you would claim 8000, 6000, or 7835.837 years.
What do you think of the following, Leif – a changing floor = secular changes, no?
No. The ‘change’ from 4.5 to 4.0 was just a better determination of the floor, based on more data. Such changes are normal and desirable and do not signal a shift in concept.
Lockwood ea 2009: “[…]that the concept of floors in B may indeed be valid, but notes that since 1428 there must have been at least four upward steps in such a floor to reach present-day values…
We know that one of those [the biggest one] did not happen at all, so the rest don’t have much credibility.

oneuniverse
January 14, 2011 7:09 pm

Leif: nit picking.
Merely a preference for accuracy – they didn’t use the ‘Solanki ea 2000/2002’ model (which is the Solanki 2002 model), they used the Vieira and Solanki 2009 model.
Leif: I’m amazed that your knowledge of the history of science is so scant that you can’t come up with many examples on your own. Here are a few more: The SSN itself [both GSN and ZSN] has been recalibrated [ZSN 4 times, GSN, 1 time]. TSI changes with each new calibration and comparison. The geological time scale. Meteorological datasets are ‘reanalyzed’, etc, etc.
Thanks – as it happens, the only applicable example from your list is TSI, which is probably instructive.
Leif: You may be correct that trying to construct solar activity from GCR is so uncertain that it is useless as support for long-term changes in solar activity.
To “nitpick” once again for accuracy – I never called the HMF reconstructions from GCRs useless, I said there’s an added uncertainty compared to a geomagnetic reconstruction. The GCR recons hint well at the solar activity. The proxies have recorded the minima and maxima reasonably well in the modern and sunspot count period, but we cannot, as it stands, use the GCR recons to distinguish between secular changes in the GCR flux (independent of solar activity) and secular changes in solar activity.
I wouldn’t call such a reconstruction “useless”, however, that’s your description.
Leif: No. The ‘change’ from 4.5 to 4.0 was just a better determination of the floor, based on more data. Just changes are normal and desirable and do not signal a shift in concept.
I was referring to the changes mentioned in the Lockwood ea quotation.

oneuniverse
January 14, 2011 7:11 pm

Speaking of accuracy, that should be “.. recorded the grand minima and maxima ..”.

oneuniverse
January 14, 2011 8:11 pm

Leif: I was arguing that the Neher data is not an absolute measurement of the flux at the 3GeV typical for the neutron monitors.

Do you agree that the instrument is capable of detecting a trend over 20 years? And do you accept the less than 1% variation in sensitivity for the Neher series reported in the literature? (Johnston 1956, Neher 1971)
It would be interesting to determine the difficulty of calibrating ionisation chambers to neutron monitors.
oneuniverse: For a very recent reconstruction please see Krivova, Vieira and Solanki 2010
Leif: They still use obsolete data and superseded conclusions.
Was your one-line reply a fair summary of the work of Krivova, Vieira and Solanki 2010? It does use a corrected aa geomagnetic index (if not your preferred correction).
Must sign off for the evening, ’til tomorrow, good-night.

Leif Svalgaard
January 14, 2011 10:16 pm

oneuniverse says:
January 14, 2011 at 7:09 pm
they used the Vieira and Solanki 2009 model.
Would a rose with any other name …
What is important is not what you call their model, as long as the basic method and assumptions stay the same, which they do.
as it happens, the only applicable example from your list is TSI, which is probably instructive.
The adjustments of the ZSN seems to me to be the clearest example. Tell us why you think that is not applicable.
but we cannot as it stands, use the GCR records to distinguish between secular changes in the GCR flux (independent of solar activity) and secular changes in solar activity. I wouldn’t call such a reconstruction “useless”, however, that’s your description.
cannot … use seems to me to fit nicely the definition of ‘useless’ for this purpose.
I was referring to the changes mentioned in the Lockwood ea quotation.
remind me why those were relevant, especially the one we know didn’t happen.
Do you agree that the instrument is capable of detecting a trend over 20 years? And do you accept the less than 1% variation in sensitivity for the Neher series reported in the literature? (Johnston 1956, Neher 1971)
I don’t know to what accuracy that carries over to ten times the energy.
http://www.leif.org/EOS/muscheler07qst.pdf has a much more thorough discussion of the Neher calibration than we can conduct on this blog. They conclude “If the model of Solanki et al. (2000) is correct around 1950 AD this comparison indicates that the Cheltenham data overestimate and the Neher data underestimate the solar modulation before 1950 AD.”. See also their figure 5.
The whole discussion of Neher is a convenient strawman [and certainly not mine] as my conclusion that [as was the issue] solar activity the past half-century was not extraordinary [within the last three hundred years] does not hinge on the GCR record and its uncertain mix of calibration issues, climate interference, assumption of constancy of flux, disagreement between sites, etc.
Was your one-line reply a fair summary of the work of Krivova, Vieira and Solanki 2010? It does use a corrected aa geomagnetic index (if not your preferred correction).
Yes [and I did point to a further discussion]. The reconstruction of HMF B based on geomagnetic data [not using GCRs at all] is in much firmer hand than any other one out there. That it disagrees strongly with McCracken’s http://www.leif.org/EOS/2006JA012119.pdf (e.g. Figure 5) is a problem for Lockwood [1999, which they have essentially abandoned] and Solanki [2002, which forms the base for even their latest reconstructions].

oneuniverse
January 15, 2011 10:52 am

Leif: The adjustments of the ZSN seems to me to be the clearest example. Tell us why you think that is not applicable.
I consider sunspot counts as semi-instrumental measurements. I was looking for a calibrated ‘classic’ instrument whose results were later recalibrated. I didn’t doubt they existed, I wanted to compare. As I said earlier, never mind – it’s not that relevant, only curiosity.
Leif: What is important is not what you call their model, as long as the basic method and assumptions stay the same, which they do.
Could you be specific in your criticism ?
Leif:cannot … use seems to me to fit nicely the definition of ‘useless’ for this purpose.
It’s not useful for distinguishing between solar and GCR trends (certainly for periods before the 18th C).
Leif: remind me why those were relevant, especially the one we know didn’t happen.
Deposited cosmogenic concentrations on the whole fell over the last few centuries, indicative of decreased GCR flux. While this may be a GCR trend, the 300 yr sunspot record also points to increased solar activity over the same period. (It doesn’t matter which version of the sunspot numbers one uses – the 20th C has higher levels.) Remind me again how you know with certainty that such an increase didn’t happen?
oneuniverse: Was your one-line reply a fair summary of the work of Krivova, Vieira and Solanki 2010?
Leif: Yes [and I did point to a further discussion].
I’m not sure it was a fair summary. You didn’t point to a further discussion of KVS 2010, or the details and improvements of their study, by the way.
What are the “superseded conclusions”? You have been criticising Lockwood 1999, or Solanki 2000 or 2002, yet the KVS paper uses more, more recent, and improved results and data.
Leif: The whole discussion of Neher is a convenient strawman
Maybe from your perspective – I was just pointing out that your criticism of the Neher data itself as flawed didn’t hold water.
Leif: The reconstruction of HMF B based on geomagnetic data [not using GCRs at all] is on much firmer ground than any other one out there.
HMF reconstructions from geomagnetic data have their own uncertainties – it seems like a good idea to use as much of our understanding and relevant available data as possible. The KVS 2010 paper makes an effort to do that – that’s why I found your dismissal of the paper rather disappointing. They do use corrected geomagnetic indices, even if not your preferred IDV.
Lockwood et al. 2009: “Lockwood et al. (2009b) show that for most of the interval (1868–1968) compiled by the inventor of aa, Father Mayaud, it is remarkably accurate when tested using other range indices: however there are small calibration skips and drifts after 1957. Lockwood et al. (2009b) derived a corrected aa index, aaC, using only the Ap (range) index and the k (range) indices derived by Clilverd et al. (2005) from the long sequences of Sodankyla and Niemegk magnetometer observations. Rouillard et al. (2007) used the aaC index along with the annual index m, which is derived from the median standard deviation of hourly average geomagnetic data for each station-UT, as described by Lockwood et al. (2009b), to evaluate centennial variation in the solar wind speed, the IMF field magnitude and the open solar flux.”
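[Editor's note: the m index described in the quote above — the median, over station-UT series, of the within-year standard deviation of hourly-average geomagnetic data — can be sketched as follows. This is a minimal illustration of the definition only, not Lockwood et al.'s actual code; station names and data are synthetic.]

```python
import random
import statistics

random.seed(0)

def annual_m_index(hourly_by_station_ut):
    """hourly_by_station_ut: {(station, ut_hour): [hourly averages, nT]}.
    Returns the median, over station-UT series, of the within-year
    standard deviation -- the construction described in the quote."""
    sds = [statistics.stdev(series)
           for series in hourly_by_station_ut.values() if len(series) > 1]
    return statistics.median(sds)

# Synthetic year of data: 3 stations x 24 UT hours x 365 daily values,
# with an imposed variability of 5 nT
data = {(stn, ut): [random.gauss(0.0, 5.0) for _ in range(365)]
        for stn in ("AAA", "BBB", "CCC") for ut in range(24)}
m = annual_m_index(data)
print(round(m, 2))  # close to the imposed 5 nT
```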

oneuniverse
January 15, 2011 11:00 am

Leif, by the way, if you think that solar activity hasn’t increased, that implies that GCR LIS flux has decreased, according to 10Be and 14C concentrations, which goes against your position (stated on an earlier thread) that the GCR LIS flux must have remained constant in the last few centuries. At least, according to the best understanding of cosmogenic proxies as described in the literature.

January 17, 2011 9:13 am

oneuniverse says:
January 15, 2011 at 10:52 am
I was looking for a calibrated ‘classic’ instrument whose results were later recalibrated. I didn’t doubt they existed, I wanted to compare. As I said earlier, never mind – it’s not that relevant, only curiousity.
I have no idea what you mean. Most instrumental series are recalibrated with time. The MDI magnetometer on SOHO was recalibrated last year and all measurements recalculated. ‘Global Temperatures’ are constantly adjusted. Old astronomical plates are remeasured, etc, etc.
Could you be specific in your criticism ?
I have been, many times. Getting tedious now. But OK, one more time: there are three basic assumptions:
1) that the group sunspot number is good [which it isn’t]. Here it doesn’t matter that the latest data are used for the modern period.
2) that the running average [over a solar cycle] sunspot number is a good measure of the ‘ephemeral’ background magnetic flux [which it isn’t]
3) that the ‘ephemeral’ background flux is the dominant part of the long-term variation of TSI [and solar activity as such], which it isn’t
It’s not useful for distinguishing between solar and GCR trends (certainly for periods before the 18th C).
As I said, it is useless for what you try to use it for.
(It doesn’t matter which version of the sunspot numbers one uses – the 20th C has higher levels.)
The 20th C after 1945 has sunspot numbers that are 20% too high [Waldmeier discontinuity]. In the 18th century, the cycle with maximum in 1778 had an SSN of 154 [corresponding to 185 on the modern scale].
Remind me again how you know with certainty that such an increase didn’t happen?
If the Neher data is correct and the current understanding of how the GCR modulation happens is correct, then there would be a large increase [no secular change, but a jump in a given year] in the HMF B of 1.7 nT. We understand how to calculate HMF B from geomagnetic data [we found out in 2003, and every year since, the calculated B has matched the observed B] and there are enough high-quality stations since 1910 to have a good fix on B, so we ‘know’ what B was.
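[Editor's note: the general approach of calibrating a geomagnetic index against spacecraft-measured HMF B during the overlap era, then applying the fit backwards in time, can be sketched as below. The power-law form, the index/B pairs, and all coefficients are illustrative assumptions, not Svalgaard's published calibration.]

```python
import math

# Synthetic "space age" overlap: (geomagnetic index value, observed B in nT)
overlap = [(5.0, 4.5), (8.0, 5.7), (12.0, 7.0), (16.0, 8.1), (20.0, 9.0)]

# Least-squares fit of log B = log a + b * log(index)
n = len(overlap)
lx = [math.log(i) for i, _ in overlap]
ly = [math.log(b) for _, b in overlap]
mx, my = sum(lx) / n, sum(ly) / n
b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
    sum((x - mx) ** 2 for x in lx)
a = math.exp(my - b * mx)

def reconstruct_B(index_value):
    """Estimate HMF B (nT) from a pre-space-age geomagnetic index value,
    using the calibration fitted on the overlap era."""
    return a * index_value ** b

print(round(reconstruct_B(10.0), 1))  # roughly 6.4 nT for these made-up data
```

The point of the method is that once the calibration is validated year by year against in-situ B, the same relation can be trusted for the pre-satellite geomagnetic record.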
the KVS paper uses more numerous, more recent, and improved results and data.
None of those have any bearing on what happened before the space age.
I was just pointing out that your criticism of the Neher data itself as flawed didn’t hold water.
What I’m saying is that the Neher data introduced an anomaly that is not found in other data.
HMF reconstructions from geomagnetic data have their own uncertainties – it seems like a good idea to use as much of our understanding and relevant available data as possible.
The uncertainties are much smaller than the 1.7 nT jump that Neher predicts.
The KVS 2010 paper makes an effort to do that – that’s why I found your dismissal of the paper rather disappointing. They do use corrected geomagnetic indices, even if not your preferred IDV.
The paper was dismissed on its demerits. I don’t think KVS uses any geomagnetic data. They refer to various other reconstructions, e.g. recent Lockwood stuff [which largely agree with our HMF B], but then ruin the quantitative fits by using PMOD’s recent decrease [which is a calibration problem and not real], and also the MWO magnetic flux data which are wrongly calibrated [e.g. http://www.leif.org/research/MWO%20MPSI%20-%20F107.pdf ]. Figure 2 of KVS shows that Br for the cycle with max in 1940 is on par with that of the cycle with max in 1970. No sign of the 1.7 nT jump that the Neher data would demand.
Lockwood et al. (2009b) derived a corrected aa index, aaC, using only the Ap (range) index and the k (range) indices derived by Clilverd et al. (2005) from the long sequences of Sodankyla and Niemegk magnetometer observations.
Sigh, one cannot use Sodankyla for this, see section A5.1 of http://www.leif.org/research/2007JA012437.pdf
Leif, by the way, if you think that solar activity hasn’t increased, that implies that GCR LIS flux has decreased, according to 10Be and 14C concentrations, which goes against your position (stated on an earlier thread) that the GCR LIS flux must have remained constant in the last few centuries. At least, according to the best understanding of cosmogenic proxies as described in the literature.
It means that we do not have a good understanding of the deposition of those nuclides: of how much is due to climate, to regional changes, even of what the LIS is. The constancy of the LIS has to do with the diffusion time and isotropy of the GCRs. We have found no accepted change in the LIS on the relevant time scales. So now you are introducing yet another strawman.
The topic was: is the ‘modern maximum’ the highest in ~10,000 years, and the conclusion still stands that there is no compelling evidence for that.

oneuniverse
January 18, 2011 3:57 pm

Leif: If the Neher data is correct and the current understanding of how the GCR modulation happens is correct, then there would be a large increase [no secular change, but a jump in a given year] in the HMF B of 1.7 nT.
Going in circles.. from earlier reply:
You are trying to use derivations of HMF to compare with measurements of GCR. You assume that the pre-solar-modulated GCR flux is constant, and you assume that your model is good enough to rule out other unknowns – from this, you “demand” that GCR measurements (Neher) must match your predictions based on geomagnetic data.
The Neher measurements are precisely the kind of measurements one would use to check your assumption that the pre-modulated GCR flux is constant (or that GCRs vary only with solar/terrestrial activity). The measurements appear to disagree with your assumption, providing evidence against your hypothesis, yet you’re arguing that the Neher data should be thrown out because it disagrees with conclusions based on your assumption. I hope you can see the error in your logic.
Leif: What I’m saying is that the Neher data introduced an anomaly that is not found in other data.
Contemporary observations of GCR found no problems with the Neher measurements. From McCracken & Heikkila 2003:
“Neher used a set of calibration chambers for pre-flight calibration throughout the 32 year program and inter-calibration accuracy was stated to be better than 1%. This accuracy was repeatedly verified by making duplicate flights throughout the 32 year program. The long-term decline in cosmogenic 10Be gives independent verification of the long term decline in GCR, and of the stability of Neher’s calibrations. It is therefore proposed that the Neher data provides the most accurate record of the long term changes in GCR [..]”
Leif: I don’t think KVS uses any geomagnetic data.
The aa_c and m indices are used in Lockwood 2009b (used in KVS). The aa_c correction of the aa index does use Sodankyla, but it agrees well with the m index, which is from hourly geomagnetic data from a global network of stations.
Leif: They refer to various other reconstructions, e.g. recent Lockwood stuff [which largely agree with our HMF B], but then ruin the quantitative fits by using PMOD’s recent decrease [which is a calibration problem and not real], and also the MWO magnetic flux data which are wrongly calibrated [e.g. http://www.leif.org/research/MWO%20MPSI%20-%20F107.pdf ].
Three stations are used, not just MWO – it’d be interesting to see what difference your proposed alteration to MWO would make.
re: PMOD, I hope to have time to look into this.
Leif: I have been that many times. Getting tedious now. But OK, one more time: there are three basic assumptions:
Thanks – two of the three assumptions are new though, not repeats. I’ll have to look at these in a day or two. However, your proposed 20% reduction of earlier sunspot counts (prior to 1946, Waldmeier) seems odd, because it’s the post-1946 figures that appear to be in error. Shouldn’t it be the post-1948 numbers that are adjusted?

January 18, 2011 6:13 pm

oneuniverse says:
January 18, 2011 at 3:57 pm
Going in circles..
So we cannot progress further on that line. You are stuck where you are.
“It is therefore proposed that the Neher data provides the most accurate record of the long term changes in GCR [..]”
If so, then it is not the most accurate record of solar activity, which is more directly expressed by HMF B. However, in Steinhilber et al [McCracken is one of the ‘al’] they don’t have the proposed step functions. I’ll let them slug it out. For me, Neher is a strawman, useful for circular arguments.
The aa_c correction of the aa index does use Sodankyla but it agrees well with the m index, which is from hourly geomagnetic data from a global network of stations.
The m index is badly conceived. Its flaw is that the RMS value includes the secular variation of the base level [not solar related], which is of the same order as the variation related to the solar wind. Here are some [brief] notes on that [we have also calculated the m-index (correctly), even using many more stations than Lockwood et al.]
http://www.leif.org/research/m-index%20-effect%20of%20secular%20variation.doc
http://www.leif.org/research/m-index%201890-1923.doc
http://www.leif.org/research/M-index%20POT%201900-1902.doc
http://www.leif.org/research/M-index-all-stations.png
Their problem with the m-index is particularly bad when the solar wind related part is small [low solar activity]. Because of that they simply removed some stations that ‘didn’t fit’ before the 1920s [e.g. CLH and VQS, see second link], although that alone is not enough. This is the main reason the consensus is not good before [say 1915]. The net effect is that the extrapolation to low solar activity is compromised. It doesn’t matter that Sodankyla agrees [and not ‘well’] with something, it is still wrong for this purpose.
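[Editor's note: the criticism above — that an SD/RMS-style index computed on data containing an uncorrected secular base-level drift is inflated by that drift — can be illustrated with synthetic numbers, not real station data. The drift magnitude and variability below are arbitrary assumptions chosen to be of the same order, as the comment states.]

```python
import random
import statistics

random.seed(1)

hours = 24 * 365
solar_sd = 3.0           # "true" solar-wind-related variability, nT
drift_per_year = 20.0    # secular base-level change over the year, nT

# Hourly values = solar-wind noise + slow linear secular drift
series = [random.gauss(0.0, solar_sd) + drift_per_year * h / hours
          for h in range(hours)]

raw_sd = statistics.stdev(series)  # inflated by the non-solar drift

# Remove a linear trend via simple least squares, then recompute
n = len(series)
mx = (n - 1) / 2
my = sum(series) / n
slope = sum((h - mx) * (v - my) for h, v in enumerate(series)) / \
        sum((h - mx) ** 2 for h in range(n))
detrended = [v - slope * (h - mx) for h, v in enumerate(series)]
det_sd = statistics.stdev(detrended)

print(round(raw_sd, 2), round(det_sd, 2))  # raw SD well above the true 3 nT
```

When the solar-wind-related part is small, as at low solar activity, the drift contribution dominates the index, which is exactly the regime the comment says is compromised.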
Three stations are used, not just MWO
KPO has very uncertain zero level.
However, your proposed 20% reduction of earlier sunspot counts (prior to 1946, Waldmeier) seems odd, because it’s the post-1946 figures that appear to be in error. Shouldn’t it be the post-1948 numbers that are adjusted?
I usually don’t do ‘odd’ things. There is always a [good] reason. You have something backwards or express yourself poorly. The post-1945 values are too high [20%] relative to the pre-1945 values. Two ways of fixing this:
1) increase the old values
2) decrease the new values
Because the new values are used in operational programs, it is better to opt for fix#1.
The values are not in ‘error’. Just a different way of counting [which actually may be better]. They just destroy the homogeneity of the [already not homogeneous] time series.
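[Editor's note: the two fixes listed above can be sketched with the one example cited in the thread (the cycle with maximum in 1778, SSN 154). The factor 1.20 is the ~20% Waldmeier offset stated in the comments; the function name is illustrative.]

```python
WALDMEIER_FACTOR = 1.20  # post-1945 counts run ~20% high vs. earlier ones

def to_modern_scale(ssn, year):
    """Fix #1: scale pre-1946 sunspot numbers up to the modern counting,
    leaving post-1945 values (used in operational programs) untouched."""
    return ssn * WALDMEIER_FACTOR if year < 1946 else ssn

print(round(to_modern_scale(154, 1778)))  # -> 185, as stated in the thread
print(to_modern_scale(100, 1960))         # -> 100, modern values unchanged
```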
It would be useful if you didn’t hide behind your ‘oneuniverse’ avatar.

January 24, 2011 2:16 pm

I wonder why it isn’t mentioned that perhaps the two main drivers of climate on time scales shorter than a grand maximum or minimum have been mostly positive since 1994. Both the AMO and the PDO have been mostly positive since 1994, and though the PDO turned negative in 2008, the AMO is still mostly positive. The climate cooled the most from 1964-1976, when the AMO and PDO were negative, and has warmed the most while they were both positive.
