Update on solar cycle 24


NOAA’s Space Weather Prediction Center posted an update to their graphs today.

They show the largest gains in solar cycle 24 tracking metrics I’ve seen yet.

See graphs below:

[SWPC solar cycle 24 progression graphs]

136 Comments
April 13, 2011 9:19 am

Leif Svalgaard says:
April 13, 2011 at 8:47 am
BTW, take care when using NOAA or SWPC values. Their monthly values in
http://www.swpc.noaa.gov/ftpdir/weekly/RecentIndices.txt have transcription errors,

And also June 2006, which is listed as 37.7 but should be 24.4. There are undoubtedly others. Interestingly, these errors were found because NOAA disagreed with the ‘rest of the World’. Recalculating the monthly means from the daily values corrected the errors and re-established agreement with the “rest of the World”, showing the power of such comparisons.
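The consistency check described here can be sketched in a few lines of Python: recompute each monthly mean from the daily values and flag months where the published figure disagrees. The daily numbers below are made up for illustration (a flat 24.4 all month), not actual SWPC data.

```python
# Sketch of the transcription-error check: recompute monthly means from
# daily values and flag months where the published value disagrees.
# Daily data here are hypothetical, not the real SWPC series.

def flag_transcription_errors(daily_by_month, published, tol=0.5):
    """Return {month: (published, recomputed)} for suspect months."""
    suspect = {}
    for month, days in daily_by_month.items():
        recomputed = sum(days) / len(days)
        if abs(recomputed - published[month]) > tol:
            suspect[month] = (published[month], round(recomputed, 1))
    return suspect

daily = {"2006-06": [24.4] * 30}   # hypothetical: 24.4 every day
published = {"2006-06": 37.7}      # the listed (erroneous) monthly value
print(flag_transcription_errors(daily, published))
# {'2006-06': (37.7, 24.4)}
```

Any month flagged this way is then re-checked against the daily record, which is how the June 2006 error above was caught.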

April 13, 2011 6:39 pm

Leif Svalgaard says:
April 13, 2011 at 6:56 am
SIDC is undercounting since 2001 compared to NOAA as the k-factor before that was 0.662 (0.677) and after 2001 0.591 (0.608).
You previously stated the long term average was 0.66; now you are saying this applies only before 2001?
You will need to describe how you scaled the amateur records to match with the SIDC, no doubt there is devil in the detail. Interesting that NOAA did not match SIDC closely prior to 2001(your new graph shows NOAA matching SIDC pre 2001?) but the amateurs did. At the moment you are saying the amateurs are the world benchmark.
On the images you selected both observatories have the same speck count [16] and very nearly the same total spot count [38 and 36 – not weighted]. So, no difference in that carefully cherry-picked image pair.
pores counted 30/3
Group    Catania  Locarno
1176     8        4-6
1183     6        5
1181     1        0
area 2   6        3
The date was not cherry picked. As stated it is difficult to find images with the same good seeing this time of year.
As I’ve said so many times [when does it sink in?], the 37mm is too small. The telescope has to be bigger than ~60mm.
The 37mm has a max seeing of greater than 3 arc seconds; good conditions allow around 1 arc second. This is important in determining the Wolf threshold size. I have shown that Catania counts more pores under the same conditions as Locarno and will continue to do so.
Oh, yes. “Wolf under threat”. You have hinted many times that you think there is deliberate mal-counting.
This is your interpretation, and I take offense to it. The counting processes have changed due to evolution of methods and equipment; no doubt some egos are involved also.
It was not Wolf’s data. He wasn’t even born. The data Wolf used was auroral counts scaled to sunspot numbers, making any discussion of threshold meaningless.
You are confusing the issue. The Wolfer 0.6 k-factor to align with Wolf was never tested in grand minimum conditions. You admit the L&P effect is now just a greater proportion of specks, which places the Wolfer k-factor in doubt during these times. The threshold may be meaningless to you, but to others who are trying to reproduce history the threshold is important.
NOAA come to a similar count because they deliberately try to align themselves with SIDC
Could you explain how this is done and why it did not happen pre 2001?
The LSC is useless for that purpose, because it is not calibrated and because the Dalton numbers are not based on sunspots
The LSC is calibrated by the threshold that Wolf used in his 37mm telescope. The replica LSC 37mm cannot determine penumbra under 333 pixels. Wolf’s Dalton numbers are backed up by the Group Sunspot Number which has 117 extra observers.
http://www.landscheidt.info/images/gsn_sval.png
Can we now put this to rest?
Not likely.

April 14, 2011 12:25 am

Geoff Sharp says:
April 13, 2011 at 6:39 pm
You previously stated the long term average was 0.66, now you are saying this only applies only before 2001?
Leif Svalgaard says:
April 8, 2011 at 6:52 am
“The long-term average ratio is 0.64.” Over all the data.
I consider the SIDC undercounting after 2001 an exception, and yes, the averages before and after 2001 must be treated separately. I hope that is now clear [one can only hope].
You will need to describe how you scaled the amateur records to match with the SIDC, no doubt there is devil in the detail.
No devil. This is standard regression. I can describe the process thusly: To scale one observer to another you plot the values of the observers against each other. Here is the German organization SONNE vs SIDC:
http://www.leif.org/research/SIDC-SONNE-Regression-0.png
Several things to be aware of. If the data points fall along or close to a straight line, you can do a linear regression to find the relationship. For the plot it is SIDC = 0.9755 * SONNE. There is actually a tiny linear offset, but it is [as is evident] so small (+0.8) that it can be neglected; still, for every regression made, one has to check this, and if necessary include it in the formula. I used here the period 1996-2010 because I have fairly complete data for many organizations [SONNE and the others]. Minor gaps do not matter, as the regression is done only on days [or months, as I actually use] where both series have data.
One can now do this for all the organizations SONNE, AAVSO, BAA, GEFOES, OAA, RWG, TOS, VVS and calculate SIDC from each using the regression formula for that observatory. That gives you 8 lists [plus SIDC] of scaled sunspot numbers. How well does the calculation do? That is given by the Coefficient of Determination, R-squared, which is 0.9898 for SIDC from SONNE. This means that 98.98% of the wiggles match in position and relative size. We express this by saying that the formula ‘explains’ 98.98% of the variance. This is a VERY significant correlation. Much better than you normally get in science, where R^2 above 0.80 is considered very good.
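The scaling step just described can be sketched in plain Python: a least-squares fit through the origin, plus the R² computation. The SONNE/SIDC numbers below are synthetic, chosen only to mimic a factor near 0.97; the real fit (0.9755 with R² = 0.9898) used the actual 1996-2010 monthly values.

```python
# Minimal sketch of the regression scaling: fit SIDC = k * SONNE through
# the origin, then compute R^2 (fraction of variance explained).
# The data pairs below are synthetic, for illustration only.

def fit_through_origin(x, y):
    """Return (k, R^2) for the least-squares fit y = k * x."""
    k = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - k * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return k, 1 - ss_res / ss_tot

sonne = [10, 40, 80, 120, 160]            # hypothetical monthly counts
sidc  = [9.5, 39.5, 78.0, 118.0, 155.5]   # roughly 0.97-0.98 x SONNE
k, r2 = fit_through_origin(sonne, sidc)
print(round(k, 3), round(r2, 4))
```

With the scaling factor in hand, each organization's series is multiplied by its own k to bring it onto the SIDC scale, which is what the comparison plots show.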
The next step is to plot all the scaled lists on the same graph. That allows several things to be gleaned: How well do the series match each other? Are there any that are obvious outliers and may have some problems? etc. Here is the result for the 8+SIDC lists: http://www.leif.org/research/Sunspot-Org-Comparison.png
You note that all the black curves [one for each observatory] lie very close to each other, meaning that they all have the same calibration towards each other. All this can be [and was] checked and verified by standard statistical methods. But, note that the red curve [SIDC] does not fit so well: it is generally above the black ones before ~2000-2001, but below after that. This can be made clearer by also plotting a 12-month running mean [the dashed curves]. This we could not know beforehand. But with that knowledge, we can repeat the regression for all stations using the data split into pre-2001 and post-2001 parts. For SONNE that gives this plot: http://www.leif.org/research/SIDC-SONNE-Regression.png and similar ones for all the other stations. We have now a choice to make: since for all stations the split seems to be around 2001 we can assume that either something happened at all stations at that time to upset their calibration, or that none of their calibrations changed, but SIDC’s did. I prefer the latter choice, but there are weird people out there that often claim silly things…
The last step is now to repeat the calculation of the scaled SIDC for each observatory. Here we again have two choices: should we assume that SIDC before 2001 was good, but after 2001 was too low, or the other way around? For various reasons, I prefer the first assumption, so we recalculate using the regression equation for the 1996-2000 period throughout. We can also include NOAA in this and do the same for it. The result is the Figure you have seen before: http://www.leif.org/research/SIDC-Undercounting.png, where the blue curve is the average of the ‘gang of eight’, the green curve is NOAA, and the red is SIDC. It is now beyond any doubt that SIDC is too low after 2001, thus ‘undercounting’.
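The split-period check described above can also be sketched: fit the scaling factor separately before and after 2001 and compare. The data pairs here are synthetic, constructed so the post-2001 factor drops, mimicking the kind of jump being discussed.

```python
# Sketch of the pre/post-2001 split: fit k = SIDC/SONNE through the origin
# on each sub-period separately and compare. Pairs are hypothetical.

def fit_k(pairs):
    """Least-squares factor through the origin for (x, y) pairs."""
    return sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

# (SONNE, SIDC) monthly pairs; made up so the factor drops after 2001
pre_2001  = [(50, 49), (100, 97), (150, 147)]
post_2001 = [(50, 45), (100, 89), (150, 134)]

print(round(fit_k(pre_2001), 3), round(fit_k(post_2001), 3))
# 0.977 0.893
```

If the same drop shows up for every comparison station, the simplest explanation is a change at the common reference (SIDC) rather than simultaneous changes at all the stations, which is the argument made above.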
Interesting that NOAA did not match SIDC closely prior to 2001(your new graph shows NOAA matching SIDC pre 2001?) but the amateurs did.
NOAA matches SIDC very closely [as all the other stations did] prior to 2001, as you can see at a glance. One can also compute various statistics on that as verification. Repeat: all observers agree with SIDC before 2001.
At the moment you are saying the amateurs are the world benchmark.
No, Locarno [Sergio Cortesi] is. SIDC play their cards very close to the vest about why they have a problem around 2001. You see the politics here: it is always difficult to get funding and if it gets out too widely or with too much fanfare that SIDC might be off or have calibration problems, the Belgian Government might say “well, if you can’t even count correctly, perhaps it is time to stop doing it…”
“On the images you selected both observatories have the same speck count [16] and very nearly the same total spot count [38 and 36 – not weighted]. So, no difference on that carefully cherry picked image pairs.”
1176 8 4-6, 1183 6 5, 1181 1 0, area 2 6 3

No, specks 16 for both, real spots 22 for Catania and 20 for Locarno.
The date was not cherry picked. As stated it is difficult to find images with the same good seeing this time of year.
There are hundreds of drawings each year…
The 37mm has a max seeing of greater than 3 arc seconds, good conditions allow around 1 arc second. This is important in determining the Wolf threshold size.
No, because Wolf observed for a dozen years before ever using the 37mm, and that time is when his scale was set. He multiplied by 1.5 to bring the 37mm up to his real threshold. But nobody knows or can know what his ‘threshold’ was. It is un-knowable, which is why nobody in his right mind uses [or tries to resurrect] Wolf’s original method.
I have shown that Catania count more pores under the same conditions as Locarno and will continue to do so.
I showed you that Locarno on the example you selected was the best. Shall I show you again: http://www.leif.org/research/Catania-Locarno-March-30.png
http://www.leif.org/research/Catania-Locarno-2-March-30.png
Did you even bother to study them carefully or at least look at them?
This is your interpretation that I take offense to.
It is clear from reading your website and from your comments [and Robert’s] that you think that there was some monkey business going on. The very word ‘inflated’ [that you use all the time] bears the same connotation. You can take offense all you will; your statements on this still stand. Or are you retracting them?
“It was not Wolf’s data. He wasn’t even born. The data Wolf used was auroral counts scaled to sunspot numbers. Making any discussion of threshold meaningless.”
You are confusing the issue. The Wolfer 0.6 k factor to align with Wolf was never tested in grand minimum conditions.

If you plot the k-factor as a function of the sunspot number, you’ll find that there is no clear correlation. Nobody has observed the past 300 years during Grand Minimum conditions. The Dalton was not a Grand Minimum.
You admit the L&P effect is now just a greater proportion of specks which places the Wolfer k factor in doubt during these times. The threshold may be meaningless to you, but to others that are trying to reproduce history the threshold is important.
The L&P effect [which you say is bad science, but now invoke when it suits] is indeed a dark horse in this. But that has nothing to do with the threshold if all the spots are gone. You cannot reconstruct history before 1849 by referring to Wolf’s ‘threshold’ as Wolf did not observe then. For the Dalton period, he used counts of Swedish aurorae as a proxy for the sunspot number.
“NOAA come to a similar count because they deliberately try to align themselves with SIDC”
Could you explain how this is done and why it did not happen pre 2001?

I have asked them for their exact procedure; perhaps they’ll answer soon. They ‘try’, but do not quite succeed. Perhaps they think their result is good enough for Government work, as we say here in the U.S. of A. At any rate, it is not only NOAA that is the issue. EVERY other observer has a pre-2001 problem, or rather SIDC has. As far as I can ascertain there has been no change at Locarno. It is possible that Locarno was used as reference instead of SIDC and that SIDC somehow changed their complicated calculation [ http://www.leif.org/EOS/Clette_JASR8745.pdf ]. This is one of the things we want to clear up at the workshop [if not sooner].
The LSC is calibrated by the threshold that Wolf used in his 37mm telescope. The replica LSC 37mm cannot determine penumbra under 333 pixels.
The problem is that having a penumbra is NOT the criterion for counting it as a spot or calling it a pore. The ‘real’ threshold was [as far as we know] twofold: the spot should look black in the 80mm [not grey] and it should not be so small that it could only be seen under very good seeing.
Wolf’s Dalton numbers are backed up by the Group Sunspot Number which has 117 extra observers.
No, there were a total of 32 observers during 1800-1820, of which several were already used by Wolf. An additional problem is that many of these observers only made very few observations. Finally, the Group Sunspot Number has a very uncertain calibration at that time and is likely too small by some 40%. We have very little knowledge of what the actual number was. The geomagnetic record suggests something analogous to the 1900-1920 period. And in any event, the ‘threshold’ is completely irrelevant for this as Wolf was not observing.
“Can we now put this to rest?”
Not likely.

My next door neighbor believes that the Earth is only 6000 years old, and nothing and nobody can shake that belief. He also says ‘not likely’.

April 14, 2011 5:36 pm

Leif Svalgaard says:
April 14, 2011 at 12:25 am
“Can we now put this to rest?”
Not likely.

My next door neighbor believes that the Earth is only 6000 years old, and nothing and nobody can shake that belief. He also says ‘not likely’.
That’s a doozy and quite ironic really. You have shown no movement from your position even when hard data (Cat/Loc specks) is presented.
“The long-term average ratio is 0.64.” Over all the data.
No your statement read:
Leif Svalgaard says:
April 11, 2011 at 7:58 pm
The time of undercounting by SIDC is marked with red, open triangles. The mean of those points is 0.601. Partly due to my needling them, SIDC is beginning to improve on that as the green triangles show. Their long term ratio is 0.659.

No devil. This is standard regression. I can describe the process thusly: To scale one observer to another you plot the values of the observers against each other. Here is the German organization SONNE vs SIDC:………
I think you are making this way too hard, and in the process misrepresenting the data. If I do a simple check on your graph it is obvious the monthly values for NOAA and SIDC are incorrect.
If I plot the actual SIDC monthly figures and the NOAA monthly figures x 0.6 (std Wolfer factor) and just let the numbers fall I get a very different result. It agrees with my original graph and shows a discrepancy pre 2001. It is far more likely that you have it the wrong way around, and the SIDC look to be overcounting during two periods prior to 2001. This is working on the assumption that the amateur records match NOAA.
“The date was not cherry picked. As stated it is difficult to find images with the same good seeing this time of year.”
There are hundreds of drawings each year…

See how many days you can find this year where both drawings have the same seeing at 1 or 2 and within 30 mins of each other.
The very word ‘inflated’ [that you use all the time] bears the same connotation. You can take offense all you will, your statements on this still stand. Or are you retracting them?
Any use of the Waldmeier method implies inflation, along with the use of the Wolfer 0.6 factor, as it is not allowing for the higher speck ratio. Telescope placement and number of observers plus 24-hour observing are also an inflation that is part of the modern system; this is very different from “deliberate mal-counting”.
“NOAA come to a similar count because they deliberately try to align themselves with SIDC”
Could you explain how this is done and why it did not happen pre 2001?
I have asked them for their exact procedure, perhaps they’ll answer soon. They ‘try’, but do not quite succeed.

So you really have no clue how they deliberately try to align.
The L&P effect [which you say is bad science, but now invoke when it suits] is indeed a dark horse in this. But that has nothing to do with the threshold if all the spots are gone. You cannot reconstruct history before 1849 by referring to Wolf’s ‘threshold’ as Wolf did not observe then. For the Dalton period, he used counts of Swedish aurorae as a proxy for the sunspot number.
If the L&P effect is just extra speck ratio, then all is good. But unfortunately the title of the paper suggests otherwise.
The problem is that having a penumbra is NOT the criterion for counting it as a spot or calling it a pore. The ‘real’ threshold was [as far as we know] twofold: the spot should look black in the 80mm [not grey] and it should not be so small that it could only be seen under very good seeing.
This further strengthens the LSC threshold.
No, there were a total of 32 observers during 1800-1820, of which several were already used by Wolf. An additional problem is that many of these observers only made very few observations. Finally, the Group Sunspot Number has a very uncertain calibration at that time and is likely too small by some 40%. We have very little knowledge of what the actual number was. The geomagnetic record suggests something analogous to the 1900-1920 period. And in any event, the ‘threshold’ is completely irrelevant for this as Wolf was not observing.
Correct, the 117 applies to the whole record. I will amend, but 32 observers over that time frame, even allowing for “several” that were used by Wolf, is a reasonable record over 20 years. Looking at the daily records there are some years in the early 1800s that have missing days, but the data would seem more reliable than using a proxy.
Wolf would have dovetailed his reconstruction into his own count which is based on his threshold and proxy matching.

April 14, 2011 9:06 pm

Geoff Sharp says:
April 14, 2011 at 5:36 pm
That’s a doosy and quite ironic really. You have shown no movement from your position even when hard data (Cat/Loc specks)is presented.
You could substantiate that by marking on my figure of the two groups which spot is a ‘speck’. Then we could count. You also ignore the observational evidence that I have shown you that the k-factor [once you are above ~60mm] does not depend on the aperture of the telescope: http://www.leif.org/research/Wolf-80mm-Telescope.png . If anything it gets slightly worse with increasing aperture, the reason for the stop down at Locarno: Sergio found with varying sizes of the cardboard stop that 80mm gave better resolution [most specks] than if the original 150mm aperture was kept [which is what they have at Catania]. So, no wonder that Locarno has a sharper image than Catania [were it not for the seeing]. You can see the cardboard piece on the telescope here: http://www.specola.ch/img/cupola.JPG
“April 8, 2011 at 6:52 am
The long-term average ratio is 0.64.”
No your statement read:
Leif Svalgaard says:
April 11, 2011 at 7:58 pm

I had carefully given you the time [6:52] of my statement, yet you just pick another statement [7:58]:
“The time of undercounting by SIDC is marked with red, open triangles. The mean of those points is 0.601. Partly due to my needling them, SIDC is beginning to improve on that as the green triangles show. Their long term ratio is 0.659.”
Note that I said ‘Their long term ratio is 0.659’, referring to the green triangles [and they do have just that ratio]. Sad that you can get such things wrong. Must be deliberate.
“No devil. This is standard regression. I can describe the process thusly: To scale one observer to another you plot the values of the observers against each other.”
I think you are making this way too hard, and in the process misrepresenting the data. If I do a simple check on your graph it is obvious the monthly values for NOAA and SIDC are incorrect.

Science is hard if done correctly. I think you didn’t even look at what I did. What you see on the graph are curves brought onto the same scale using the scale factors found by the regression. The scaled numbers will be slightly different from the raw data. This is the whole point of the exercise: if the numbers were not different from the raw data, they would not have been scaled or brought onto the same scale.
If I plot the actual SIDC monthly figures and the NOAA monthly figures x 0.6 (std Wolfer factor) and just let the numbers fall I get a very different result.
The NOAA figures should not be multiplied by 0.6 as NOAA does not use the standard Wolfer factor, but by 0.64 [empirically found] for all the data as I said. You have not understood the ‘details’ in spite of my effort. Perhaps you didn’t even try. If you think it is wrong, point out where.
This working on the assumption that the amateur records match NOAA.
They do in the sense that they do not show any jump in 2001, but numerically they will not as they are not on the NOAA scale. See below on new information on the NOAA counts.
“There are hundreds of drawings each year…”
See how many days you can find this year where both drawings have the same seeing at 1 or 2 and with 30 mins of each other.

Who said ‘this year’. There are decades of data.
Any use of the Waldmeier method infers inflation along with the use of the Wolfer 0.6 factor as it is not allowing for the higher speck ratio.
If the speck ratio is higher, the Waldmeier method would give a lower number [thus deflation]. Let’s take an example: on a day there is a big spot [counted as 3], two smaller spots [counted as 2 each] and 3 specks [counted as 1 each], this gives a total of 3+2+2+1+1+1 = 10 [speck ratio 3/6=0.5]. Now increase the speck ratio by demoting one of the medium spots to a speck, then the sum becomes 3+2+1+1+1+1 = 9, thus smaller than 10 for the larger speck ratio of 4/6 = 0.67. Unless you stipulate that in a Grand Minimum there are actually MORE spots and specks which does not seem reasonable.
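The weighted-count arithmetic in this example is easy to check in code. The weights follow the example in the text (3 for a large spot, 2 for a medium spot, 1 for a speck); the class names are informal labels, not official Waldmeier terminology.

```python
# The demotion example above, in code: weights 3/2/1 for large/medium/speck.
# Demoting one medium spot to a speck lowers the weighted total.

WEIGHTS = {"large": 3, "medium": 2, "speck": 1}

def weighted_count(spots):
    """Sum of per-spot weights for a day's list of spot classes."""
    return sum(WEIGHTS[s] for s in spots)

before = ["large", "medium", "medium", "speck", "speck", "speck"]
after  = ["large", "medium", "speck", "speck", "speck", "speck"]  # one medium demoted

print(weighted_count(before), weighted_count(after))
# 10 9
```

As the argument above states, a higher speck ratio on the same population of spots can only lower the weighted total, not raise it.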
Telescope placement and number of observers plus 24 hour observing is also an inflation that is part of the modern system, this is very different from “deliberate mal-counting”
The Zurich observers had a simple rule: you only look at the Sun ONCE a day. This is still followed by Locarno, so the Waldmeier jump is not caused by round-the-clock observing. And more observers do not translate into more spots, as spots come and GO. If observer A sees 5 spots in the morning, it is just as likely that observer B would see 4 spots in the afternoon as he would see 6. More observers do not change the average.
“I have asked them for their exact procedure, perhaps they’ll answer soon. They ‘try’, but do not quite succeed.”
So you really have no clue how they deliberately try to align.

They answered me today. They took over the old American Sunspot Number [which did careful alignment as I explained], but since the Navy no longer uses the old nomograms, there is no longer a reason to align with the Zurich scale, and the last many years they have not even tried to align, they just use k=1. [This was, in fact, news to me as they just stopped without telling anybody about it]. So now we don’t need to worry about how they align, because they don’t. They simply take the averages of the various observers. This seems to work well enough [and really: they don’t care, because the NOAA numbers are not made for long-term studies, but for immediate consumption in practical applications].
If the L&P effect is just extra speck ratio, then all is good. But unfortunately the title of the paper suggests otherwise.
The L&P effect is that, what were specks disappear and what were medium spots become specks. Example: 2 big spots, 4 medium spots, 10 specks. Due to L&P we now have, say, 0 big spots, 2 medium spots, and 4 specks. I don’t think you even read [or understood] their paper [otherwise you would not claim this]. What the L&P is, is a shift of the distribution towards smaller field strength [and thus less visibility]. Here is the evolution of the number of spots since 2001: http://www.leif.org/research/Livingston-Penn-Distribution.png The blue curves show the distributions for each year of 2001-2004, the green for 2005-2008, and the red for the latest years 2009-2011 [all of cycle 24]. You can clearly see the shift from blue, through green, to red.
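The class demotion described here can be written out as a toy function matching the numbers in the example (2 big, 4 medium, 10 specks becoming 0 big, 2 medium, 4 specks). The mapping is an illustrative reading of the text, not code from the L&P paper: each class slides one step toward smaller field strength and the old specks drop below visibility.

```python
# Toy rendering of the L&P shift as described in the text:
# big -> medium, medium -> speck, speck -> invisible.

def lp_shift(big, medium, specks):
    """Apply one step of the illustrative demotion to class counts."""
    return {"big": 0, "medium": big, "specks": medium}  # old specks vanish

print(lp_shift(2, 4, 10))
# {'big': 0, 'medium': 2, 'specks': 4}
```

The point of the toy is that the total visible count falls even though the underlying spot population is the same, which is the distribution shift shown in the linked figure.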
“The ‘real’ threshold was [as far a we know] twofold: the spot should look black in the 80mm [not grey] and it should not be so small that it could only be seen by the 80mm under very good seeing.”
This further strengthens the LSC threshold.

Nonsense, you said that the 37mm was the key. And then why throw data away? That Wolf did it was realized already in the 1870s to be a mistake, and no serious observer does that any more. Are you a serious observer?
Looking at the daily records there are some years in the early 1800′s that have missing days but the data would seem more reliable than using a proxy.
Not ‘some years’, most years. And you do not understand the nature of the beast. Even with perfect coverage, it would do us no good, as we would not know how they related to Wolf’s series. What Wolf tried to use was comparison with something [aurorae] that could be hoped to have a constant relation with sunspots and then use that scaling to calibrate the sunspot series. Again, this has nothing to do with any threshold.
Wolf would have dovetailed his reconstruction into his own count which is based on his threshold and proxy matching.
He almost cut the sunspot count in half based on the auroral counts. His threshold had nothing to do with it. He was trying to compensate for variable seeing by not using spots/specks that were only visible [with his acuity] under exceptional seeing. The 37mm automatically did that for him, but he had to up his count by 50% on account of missing too much. You see, this was his admission that he knew it was wrong to omit the smallest spots [he had no other choice because he traveled a lot].

April 14, 2011 9:30 pm

Leif Svalgaard says:
April 14, 2011 at 9:06 pm
And why throw data away? That Wolf did it was already in the 1870s realized to be a mistake and no serious observer does that any more. Are you a serious observer?

April 14, 2011 9:55 pm

Geoff Sharp says:
April 14, 2011 at 5:36 pm
It is far more likely that you have it the wrong way around and the SIDC look to be over counting during two periods prior to 2001.
Commenting on http://www.leif.org/research/SIDC-Undercounting.png, Frederic Clette in email today agrees:
“We find something similar using a bunch of core stations of the SIDC network (excluding Locarno).”
Time for you to give up your illusion and accept the facts.

April 15, 2011 12:03 am

Leif Svalgaard says:
April 14, 2011 at 9:06 pm
You could substantiate that by marking on my figure of the two groups which spot is a ‘speck’. Then we could count.
I will do exactly that next week, and hopefully I will also gain access to last year’s Catania drawings (the Catania website is giving trouble).
“The time of undercounting by SIDC is marked with red, open triangles. The mean of those points is 0.601. Partly due to my needling them, SIDC is beginning to improve on that as the green triangles show. Their long term ratio is 0.659.”
Note that I said ‘Their long term ratio is 0.659’, referring to the green triangles [and they do have just that ratio]. Sad that you can get such things wrong. Must be deliberate.

You need to be clearer in your statements. It can be read two ways.
The NOAA figures should not be multiplied by 0.6 as NOAA does not use the standard Wolfer factor, but by 0.64 [empirically found] for all the data as I said. You have not understood the ‘details’ in spite of my effort. Perhaps you didn’t even try. If you think it is wrong, point out where.
It is becoming clear that you have biased the figures to suit your cause. The NOAA figures should be factored by the standard Wolfer 0.6; this is how it’s been done since the late 1800s, and to do otherwise is plain wrong. Of course you will see a difference from 2001 if the overall long-term difference is applied. I have also compared NOAA with the 0.64 factor and it still doesn’t look like your graph. You have also elected to line up the records on the upslope of SC23, which produces a spurious outcome. By not applying any bias it is clear that the SIDC are not undercounting from 2001 on. Now that we learn NOAA are doing their own thing, it is obvious that SIDC is counting correctly since 2001, but perhaps their count could be in question before that.
We can compare the F10.7 flux values against both records, F10.7 is only a guide but when doing so the NOAA values follow more closely (especially in the SC23 upslope area). This suggests to me that perhaps NOAA was more accurate during this period, which is what you see also when comparing the amateur records. I would need to see the amateur records to comment further.
So I still see no evidence that SIDC are undercounting, if anything I am more confident they are at least being consistent with their inflated method since 2001. The question still remains how does NOAA come up with a very similar value without using the Waldmeier system?

April 15, 2011 12:07 am

Leif Svalgaard says:
April 14, 2011 at 9:55 pm
Time for you to give up your illusion and accept the facts.
I might send them my research and a copy of this text and see what happens.

April 15, 2011 3:45 am

Geoff Sharp says:
April 15, 2011 at 12:07 am
I might send them my research and a copy of this text and see what happens.
Good idea. To put it in proper perspective include my comments from here.

April 15, 2011 5:58 am

Geoff Sharp says:
April 15, 2011 at 12:03 am
You need to be clearer in your statements. It can be read two ways.
Only if you want to misunderstand. As it stands it is very clear: the green triangles have a factor of 0.66.
The NOAA figures should be factored by the standard Wolfer 0.6; this is how it’s been done since the late 1800s, and to do otherwise is plain wrong.
The 0.6 applies to the Zurich observers [including Locarno]. Every other observer, including the ones that make up the NOAA network [USAF – SEON], will have a different k-factor. On http://www.vds-sonne.de/index.php?page=gem/res/results.html you can find a lot of information about this (including all the ‘amateur’ records), including the k-factors for every observer in the SONNE network; e.g. for 2009 Q4:
Brettel,G. Refr. 90/1000 0.806
Bullon,J.M. Refl. 200/2000 0.976
Bullon,J.M. Refr. 70/ 350 1.018
Bullon,J.M. Refr. 120/1000 0.545
Gieseke,R. Fegl. 50/ 300 1.135
Hofmann,W. Refr. 80/ 400 2.945
Joppich,H. Refr. 60/ 900 0.773
Karlsen,N. Refr. 100/1000 0.869
Morales,G. Refl. 90/2000 0.822
Schott,G.-L. Refr. 80/ 910 1.371
Smit,F. Refr. 80/1200 0.903
Willi,X. Refl. 200/1320 1.166
As you can see, applying the 0.6 for everyone is just ‘plain wrong’.
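The point about per-observer k-factors can be sketched in code: each observer's raw count is scaled by that observer's own k before combining. The k values below are from the 2009 Q4 SONNE table quoted above; the raw counts and the simple averaging scheme are assumptions for illustration (the real SONNE reduction is more involved).

```python
# Sketch: scale each observer's raw count by that observer's own k-factor,
# then combine. k values are from the quoted 2009 Q4 SONNE table; raw
# counts and the plain averaging are hypothetical.

K_FACTORS = {
    "Brettel": 0.806,   # Refr. 90/1000
    "Joppich": 0.773,   # Refr. 60/900
    "Hofmann": 2.945,   # Refr. 80/400
}

def network_estimate(raw_counts):
    """Average of the k-scaled counts from all reporting observers."""
    scaled = [K_FACTORS[name] * count for name, count in raw_counts.items()]
    return sum(scaled) / len(scaled)

print(round(network_estimate({"Brettel": 30, "Joppich": 32, "Hofmann": 8}), 1))
# 24.2
```

Note how very different raw counts (30, 32, 8) collapse onto nearly the same scaled value once each observer's own k is applied, which is exactly why a single 0.6 cannot serve for everyone.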
Of course you will see a difference from 2001 if the overall long term difference is applied. I have also compared NOAA with the 0.64 factor and it still doesn’t look like your graph.
Then do it right. Here is a spreadsheet with the data: http://www.leif.org/research/SIDC-SWPC%20comparisons.xls it also produces the graph.
You have also elected to line up the records on the upslope of SC23 which produces a spurious outcome.
For all observers except SIDC the records line up throughout. SIDC agrees with all the others up to 2001, so it makes sense to align SIDC [only] on the upslope of SC23 [and it has really nothing to do with upslopes or solar cycles. Correct would have been to say 1996-2000, as this is not solar related but has to do with a defect in the SIDC processing]
it is obvious that SIDC is counting correctly since 2001 but perhaps their count could be in question before that.
SIDC agrees with everybody before 2001, so counted correctly then. SIDC disagrees with everybody [NOAA and the amateurs] after 2001, so SIDC is wrong after 2001.
We can compare the F10.7 flux values against both records
If you do so [as I have told you many times], you find that F10.7 agrees well with the Zurich and SIDC sunspot numbers up to about 1990. From there on the sunspot numbers are progressively too low. This is independent confirmation of the L&P effect, which will hit every observer equally.
The question still remains how does NOAA come up with a very similar value without using the Waldmeier system?
They don’t. NOAA is higher. Since 1996 NOAA’s average was 79.90 and SIDC’s was 51.95, i.e. NOAA was 53.8% higher [k=0.65; if you calculate the average of k for each month you get 0.64], so they do not come up with a very similar value and it is not a ‘bias’ [it is just the value established from the data]. So there is no question anymore after we have learned that they do not even try to align themselves.
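The arithmetic behind those quoted averages can be checked in two lines:

```python
# Verifying the averages quoted above: since 1996, NOAA averaged 79.90
# and SIDC 51.95; the implied k is SIDC/NOAA, and NOAA runs ~53.8% higher.
noaa_avg, sidc_avg = 79.90, 51.95

k = sidc_avg / noaa_avg                    # implied long-term k-factor
excess = (noaa_avg / sidc_avg - 1) * 100   # percent by which NOAA is higher

print(f"k = {k:.2f}, NOAA higher by {excess:.1f}%")
```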

April 15, 2011 10:09 am

Geoff Sharp says:
April 15, 2011 at 12:03 am
it is obvious that SIDC is counting correctly since 2001 but perhaps their count could be in question before that.
Leif: SIDC agrees with everybody before 2001, so counted correctly then. SIDC disagrees with everybody [NOAA and the amateurs] after 2001, so SIDC is wrong after 2001.

The ‘obvious’ bit is unfounded, but you can, of course, stipulate that SIDC is ‘correct’ since 2001 and then say that all the time before that SIDC was wrong. SIDC took pains to be consistent with Zurich because homogeneity is important. Their main vehicle for this was Locarno [Sergio, to be precise], so you must then also assume that Sergio changed something when SIDC took over. All of these assumptions are clearly special pleading. The simpler position [that SIDC also takes, cf. Frederic] is that something went wrong at SIDC around 2000.
Anyway, a resource of sunspot drawings [including the ones on which NOAA is partly based] is here:
ftp://ftp.ngdc.noaa.gov/STP/SOLAR_DATA/SOLAR_IMAGES/Sunspot_Drawings/

April 18, 2011 9:22 pm

Leif Svalgaard says:
April 15, 2011 at 5:58 am
Then do it right. Here is a spreadsheet with the data: http://www.leif.org/research/SIDC-SWPC%20comparisons.xls it also produces the graph.
Your spreadsheet is exactly the same as mine, but I needed to add the NOAA x 0.64 values. When graphed it produces the same result which is very different to your graph shown during the SIDC workshop.
Leif Svalgaard says:
April 15, 2011 at 3:45 am
Geoff Sharp says:
April 15, 2011 at 12:07 am
I might send them my research and a copy of this text and see what happens.
—————————
Good idea. To put it in proper perspective include my comments from here.

I have done just that. The SIDC, while noticing some divergence from the SC22/23 maxima, are also not convinced by your research. They are doing an extensive investigation which will take time, but they have also welcomed my research and remain in touch.

April 18, 2011 10:20 pm

Geoff Sharp says:
April 18, 2011 at 9:22 pm
“Then do it right. Here is a spreadsheet with the data”
Your spreadsheet is exactly the same as mine, but I needed to add the NOAA x 0.64 values. When graphed it produces the same result which is very different to your graph shown during the SIDC workshop.

No, you neglected to plot the ratio.
By multiplying NOAA by the long-term k-factor you distribute the discrepancy over the whole plot, making it difficult to see. Multiply NOAA by 0.672 for 1996-2001 and by 0.611 thereafter and show us. Then you can see the difference.
I have done just that. The SIDC while noticing some divergence from SC22/23 maxima are also not convinced by your research.
You are not being quite honest here. They agree that they undercount [otherwise they wouldn’t be doing extensive investigation…]. What they are not convinced about is when the problem started. They think the undercounting started earlier, perhaps in 1998 or 1999. This is possible, hard to tell, as there is not a very sharp jump from one day to the next. Any time between 1998 and 2002 would be fine with me.
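What this proposed comparison looks like in practice can be sketched as follows. The station values below are hypothetical; only the two k-factors (0.672 for 1996-2001, 0.611 thereafter) come from the discussion. Scaling NOAA by the pre-2001 factor throughout makes any post-2001 undercount show up as a ratio falling below 1.

```python
# Sketch: if SIDC were consistent, SIDC / (NOAA * k_pre2001) should stay
# near 1 throughout. The (year, NOAA, SIDC) monthly means are hypothetical.

K_PRE_2001 = 0.672  # k fitted to 1996-2000 (from the text)

records = [
    (1998, 100.0, 67.5),
    (2000, 170.0, 114.0),
    (2003, 90.0, 55.0),
    (2005, 45.0, 27.5),
]

for year, noaa, sidc in records:
    ratio = sidc / (noaa * K_PRE_2001)
    flag = "" if ratio > 0.95 else "  <- low vs. pre-2001 scaling"
    print(f"{year}: ratio = {ratio:.3f}{flag}")
```

Plotting this ratio, rather than the two overlaid series, is what makes a step change at the cut-over year easy to see.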

April 18, 2011 10:48 pm

Leif Svalgaard says:
April 18, 2011 at 10:20 pm
By multiplying NOAA by the long-term k-factor you distribute the discrepancy over the whole plot, making it difficult to see. Multiply NOAA by 0.672 for 1996-2001 and by 0.611 thereafter and show us. Then you can see the difference.
I said that clumsily. What I meant was that if SIDC were not undercounting, then the k-factor should be the same after 2001 as before. Since before it was 0.672, you should multiply ALL the NOAA values 1996-today by the assumed constant 0.672 and compare to SIDC. Or conversely, if you think SIDC after 2001 is correct, then ALL the NOAA values before 2001 should be multiplied by 0.611. This is, of course, a separate plot: either you align before 2001 or you align after 2001. What SIDC is not so sure about is whether 2001 is the best cut-over time; they would like to think that their problem started a bit earlier. I can live with a progressive change from 1998 to 2001. The main point is that they know they have a problem. It does not take a long time to recognize that. It may take a long time to figure out why.

April 19, 2011 3:40 am

Leif Svalgaard says:
April 18, 2011 at 10:20 pm
No, you neglected to plot the ratio.
By multiplying NOAA by the long-term k-factor you distribute the discrepancy over the whole plot, making it difficult to see. Multiply NOAA by 0.672 for 1996-2001 and by 0.611 thereafter and show us. Then you can see the difference.

All that does is align the 2 records. There is no doubt there is a shift at 2001, but what I think you are not seeing is that SIDC is possibly overcounting before 2001. The F10.7 records suggest the same: compared with NOAA pre-2001, the Ri values are much higher than the flux values. There is a very large divergence between Ri and NOAA in 1998, December being a good example. Looking at the drawings there is evidence of two changes:
(1) The Waldmeier method is further skewing the records because of a higher incidence of larger spots within groups.
(2) Spots are getting a higher weighting than observed post 2001.
Here are some examples in Dec 1998.
http://www.specola.ch/drawings/1988/loc-d19881219.JPG
http://www.specola.ch/drawings/1988/loc-d19881214.JPG
http://www.specola.ch/drawings/1988/loc-d19881207.JPG
There are examples of single spots with a score of 4 & 6 that might interest you, along with other groups that might not score as high today…see what you think?
They think the undercounting started earlier, perhaps in 1998 or 1999. This is possible, hard to tell, as there is not a very sharp jump from one day to the next. Any time between 1998 and 2002 would be fine with me.
You are searching for an outcome rather than investigating the facts. Looking at the NOAA and F10.7 records there is no way the SIDC can be considered to be undercounting during 1998. This may only be evident if comparing with some of their other stations.

April 19, 2011 8:45 am

Geoff Sharp says:
April 19, 2011 at 3:40 am
All that does is align the 2 records.
By doing that, you see the difference, otherwise you are just hiding the decline.
There is no doubt there is a shift at 2001
Good, then we can get that out of the way.
but what I think you are not seeing, is that the SIDC are possibly over counting before 2001. The F10.7 records suggest the same
Simply plotting SSN as a function of F10.7 [as I have shown you so many times] shows that SIDC is undercounting after 2001. Here is such a plot:
http://www.leif.org/research/Yearly-SSN-vs-F107.png
To help you understand the plot: pick an F10.7 value on the X-axis and follow a vertical line upwards. You will then see that the line first encounters the red symbols [SSN after 2001] and only later the blue symbols [before 2001]. This means that for a given value of F10.7, the SSN since 2001 is lower [i.e. undercounted] than before. Perhaps you would now claim that the Canadians had a calibration problem around 2001 and were measuring too-high values before that. Comparison with the Japanese flux values shows that this is not the case.
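The vertical-line reading can be sketched numerically. The (F10.7, SSN) pairs below are hypothetical, chosen only to illustrate the geometry of two offset branches: a straight-line fit per era lets us ask what SSN each era gives at the same flux.

```python
# Sketch of the vertical-line reading described above. The (F10.7, SSN)
# pairs are hypothetical: at the same flux, the post-2001 branch sits lower.

def linefit(pts):
    """Ordinary least-squares slope and intercept for (x, y) pairs."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

before_2001 = [(70, 9), (120, 64), (180, 130)]  # hypothetical yearly means
after_2001 = [(70, 5), (120, 52), (180, 112)]   # same flux, lower SSN

sb, ib = linefit(before_2001)
sa, ia = linefit(after_2001)
for f107 in (100, 150):
    print(f"F10.7={f107}: SSN before 2001 ~ {sb*f107+ib:.1f}, "
          f"after ~ {sa*f107+ia:.1f}")
```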
Now, most of the difference between the blue and red data points is not really due to sunspot counting problems, but to the L&P effect [but since you think L&P is bad science, nonsense, and non-existent, you would have to accept that the SSN compared to F10.7 is indeed too low since 2001].
To show that the SIDC undercounting is a genuine defect in their series [and not just the L&P effect], we can directly compare with NOAA: http://www.leif.org/research/Monthly-SIDC-vs-NOAA.png
And find the same thing: the red data points are below the blue. SIDC is undercounting. Comparing with ‘the rest of the world’ shows the same thing.
There is a very large divergence between Ri and NOAA in 1998
Since 1998 is just near the borderline when undercounting began, it is a poor example.
(1)The Waldmeier method is further skewing the records because of a higher incidence of larger spots within groups.
(2) Spots are getting a higher weighting than observed post 2001.

The Waldmeier method has been in effect for decades and did not change near 2001.
There are examples of single spots with a score of 4 & 6 that might interest you, along with other groups that might not score as high today…see what you think?
Since Cortesi is doing the counting as he has for 54 years, I see no difference [unless you are claiming he is going blind].
This may only be evident if comparing with some of their other stations.
Comparing with the ‘rest of the world’ [as I have shown you repeatedly] shows the same as comparing with NOAA and F10.7: SIDC is undercounting, and all stations [incl. SIDC] are further undercounting due to L&P.

April 19, 2011 3:51 pm

Geoff Sharp says:
April 19, 2011 at 3:40 am
There is a very large divergence between Ri and NOAA in 1998
If there is, the fault is with NOAA, as you can see here:
http://www.leif.org/research/Yearly-SSN-vs-F107.png
The light blue diamond at F10.7 = 118 for 1998 shows that Ri falls just on the line of the dark blue diamonds, hence 1998 has just the same relationship between F10.7 and Ri as the other years before 2001. NOAA is less secure than Ri as it is based on only 4-6 stations manned by Air Force NCOs with minimal training.

April 19, 2011 8:04 pm

Leif Svalgaard says:
April 19, 2011 at 8:45 am
There is no doubt there is a shift at 2001
———————-
Good, then we can get that out of the way/

There was also a shift in 1990, 1980, 1965 and 1957. This is nothing new and could easily be a product of the Waldmeier system with changing percentages of larger spots in groups. If NOAA went back to 1956, we would probably see the same results, i.e. NOAA and SIDC would agree when the incidence of larger spots per group fell.
http://www.landscheidt.info/images/sidc_f107.png
Prior cycles also show SIDC values higher on the cycle upramp and a closer match on the downramp. SC20 is an exception. My graph uses daily values with a 30-day moving average.
Simply plotting SSN as a function of F10.7 [as I have shown you so many times] shows that SIDC is undercounting after 2001. Here is such a plot:
http://www.leif.org/research/Yearly-SSN-vs-F107.png

Resorting to yearly figures is a bit desperate. As I have shown you so many times (above), there is minimal divergence between the long-term Canadian flux figures and SSN. There is no evidence to suggest any secular movement in the flux/SSN ratio.
You have decided to group stations according to one curve on the long time record (SC23 upramp), this is biased reporting looking for a result.
NOAA is less secure than Ri as it is based on only 4-6 stations manned by Air Force NCOs with minimal training.
Just recently you said “NOAA was king”.

April 19, 2011 10:04 pm

Geoff Sharp says:
April 19, 2011 at 8:04 pm
“There is no doubt there is a shift at 2001”
Good, then we can get that out of the way

There was also a shift in 1990, 1980, 1965 and 1957.
These were minor [and 1990 was related to move from Ottawa to Penticton, 1980 from Zurich to Brussels]. The big shift was around 2001:
http://www.leif.org/research/SSN-F107-fit-1.png
My graph uses daily values with a 30 day moving average.
Your graphs mislead you. Plot the values against each other as I just did in the above link, and you’ll see why.
Resorting to yearly figures is a bit desperate.
The yearly figures are cleaner as there are fewer points. I have shown the monthly ones several times and you couldn’t see it. So, here you have it again:
http://www.leif.org/research/SSN-F107-fit-2.png
I start in 1952 because there were many missing days before that, and because I want to make sure, by comparing with the Japanese [starting in late 1951], that the Canadian data is good.
Just recently you said “NOAA was king”.
Not that I can remember. I have said that NOAA agrees better with the rest of the world than SIDC, which it does.
But I realize that I take steps that are too large for you, so we’ll continue with some baby steps instead. The first thing you do is to use Excel to make a scatter plot: make three columns, one of F10.7, one of SSN before 2001, and the last of SSN after 2001, then select ‘scatter plot’ as the graph mode. Show the result.
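For readers without Excel, a stdlib-only sketch of those baby steps (the monthly values are hypothetical placeholders): lay out the three columns as CSV, which can then be pasted straight into a spreadsheet and charted as an x-y scatter.

```python
# Sketch: the three-column layout described above, split at 2001.
# The (year, F10.7, SSN) monthly means are hypothetical placeholders.

rows = [
    (1998, 118.0, 64.3),
    (2000, 179.5, 119.6),
    (2003, 128.8, 63.7),
    (2005, 91.7, 29.8),
]

print("F10.7,SSN_pre2001,SSN_post2001")
for year, f107, ssn in rows:
    pre = ssn if year < 2001 else ""
    post = ssn if year >= 2001 else ""
    print(f"{f107},{pre},{post}")
```

Keeping the two eras in separate columns is what lets the chart draw them as two distinct point series against the common F10.7 axis.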

April 20, 2011 4:49 pm

Leif Svalgaard says:
April 19, 2011 at 10:04 pm
Just recently you said “NOAA was king”.
————————————————–
Not that I can remember. I have said that NOAA agrees better with the rest of the world than SIDC, which it does.

Once again you have selective memory.
http://wattsupwiththat.com/2011/01/22/new-wuwt-solar-images-and-data-page/#comment-581264
Anthony Watts says:
January 23, 2011 at 10:48 am
Dudes, while you are arguing… nothing gets done.
—————-
“since my plots is already on the Wolf scale [NOAA’s], I don’t need to change anything. I keep the SIDC data as ‘fly dirt’ only to see how it behaves. NOAA is king.”
But I realize that I take steps that are too large for you, so we’ll continue with some baby steps instead. The first thing you do is to use Excel to make a scatter plot: make three columns, one of F10.7, one of SSN before 2001, and the last of SSN after 2001, then select ‘scatter plot’ as the graph mode. Show the result.
This is a normal response for you once cornered: the need to resort to ad hominem and then the x-y scatter plot routine. If you can, why don’t you show us an x-y scatter plot comparing Ri with adjusted Canadian F10.7 for the SC22/23 upramps? You would need to match Ri with flux since 1947 to find the best alignment. This might show us some useful information.
You commented on Sergio’s eyesight, it has been tested frequently by other observers while he is on holidays. “This indicates that there was no significant drift of Cortesi versus all others over the last 15 years. No significant trend in the local seeing conditions either (they systematically keep track of this as well).”
Ri has also been tested against Locarno:
“Ri scales perfectly with the pilot station of the network: Locarno. Therefore, if there is a long-term trend, it must thus be at Locarno.”
While there are inconsistencies at times that I have pointed out (there are others where a single alpha spot scores 30), the general weighting of spots seems consistent over SC23. A thorough analysis of sunspot weighting and splitting, looking at all drawings, would need to be performed to be sure. So what is different about Locarno? The Waldmeier method is only used at this station (since 1981), with its capacity to drift if the type of spots changes. Sure, this method was used from 1945 to 1981 at Zurich, but do we know the total method used in constructing Ri during this period without Waldmeier’s drawings?
Your method of aligning sunspot counts before 2001 shows one outcome. It is just as reasonable to align the sunspot counts after 2001 which will show another outcome that is just as plausible. At the same time you could overlay the Canadian F10.7 values for reference which I am sure would be interesting. This kind of exercise will show that Ri drifted high during the Sc23 upramp and then followed the pack after 2001.

April 20, 2011 5:36 pm

Geoff Sharp says:
April 20, 2011 at 4:49 pm
Once again you have selective memory.
“since my plots is already on the Wolf scale [NOAA’s], I don’t need to change anything. I keep the SIDC data as ‘fly dirt’ only to see how it behaves. NOAA is king.”

And you take things out of context. What was meant was that a count without the goofy 0.6 factor is king. On my plot, that is the NOAA curve. This does not mean that NOAA is ‘the best’, just that the original Wolf scale with k=1 is to be preferred.
This is a normal response for you once cornered. The need to resort to ad hominem and then the x-y scatter plot routine. If you can, why don’t you show us an x-y scatter plot comparing Ri with Canadian F10.7 adjusted, for the SC22/23 upramps. You would need to match Ri with flux since 1947 to find the best alignment. This might show us some useful information?
I can do that [have already done so, in fact], but my experience with you is that unless you do it yourself you’ll not see the light, so try it and learn.
You commented on Sergio’s eyesight, it has been tested frequently by other observers while he is on holidays. “This indicates that there was no significant drift of Cortesi versus all others over the last 15 years. No significant trend in the local seeing conditions either (they systematically keep track of this as well).”
I maintain that Sergio’s eyesight is good [in spite of some recent glaucoma]. I was simply forestalling that you would use failing eyesight as an excuse or straw man.
Ri has also been tested against Locarno:
“Ri scales perfectly with the pilot station of the network: Locarno. Therefore, if there is a long-term trend, it must thus be at Locarno.”
Not what they said lately [they are making progress]: “We find something similar [the SIDC under counting] using a bunch of core stations of the SIDC network (excluding Locarno).”
While there are inconsistencies at times that I have pointed out (there are others where a single alpha spot scores 30) the general weighting of spots seems consistent over SC23.
As I showed, these are tiny and do not matter.
A thorough analysis of sunspot weighting and splitting looking at all drawings would need to be performed to be sure.
Only Locarno does weighting, and it is only one of ~60 stations in the network. One simple way to deal with this is to drop Locarno altogether, as the station contributes only 1/60 of the data.
So what is different about Locarno? The Waldmeier method is only used at this station (since 1981) with its capacity to drift if the type of spots change.
Locarno has used the Waldmeier method since Sergio started in 1957.
Sure this method was used from 1945 to 1981 at Zurich but do we know the total method used in constructing Ri during this period without Waldmeier’s drawings?
As I have said a gazillion times, the sunspot number is determined from direct visual observations through the eyepiece and not from drawings. And we do know the complete method, because Waldmeier in every yearly report says [and stresses] that there has not been [and must not be] any change in method.
Your method of aligning sunspot counts before 2001 shows one outcome. It is just as reasonable to align the sunspot counts after 2001 which will show another outcome that is just as plausible.
In principle, you could maintain that SIDC after 2001 is correct and they and all other observers back to and including Wolf, Schwabe, and Staudacher are wrong. Comparison with F10.7 and the diurnal variation of the magnetic needle, however, show that is not the case.
At the same time you could overlay the Canadian F10.7 values for reference which I am sure would be interesting. This kind of exercise will show that Ri drifted high during the Sc23 upramp and then followed the pack after 2001.
I have shown you the comparison with F10.7 [even the Canadian version with its jump when they moved to Penticton] many times. This is conclusive. You cannot just ‘overlay’ F10.7 without scaling it correctly to Ri, which is what you need to learn how to do.

April 20, 2011 9:45 pm

Geoff Sharp says:
April 20, 2011 at 4:49 pm
and then the x-y scatter plot routine.
The x-y scatter plot is the standard way scientists calibrate one instrument against another. Read here how Waldmeier suggested using an x-y plot of Rz and F10.7 for precisely that purpose: http://www.leif.org/EOS/W-CCCIV.pdf Study it carefully. To check that you have even looked at it, tell us what the last word on page 6 is.
Then in the next comment, we’ll tackle how to do this for cycle 23 [which you liked so much].

April 20, 2011 11:37 pm

Leif Svalgaard says:
April 20, 2011 at 5:36 pm
And you take things out of context.
Predictable response.
This is a normal response for you once cornered. The need to resort to ad hominem and then the x-y scatter plot routine. If you can, why don’t you show us an x-y scatter plot comparing Ri with Canadian F10.7 adjusted, for the SC22/23 upramps. You would need to match Ri with flux since 1947 to find the best alignment. This might show us some useful information?
——————————————————-
I can do that [have already done so, in fact], but my experience with you is that unless you do it yourself you’ll not see the light, so try it and learn.

What it shows us is that the downramp for SC23 has a better fit than the upramp?
http://www.landscheidt.info/images/ri_f10_xy.png
Ri has also been tested against Locarno:
“Ri scales perfectly with the pilot station of the network: Locarno. Therefore, if there is a long-term trend, it must thus be at Locarno.”
—————————–
Not what they said lately [they are making progress]: “We find something similar [the SIDC under counting] using a bunch of core stations of the SIDC network (excluding Locarno).”

My statement was taken from a private communication with the SIDC in the last few days.
So what is different about Locarno? The Waldmeier method is only used at this station (since 1981) with its capacity to drift if the type of spots change.
———————-
Locarno has used the Waldmeier method since Sergio started in 1957.

But pre-1981 Locarno was not the primary station used to construct Ri. It’s a shame you have to nitpick in this fashion. I am trying to determine possible drift reasons for Locarno.
As I have said a gazillion times, the sunspot number is determined from direct visual observations through the eyepiece and not from drawings. And we do know the complete method, because Waldmeier in every yearly report says [and stresses] that there has not been [and must not be] any change in method.
Yep, we are saying the same thing. No drawings, so there is no way to establish the exact use and proportion of Waldmeier’s weighting method in practice…only a description. This would be a lot quicker if we could bypass the unnecessary nitpicking.
Perhaps you could now perform your exercise in reverse. Group the sunspot counts after 2001?

April 21, 2011 6:50 am

Leif Svalgaard says:
April 20, 2011 at 9:45 pm
Read here how Waldmeier suggested using x-y plot of Rz and F10.7 for precisely that purpose: http://www.leif.org/EOS/W-CCCIV.pdf Study it carefully. To check that you have even looked at it, tell us what the last word on page 6 is.
You failed the test…