Usoskin Et Al. Discover A New Class of Sunspots

Guest Post by Willis Eschenbach

There’s a new paper out by Usoskin et al. entitled “Evidence for distinct modes of solar activity”. To their credit, they’ve archived their data; it’s available here.

Figure 1 shows their reconstructed decadal averages of sunspot numbers for the last three thousand years, from their paper:

Figure 1. The results of Usoskin et al.

Their claim is that when the decadal average sunspot numbers are less than 21, this is a distinct “mode” of solar activity … and that when the decadal average sunspot numbers are greater than 67, that is also a separate “mode” of solar activity.

Now, being a suspicious fellow myself, I figured I’d take a look at their numbers … along with the decadal averages of Hoyt and Schatten. That data is available here.

I got my first surprise when I plotted up their results …

Figure 2 shows their results, using their data.

Figure 2. Sunspot numbers from the data provided by Usoskin et al.

The surprising part to me was the claim by Usoskin et al. that in the decade centered on 1445, there were minus three (-3) sunspots on average … and there might have been as few as minus ten sunspots. Like I said, Usoskin et al. seem to have discovered the sunspot equivalent of antimatter, the “anti-sunspot” … however, they must have wanted to hide their light under a bushel, as they’ve conveniently excluded the anti-sunspots from what they show in Figure 1 …

The next surprise involved why they chose the numbers 21 and 67 for the breaks between the claimed solar “modes”. Here’s the basis on which they’ve done it.

Figure 3. The histogram of their reconstructed sunspot numbers. ORIGINAL CAPTION: Fig. 3. A) Probability density function (PDF) of the reconstructed decadal sunspot numbers as derived from the same 10^6 series as in Fig. 2 (gray-filled curve). The blue curve shows the best-fit bi-Gaussian curve (individual Gaussians with mean/σ being 44/23 and 12.5/18 are shown as dashed blue curves). Also shown in red is the PDF of the historically observed decadal group sunspot numbers (Hoyt & Schatten 1998) (using bins of width ΔS = 10).

The caption to their Figure 3 also says:

Vertical dashed lines indicate an approximate separation of the three modes and correspond to ±1σ from the main peak, viz. S = 21 and 67.

Now, any histogram has its “main peak” at the value of the “mode”, which is the most common value of the data. Their Figure 3 shows a mode of 44 and a standard deviation “sigma” of 23. Unfortunately, their data shows nothing of the sort. Their data has a mode of 47 and a standard deviation of 16.8, call it 17. That means that if we go one sigma on either side of the mode, as they have done, we get 30 for the low threshold, higher than their 21 … and we get 64 for the high threshold, not 67 as they claim.

So that was the second surprise: I couldn’t come close to reproducing their calculations. But even if I could have, it wouldn’t have mattered, because I must admit that I truly don’t understand the logic of setting thresholds at one sigma above and below not the mean, not the median, but the mode of the data … that one makes no sense at all.
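For anyone who wants to check this themselves, here’s a minimal sketch of the calculation in Python. It assumes the archived decadal means have been loaded into a one-dimensional array; the file name and column are placeholders for whatever layout the archive actually uses:

```python
import numpy as np

# Placeholder loading step -- substitute the actual file and column
# from the Usoskin et al. archive.
decadal = np.loadtxt("usoskin_decadal.txt", usecols=(1,))

# Bin the data the way their Figure 3 does (bin width of 10), and take
# the "mode" as the centre of the most populated bin.
lo = np.floor(decadal.min() / 10.0) * 10.0
counts, edges = np.histogram(decadal, bins=np.arange(lo, decadal.max() + 10.0, 10.0))
mode = edges[counts.argmax()] + 5.0   # centre of the tallest bin

sigma = decadal.std(ddof=1)           # sample standard deviation

print(f"mode = {mode:.1f}, sigma = {sigma:.1f}")
print(f"thresholds (mode ± 1 sigma): {mode - sigma:.1f} and {mode + sigma:.1f}")
```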

Next, in the right part of Figure 1 they show a squashed-up tiny version of the comparison of their results with those of Hoyt and Schatten … the Hoyt-Schatten data has its own problems, but let’s at least take a look at the difference between the two. Figure 4 shows the two datasets during the period of overlap, 1615-1945:

Figure 4. Decadal averages of sunspots, according to Hoyt-Schatten, and also according to Usoskin et al.

Don’t know about you, but I find that result pretty pathetic. In a number of decades, the difference between the two datasets approaches 100% … and the agreement doesn’t improve toward the modern end, as you’d expect. Instead, at the recent end the Hoyt-Schatten data, which at that point is based on good observations, shows about twice the number of sunspots shown by the Usoskin reconstruction. Like I said … not good.

Finally, and most importantly, I suspect that at least some of what we see in Figure 3 above is simply a spurious interference pattern between the length of the sunspot cycles (9 to 13 years) and their averaging period of ten years. Hang on, let me see if my suspicions are true …

OK, back again. I was right; here are the results. What I’ve done is picked a typical 12-year sunspot cycle from the Hoyt-Schatten data. Then I replicated it over and over, starting in 1600. So I have perfectly cyclical data, with an average value of 42.

But once we do the decadal averaging? … well, Figure 5 shows that result:

Figure 5. The effect of decadal averaging on 12-year pseudo-sunspot cycles. Upper panel (blue) shows pseudo-sunspot counts, lower panel (red) shows decadal averaging of the upper panel data.

Note the decadal averages of the upper panel data, which are shown in red in the lower panel … bearing in mind that the underlying data are perfectly cyclical, you can see that none of the variations in the decadal averages are real. Instead, the sixty-year swings in the red line are entirely spurious cycles that do not exist in the data, generated solely by the fact that the 10-year averaging period is close to the 12-year sunspot cycle … and the Usoskin analysis is based entirely on such decadal averages.
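If you want to reproduce the effect without the Hoyt-Schatten file, here’s a minimal sketch in Python. The cosine shape is my stand-in, not the actual 12-year cycle I used, but any fixed 12-year shape shows the same thing:

```python
import numpy as np

# A perfectly repeating 12-year pseudo-sunspot cycle starting in 1600,
# with an average value of 42 (the cosine shape is an arbitrary stand-in).
years = np.arange(1600, 1950)
pseudo = 42.0 * (1.0 - np.cos(2.0 * np.pi * (years - 1600) / 12.0))

# Non-overlapping decadal (10-year) averages, as in the reconstruction.
n = len(years) // 10 * 10
dec_mean = pseudo[:n].reshape(-1, 10).mean(axis=1)
dec_year = years[:n].reshape(-1, 10).mean(axis=1)

# The input is strictly periodic, yet the decadal means swing on a
# spurious ~60-year beat, since lcm(10, 12) = 60.
for y, m in zip(dec_year, dec_mean):
    print(f"{y:6.1f}: {m:5.1f}")
```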

But wait … it gets worse. Sunspot cycles vary in length, so the error caused by the decadal averaging will not be constant (and thus removable) as in the analysis above. Instead, decadal averaging will lead to a wildly varying spurious signal, which will not be regular as in Figure 5 … but which will be just as bogus.

In particular, using a histogram on such decadally averaged data will lead to very incorrect conclusions. For example, in the pseudo-sunspot data above, here is the histogram of the decadal averages shown in red.

Figure 6. Histogram of the decadal average data shown in Figure 5 above.

Hmmm … Figure 6 shows a peak on the right, with a smaller secondary peak on the left … does this remind you of Figure 3? Shall we now declare, as Usoskin et al. did, and with equal justification, that the pseudo-sunspot data has two “modes”?
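And here’s the corresponding histogram check, again with the stand-in cosine cycle rather than the actual Hoyt-Schatten shape. The input has no “modes” at all, yet the decadal averages pile up at the two ends of their range:

```python
import numpy as np

# Strictly periodic 12-year pseudo-cycle, decadally averaged, then histogrammed.
years = np.arange(1600, 2600)   # a long run, just to fill out the histogram
pseudo = 42.0 * (1.0 - np.cos(2.0 * np.pi * (years - 1600) / 12.0))
n = len(years) // 10 * 10
dec = pseudo[:n].reshape(-1, 10).mean(axis=1)

# Crude text histogram: the counts cluster at both ends of the range,
# purely as an artifact of the 10-year averaging of a 12-year cycle.
counts, edges = np.histogram(dec, bins=6)
for low, high, c in zip(edges[:-1], edges[1:], counts):
    print(f"{low:5.1f} to {high:5.1f}: {'#' * (c // 2)}")
```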

CONCLUSIONS:

In no particular order …

1. The Usoskin et al. reconstruction gives us a new class of sunspots, the famous “anti-spots”. Like the square root of minus one, these are hard to observe in the wild … but Usoskin et al. have managed to do it.

2. Despite their claims, the correlation of their proxy-based results with observations is not very good, and is particularly bad in recent times. Their proxies often give results that are in error by ~ 100%, but not always in the same direction. Sometimes they are twice the observations … sometimes they are half the observations. Not impressive at all.

3. They have set their thresholds based on a bizarre combination of the mode and the standard deviation, a procedure I’ve never seen used.

4. They provided no justification for these thresholds other than their histogram, and in fact, you could do the same with any dataset and declare (with as little justification) that it has “modes”.

5. As I’ve shown above, the shape of the histogram (which is the basis of all of their claims) is highly influenced by the interaction between the length(s) of the sunspot cycle and the decadal averaging.

As a result of all of those problems, I’m sorry to say that their claims about the sun having “modes” simply do not stand up to close examination. They may be correct, anything’s possible … but their analysis doesn’t even come near to establishing that claim of distinct solar “modes”.

Regards to all,

w.

THE USUAL: If you disagree with me or someone else, please quote the exact words that you disagree with. That way, we can all understand just what it is that you object to.


140 Comments
February 24, 2014 12:30 am

Sunspot count during this February has been on the high side and it is likely to end above 100 (non-smoothed). This would be a new monthly peak for SC24 and the highest monthly count since October 2002. The composite polar field has moved back into ‘negative’ territory; it appears that SC24 isn’t ‘done’ yet and might still have a surprise or two.

cd
February 24, 2014 6:51 am

Willis
In the abstract they clearly state that they’re dealing with a bimodal distribution. However, it seems that they’re going a step further and assuming two unimodal distributions (not necessarily the same thing as bimodal – there are a suite of functions to assess the appropriateness of such assumptions), and they’re only looking at one (the higher), in which case their mode may approximate the mean of the “right” distribution. That’s what it looks like to me.
They may have also done a normal score transformation in order to acquire some of their stats (obviously not the mode) and hence the blue curve.
But it is certainly a little unconventional.
As for their “smoothing/averaging” of the sun-spot cycle – what? I didn’t/couldn’t read the article but it seems pretty poor stuff. Are you sure this is what they did? But given the quality of their data review it seems very likely.
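For what it’s worth, the “normal score transformation” cd mentions can be sketched in a few lines of Python; the gamma-distributed input here is just a stand-in for the reconstruction data, not the actual series:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=4.0, scale=10.0, size=300)   # skewed stand-in data

# Normal-score transform: replace each value with the standard-normal
# quantile of its rank; the output is Gaussian whatever the input shape.
ranks = stats.rankdata(data)                        # ranks 1..N
scores = stats.norm.ppf(ranks / (len(data) + 1.0))  # avoids ppf(0) and ppf(1)

print(f"mean {scores.mean():+.3f}, std {scores.std():.3f}")  # ~0 and ~1
```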

tadchem
February 24, 2014 10:35 am

One of the first lessons of data analysis I was given in graduate school (by a Professor of Physical Chemistry) was this: “There are 3 kinds of liars. There are plain Liars, Damned Liars, and Deconvoluters.” The claim of resolving this ‘data’ into three ‘modes’ is an example of deconvolution.

1sky1
February 24, 2014 4:23 pm

Inasmuch as the Schwabe cycle is by no means strictly periodic, and the power density peaks closer to 11 yrs than to 12 yrs, Willis’ Figure 5 is somewhat misleading regarding the strength of aliasing introduced by decimation of sunspot data to decadal average values. Nevertheless, to avoid aliasing in trying to reveal longer-term variations, the original data should have been either pre-filtered to suppress the Schwabe cycle completely or a decadal average computed, say, every 5 years.
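A sketch of the second suggestion (overlapping decadal means computed every 5 years), using an 11-year sinusoid as a stand-in for the Schwabe cycle:

```python
import numpy as np

# Stand-in annual series with an 11-year Schwabe-like cycle.
years = np.arange(1700, 2000)
annual = 42.0 * (1.0 - np.cos(2.0 * np.pi * years / 11.0))

# 10-year running mean, then sampled every 5 years (50% overlap).
# Sampling at 5-year steps puts the Nyquist frequency at 1/(10 yr), so the
# residual ~1/(11 yr) Schwabe leakage stays below Nyquist instead of aliasing.
running = np.convolve(annual, np.ones(10) / 10.0, mode="valid")
five_yearly = running[::5]

print(five_yearly[:10].round(1))
```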

February 24, 2014 7:52 pm

I spent last night reviewing my college textbook “Discrete-Time Signal Processing” (Oppenheim et al.). If the following elements in the analysis of finite time series are not described in detail, then it’s highly likely the author has put spurious signals into the data and the analysis has to be discarded.
1. Leakage prevention by proper windowing of the finite series. The choice of window should be described (e.g. Kaiser, Hamming, etc). Note that the default (no window) is a boxcar, with bad results. Think of the finite series as an infinite repeat of that series when the math is actually done. If the beginning and ending don’t match you’re going to get spurious leakage.
2. Attention to the Nyquist criterion. Any time I see “averaged over” with decimation it’s clear that the author has done something wrong. The proper technique is upsampling followed by a quality filter such as Gaussian, Hamming, etc. i.e. like your CD player works.
3. Resolution (analogous to Nyquist but for the low frequency components). You can’t discern a 60 year cycle from 70 years of data, you need at minimum 120 years of data, and even then the resolution of frequency is very low. Proper padding with zeros needs to be done to prevent aliasing low frequencies down to DC.
4. Quantization noise. All measurements have errors, and this needs to be modeled in the analysis.
5. The original data and in-between results must be provided to show that the author didn’t make easy-to-make mistakes such as applying the above steps in the wrong order. This was a requirement when this was used at two industrial firms I worked at in the 1990s. Mistakes are very easy to make, it’s a very complex and hard to understand subject.
References: the textbook: http://www.amazon.com/Discrete-Time-Signal-Processing-Edition-Prentice/dp/0131988425. Accurate but practically unreadable if you don’t already know what you are doing. I distinctly remember hating it in college.
More accessible: http://www.silcom.com/~aludwig/Signal_processing/Signal_processing.htm#Resolution_and_bandwidth
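To make point 1 above concrete, here is a minimal leakage demonstration in Python: a sinusoid whose period doesn’t divide the record length, transformed with and without a Hann window. This illustrates the general point only; it is not a re-analysis of the sunspot data:

```python
import numpy as np

n = 60
t = np.arange(n)
x = np.sin(2.0 * np.pi * t / 11.0)   # ~5.45 cycles: the ends don't match up

spec_boxcar = np.abs(np.fft.rfft(x))                 # no window = boxcar
spec_hann = np.abs(np.fft.rfft(x * np.hanning(n)))   # Hann-windowed

# The boxcar spectrum leaks energy far from the true frequency (~bin 5.5);
# the window suppresses distant sidelobes at the cost of a wider main peak.
for k in range(12):
    print(f"bin {k:2d}: boxcar {spec_boxcar[k]:7.3f}   hann {spec_hann[k]:7.3f}")
```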

Konrad
February 25, 2014 12:06 am

Willis Eschenbach says:
February 24, 2014 at 11:59 am
——————————————
Willis,
I think your point of “negative sunspots” has been quite effectively made on this thread, so it should be safe to respond to the SW/LWIR into oceans issue.
“I pointed out that we have actual 20-minute data on this. We don’t need to use “average solar radiation” as you claim, and so we don’t. Your claim is wrong. We use instantaneous measurements of the SW heating at depth, not averages as you say.”
What I was referring to was initial two shell radiative modelling such as depicted in the infamous Trenberth-Kiehl diagram. This kind of modelling does indeed treat incident SW over the oceans as a constant ¼ sun. Initial claims of the radiative greenhouse effect are based on this kind of modelling.
“The TAO buoys are the empirical experiment par excellence. When we look at their data we are looking at experimental results.”
This is correct, and I note that in quantifying the ITCZ cloud thermostat you have found the data very useful. It is disappointing to see so much money wasted in the AGW madness that could be spent on repair and upgrade of this system. However what is being measured is the noisy real world.
“So instead of measuring the effect of varying longwave and shortwave on some foam-lined box in the lab, we are measuring the effect of varying longwave and shortwave above, at the surface of, and at depth in the actual factual ocean [..] From those results we can determine how much the sun heats the ocean and how much the LW heats the ocean.”
The problem with using the TAO buoy array is noise. To determine if incident LWIR can heat or slow the cooling rate of liquid water that is free to evaporatively cool, a simple clean lab experiment should be all that is required. No priest or acolyte of the Church of Radiative Climatology can ever produce one when challenged. Why is that? Every study claiming that LWIR can slow the cooling of the oceans has been a noisy outdoor study with chronic limitations. The Marriott “study” even went so far as to merge day and night data!
I have run the simple and above all clean empirical experiments. Incident LWIR can heat (but not above the temperature of the IR source) or slow the cooling of almost any material that does not evaporatively cool. It just doesn’t work for liquid water that is free to evaporatively cool.*
“For a discussion of the diurnal effects of longwave and shortwave, you might enjoy my post “Cloud Radiation Forcing in the TAO Dataset”.”
I did read this at the time Willis, and I believe I posed a question that went unanswered about night only data. The reason for this is the only way TAO buoy data could prove LWIR having an effect on the cooling rate of the oceans would be to observe the cooling curve of a surface following thermometer on a night with low drifting 4 octa cloud cover. Flat spots in the cooling curve (adjusted for surface wind speed variation) should correlate with peaks in LWIR from drifting cloud. (no using day data like Marriott due to low angle of incidence scattering of UV, SW and SWIR)
*my experiments show that LWIR neither heats nor slows the cooling rate of liquid water that is free to evaporatively cool, and does have an effect close to the Trenberth-Kiehl style claims when evaporation is mechanically restricted. However, I believe it should be possible to get a result if the water and the air above it are very cold and no evaporation is occurring.
Regards,
Konrad.

William Astley
February 25, 2014 1:16 am

In reply to
lsvalgaard says:
Evidence for distinct modes of solar activity⋆
I. G. Usoskin1, G. Hulot2, Y. Gallet2, R. Roth3, A. Licht2, F. Joos3, G. A. Kovaltsov4, E. Thébault2 and A. Khokhlov
William:
The cosmogenic isotope data supports Usoskin’s assertion that the solar magnetic cycle activity in the last 70 years was the highest in 70 years.
We present a new adjustment-free, physical reconstruction of solar activity over the past three millennia, using the latest verified carbon cycle, 14C production, and archeomagnetic field models. This great improvement allowed us to study different modes of solar activity at an unprecedented level of details.
William: Your name calling indicates you have no scientific response to their adjustment free analysis.
Conclusions. The Sun is shown to operate in distinct modes – a main general mode, a Grand minimum mode corresponding to an inactive Sun, and a possible Grand maximum mode corresponding to an unusually active Sun. These results provide important constraints for both dynamo models of Sun-like stars and investigations of possible solar influence on Earth’s climate.
William: Curious that we are suddenly starting to observe cooling of both poles.

Konrad
February 25, 2014 1:36 am

Willis Eschenbach says:
February 24, 2014 at 12:33 pm
———————————————
“Empirical experiment shows a very great difference in average temperatures between intermittent SW heating at depth in transparent materials than averaged SW heating at the surface of opaque materials.”
I asked if this claim was untrue. You responded –
“No clue. Not enough information there to answer the question. What transparent and opaque materials are we speaking about, for example? Averaged over what period? You know … details. Not enough of them.”
Fair enough. This was a reference to a simple experiment described previously on this blog showing that SW heating of transparent materials at depth gives very different results than SW heating of the same materials if their upper surface is opaque. The purpose of the experiment is to show how an intermittent SW source peaking over 1000 W/m2 is quite sufficient to warm our oceans. I point out again that the high priests of the Church of Radiative Climatology have decreed that solar SW alone is not enough to keep our oceans from freezing.
Here is the recipe for “Shredded Lukewarm turkey in Boltzmannic vinegar”
Take two 100 x 100 x 10mm blocks of clear acrylic. Paint one black on the base (block A), and the second black on the top surface (block B). Spray both blocks with several layers of clear-coat on their top surfaces to ensure equal reflectivity and IR emissivity. Attach thermocouples to upper and lower surfaces. Insulate the blocks on the sides and base. Enclose each in a small LDPE greenhouse to minimise conductive losses. Now expose to strong solar SW.
As little as 3 hours should result in a 17C average differential between the blocks. The block with the black base runs hotter. SB equations will not give the correct answer. (Caution – experiment temperatures can exceed 115C.)
What would the priests of the Church of Radiative Climatology say? Both blocks are absorbing the same amount of solar radiation, both blocks have the same ability to emit LWIR, they should reach the same equilibrium temperature.
However block A reaches a far higher average temperature, why? The SW absorbed by block A heats from the base, and non-radiative transports (conduction) govern how fast energy returns to the surface to be radiated as LWIR. The SW absorbed by block B is absorbed at the surface and some is immediately re-radiated as LWIR before conduction can carry it down into the block below. Our oceans most closely resemble block A, however two shell radiative models that consider the ocean just “surface” model the oceans more like block B.
This is how solar SW alone is quite sufficient to heat our oceans. SW heating at depth is instantaneous, however the slow speed of non-radiative transport back to the surface allows energy to accumulate over the diurnal cycle. If our oceans could be instantly turned to ice, my crude guess is that it may take over a decade for the sun to thaw them, but they would thaw even under a non-radiative atmosphere.
“Put different amounts of energy into different things in different ways, you get different resulting temperatures.”
I am putting the SAME amount of energy into materials in different ways and getting different temperatures.
“However, I don’t understand your point in highlighting that obvious fact.”
The point is this –
The sun heats our oceans.
The net effect of the atmosphere over the oceans is ocean cooling.
The net effect of radiative gases in our atmosphere is atmospheric cooling.
AGW is a physical impossibility.
The next question I hope to answer through empirical experiment is how hot our oceans would get if all atmospheric features above excepting pressure were removed. No evaporative or conductive cooling but also no downwelling LWIR –
http://i42.tinypic.com/315nbdl.jpg
The claims of the priests of the Church of Radiative Climatology indicate that the water sample should freeze. Do you seriously think there is any chance of that? How did they go with the whole “the effect of clouds on surface temperatures is neutral” thing?
Regards,
Konrad.

February 25, 2014 2:29 am

William Astley says:
February 25, 2014 at 1:16 am
The cosmogenic isotope data supports Usoskin’s assertion that the solar magnetic cycle activity in the last 70 years was the highest in 70 years.
Considering that the 14C data does not cover the last 70 years your statement is very curious.
possible Grand maximum mode corresponding to an unusually active Sun. These results provide important constraints for both dynamo models of Sun-like stars and investigations of possible solar influence on Earth’s climate.
As there has not been a Grand Maximum the last 70 years [the sun has not been unusually active] no such constraints seem of importance.

1sky1
February 25, 2014 4:23 pm

Peter Sable says:
February 24, 2014 at 7:52 pm
Data “windowing” to suppress spectral “leakage” is relevant only for frequency-domain analysis. And zero-padding improves low-frequency resolution without distortion only if the original signal is time-limited (e.g., FIR filter). Neither is advisable for time-domain analysis, such as discussed here.

William Astley
February 25, 2014 4:24 pm

In reply to:
The surprising part to me was the claim by Usoskin et al. that in the decade centered on 1445, there were minus three (-3) sunspots on average … and there might have been as few as minus ten sunspots. Like I said, Usoskin et al. seem to have discovered the sunspot equivalent of antimatter, the “anti-sunspot” … however, they must have wanted to hide their light under a bushel, as they’ve conveniently excluded the anti-sunspots from what they show in Figure 1 …
Negative sunspot counts occur because sunspot count is used as a proxy solar variable for all of the processes that modulate the solar heliosphere, which in turn modulates the cosmogenic isotope count.
To disprove a paper it is first necessary to understand the paper.
Usoskin et al. use recent direct observations of sunspot count in the current era vs. the cosmogenic isotope count for calibration. In the past the cosmogenic isotope count is very, very low, which is not in agreement with silly comments in this forum that the Maunder minimum was more active than the current era or that the current era was not a grand maximum.
Those comments are incorrect.
Planetary cooling due to the interruption of solar magnetic cycle 24 will move the conversation off of whether solar magnetic cycle changes do or do not modulate planetary temperature to how they do, and why the AGW mechanism saturated.
The reason for the delay in the cooling is related to the physical reason why the AGW mechanism saturates. There has been a recent set of experimental results that confirm there is a very, very strong electromagnetic field about the sun that varies with the solar magnetic cycle. An example is the recent discovery that the size of the proton is too small (see the recent Scientific American article). A second example is the discovery that there is a change in atomic decay rates depending on distance from the sun, which can be explained by a scalar field about the sun. (A very, very strong electrostatic field affects both the proton size and atomic decay rates.)
There are a series of papers, beyond the scope of this forum, concerning quasar observations that confirm matter changes to resist the collapse of a very large object. The object that forms when very large objects collapse is active and emits charge and forms a very, very strong magnetic field to arrest the collapse. A very, very strong electrostatic field also affects redshift, which explains redshift anomalies concerning both quasars and their host galaxies. The active object that forms when large objects collapse explains quasar jets, lineless quasar spectra, naked quasars, quasar clustering, the galaxy rotational anomaly, and the phenomena that dark energy and dark matter were created to explain.
Our sun formed from the collapse of a supernova. The core of our sun is this different state of matter.

February 25, 2014 4:40 pm

William Astley says:
February 25, 2014 at 4:24 pm
Usoskin et al. use recent direct observations of sunspot count in the current era vs. the cosmogenic isotope count for calibration.
He has no recent cosmogenic data with which to calibrate. The record stops before 1950. He does not ‘use recent direct observation of sunspot count’, but the old Group Sunspot Number by Hoyt and Schatten which has been shown to be faulty.
The rest of your post is pure speculation with little basis in fact.

Konrad
February 25, 2014 6:18 pm

Willis Eschenbach says:
February 25, 2014 at 9:08 am
————————————
Willis,
thank you for your continuing polite and considered responses. I trust we have moved beyond the “asinine tripe” stage.
“Say what? Why would SB equations not give the right answer? In order for you to convince me of that, you’ll have to show the equations that you are using. I say this, because the SB equations have given other people the right answer for centuries … so if they give you the wrong answer, I’ve got to include “pilot error” in the differential diagnosis …”
Pilot error? I have a PPL. I may not be a bold pilot, but I am an old pilot 😉 I do not actually bother with SB calcs as most of my work deals with issues that would require CFD. My experience is that sceptics now distrust computer modelling, so I use easily replicated empirical experiment to demonstrate my points. My easily replicated experiments are designed to conclusively illustrate where instantaneous radiative flux equations fail.
“What would the priests of the Church of Radiative Climatology say? I haven’t a clue … I never met one of them.”
Try Trenberth, Kiehl or the high priest Dr. Perriehumbert, writer of the sacred texts and player of the devil’s instrument.
“Both blocks are absorbing the same amount of solar radiation, both blocks have the same ability to emit LWIR, they should reach the same equilibrium temperature. – Yes, and they will … but not in three hours.”
No, they won’t. And understanding why they won’t is critical.
“But heck, let’s play your game. […] Neither one of them will be radiating less than that, and neither one will be radiating more than that.”
The issue here is not radiative flux at the surface but temperature of the material. I can easily illustrate this point with an absorption/emissivity example –
Take two aluminium plates 2 mm thick and of equal area; polish one to a mirror finish and paint the other matt black. Place them in a vacuum and illuminate each with equal constant solar radiation. The matt black plate heats faster, but given time the “mirror” plate reaches the higher equilibrium temperature.*
My point is that just as emissivity & conduction can affect the equilibrium temperature of a material exposed to electromagnetic radiation, so too can surface transparency/translucency & conduction.
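The standard radiative-balance arithmetic behind the plate example looks like this; the absorptivity/emissivity values below are rough handbook-style assumptions, not measurements, and the one-sided balance is a simplification:

```python
# Equilibrium of a plate in vacuum: alpha * S = eps * sigma_SB * T^4
# (radiating from the lit side only, for simplicity).
SIGMA_SB = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1000.0           # incident solar flux in W/m^2 (assumed)

plates = {
    "matt black":         {"alpha": 0.95, "eps": 0.90},
    "polished aluminium": {"alpha": 0.15, "eps": 0.05},
}
for name, p in plates.items():
    t_eq = (p["alpha"] * S / (p["eps"] * SIGMA_SB)) ** 0.25
    print(f"{name}: alpha/eps = {p['alpha'] / p['eps']:.1f}, T_eq ~ {t_eq:.0f} K")
```

The point survives the simplification: the low-emissivity “mirror” plate ends up with the higher equilibrium temperature, because what matters is the ratio alpha/eps, not either property alone.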
“And this is why we deal with radiative fluxes instead of temperatures, because radiative fluxes are conserved and temperature is not. In a steady state situation like the blocks or the earth, what goes in has to equal what comes out.”
And this, young Skywalker, is why you fail. What radiation goes in and what goes out gives you no clue as to the temperature profiles of moving fluids in a gravity field. Attempting to parametrise non-radiative fluxes is a dead end. For one thing it totally ignores “emergent phenomena”, as you have previously correctly noted.
Callendar tried to revive the AGW in 1938. His paper was published along with comments from Sir George Simpson –
“..but he would like to mention a few points which Mr. Callendar might wish to reconsider. In the first place he thought it was not sufficiently realised by non-meteorologists who came for the first time to help the Society in its study, that it was impossible to solve the problem of the temperature distribution in the atmosphere by working out the radiation. The atmosphere was not in a state of radiative equilibrium, and it also received heat by transfer from one part to another. In the second place, one had to remember that the temperature distribution in the atmosphere was determined almost entirely by the movement of the air up and down. This forced the atmosphere into a temperature distribution which was quite out of balance with the radiation. One could not, therefore, calculate the effect of changing any one factor in the atmosphere..”
I contend that decades before my birth, Sir George Simpson had it right. What say you?
*the matt black/mirror example can also be applied to our atmosphere. Which does our atmosphere containing radiative gases most closely resemble? The matt black material, good at absorbing and emitting IR, or the mirror material, poor at absorbing and emitting IR?
Regards,
Konrad.

February 26, 2014 12:13 am

William Astley says:
February 25, 2014 at 4:24 pm
Like I said, Usoskin et al. seem to have discovered the sunspot equivalent of antimatter, the “anti-sunspot” … however, they must have wanted to hide their light under a bushel, as they’ve conveniently excluded the anti-sunspots from what they show in Figure 1 …
Negative SSN is a physical impossibility. It is not unusual that the Grand solar minima reconstructions show negative values for the SSN; this may occur as an after-product of the calculations. One of the reasons could be a conflict between the uncertainty in the geomagnetic field effect (which is subtracted) and the uncertainty in the radionuclide production, i.e.:
– Solar modulation of the cosmic ray particles is calculated by removing the effect of the geomagnetic field based on paleomagnetic data.
– Geomagnetic field intensity is estimated from paleomagnetic data using cosmogenic radionuclide production records to eliminate solar influence.

Tom
Reply to  Willis Eschenbach
February 26, 2014 1:38 pm

Willis Eschenbach February 26, 2014 at 10:29 am
Willis, you said:“Of the total upwelling radiation from the surface of ~ 400 W/m2, the resulting radiation to space is 240 W/m2”.
I have no difficulty with the 240 W/m^2 figure, noting that this applies to the entire surface of the earth. I think you really need to focus on the “upwelling … 400 W/m^2”. Firstly, the “upwelling” figure you cite should be 480 W/m^2; we can get to that later, if needs be. The source of the widespread misunderstanding of the ‘energy flux gap’ is the perceived need to conserve energy flux. There is no universal law which mandates that energy flux be conserved; however, there is a never-falsified law which requires that energy be conserved.
Equating the terrestrial flux output with the solar flux input is a fatal error. Isn’t this obvious? The incoming solar energy strikes only half of the earth’s surface area at any instant, compared to that surface area which is constantly radiating energy to space. If the area which a given amount of solar energy irradiates is halved, the energy flux, from the same amount of energy, doubles. So what? Quite simply, 480 W/m^2 enters half the system and 240 W/m^2 leaves from the entire system. There is no missing energy, ever.
I defy you to contest this “heated on half, cooled off double” logic. It is the physical reality of the geometry and thermodynamics of any planet heated by a single sun. Once you permit yourself to see that, the rest becomes obvious, and simple. Via Stefan-Boltzmann, a flux of 480 W/m^2 on half the globe would generate a (linearly averaged) temperature of +30C, which is kind of sensible, and which accords with people’s everyday experience that direct sunlight is hot. Ice melts and water evaporates. If, in the manner of Kiehl/Trenberth, you arbitrarily halve the actual incoming solar flux to artificially ‘average’ it over the entire global surface area, then, via SB, the sun generates only -18C. The unnecessary need for the atmospheric radiative GHE is generated; magic happens and an entire planet is heated by a frigid atmosphere to +15C – by a secondary alleged process, but with no new energy entering the system.
This entire scam is generated and sustained by wrongly trying to numerically equate terrestrial flux output from 2x m^2, with solar flux input to only x m^2. Pretty easy, really.
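The Stefan-Boltzmann arithmetic quoted in this comment is easy to check (emissivity 1 assumed, as the comment implies); whether averaging the fluxes this way is physically meaningful is a separate question:

```python
# T = (F / sigma)^(1/4), with emissivity taken as 1.
SIGMA_SB = 5.67e-8   # W m^-2 K^-4
for flux in (240.0, 480.0):
    t = (flux / SIGMA_SB) ** 0.25
    print(f"{flux:.0f} W/m^2 -> {t:.0f} K ({t - 273.15:+.0f} C)")
# 240 W/m^2 -> ~255 K (-18 C); 480 W/m^2 -> ~303 K (+30 C)
```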

February 26, 2014 10:56 am

All the previous works of researchers, from the very beginning of thinking about phenomena on the sun and sunspots, have produced two kinds of results, neither of them with true conclusions as to why and how changes occur in the sun and what the true cause of their occurrence is. The first kind of result is obtained by measuring the different physical phenomena on the sun, and the other is the result of giving fictional and completely illogical values, such as data on the number of sunspots from more than a few thousand years ago. Who then measured the number of these spots, and on what assumption is the result based? I must stress, however anyone takes this, that these are all data and conclusions that do not lead to actual results. The appearance of the sun and everything related to these phenomena have completely different causes, and since science does not want to learn, it will wander for many years, deceiving both science and the people who want to know the truth.
Here I ask all of you who argue about this: do any of you want to come down to my level of knowledge about these phenomena and establish a correspondence, with me under a contractual obligation to give my evidence and my idea of how to solve this problem? There are laws in celestial mechanics by which to reach a solution to the enigma of all time (past, present, future). If you want cooperation, with no fear of someone policing what you can and cannot do, decide for yourselves and agree to make a contract according to which this research is conducted. It will take a lot of astronomical data, and a powerful program for computing and animation, but I’m sure I’ll get to the results. I have had to work through enormous difficulties, especially because none of you will hear of new methods from people who are unknown to you, as I am. I hope that among you there are those who respect this debate, especially since I have dealt with this for a few tens of years, and now I’m 77 years old and it is not appropriate to deceive someone. I am waiting for an answer.

Tom
February 26, 2014 6:43 pm

Searching for that .gif of a tumbleweed blowing through … maybe with a mission bell, faintly audible over the noise of the wind …

Konrad
February 26, 2014 8:33 pm

Willis Eschenbach says:
February 26, 2014 at 10:01 am
—————————————-
Willis,
thank you again for taking the time to respond.
“Analyses of the energy budget of the planet are done in terms of radiative fluxes (W/m2) and not in terms of temperature. Why? Because radiative flux is a measure of energy and energy is conserved … whereas temperature is not conserved. You can’t do an energy budget in terms of temperature.”
If we had better satellite measurements of ingoing and outgoing radiation from the planet we could say whether the planet was accumulating or losing energy. (Errors as great as 5 W/m2 indicate we currently cannot do this.) However, we could not say from this measurement where in the oceans or atmosphere this was affecting temperatures. The global warming claims are all about near-surface temperature. For this, non-radiative transports need to be correctly modelled. In the original “basic physics” of the “settled science” this was not done correctly. The priests of the Church of Radiative Climatology tried to simply parametrise these fluxes as constants. The glaring errors are –
– Speed of tropospheric convective circulation not increased for increasing concentrations of radiative gases.
– Timing of emergent phenomena not advanced in the diurnal cycle for increasing concentrations of radiative gases.
– SW accumulation in the transparent oceans not properly modelled.
“We agree that fluxes and temperatures are very different.”
And this I contend is the heart of the matter.
“Whenever some …”
Given some of my own language on blogs I’d best let this one go 😉
“WE DON’T CARE ABOUT THE ABSOLUTE TEMPERATURE, KONRAD!
We’re attempting to follow the flow of the energy around the system, not the temperature.”
Again, the AGW scare concerns near surface temperatures. 2C is supposed to cause doooooom.
“Now, you are correct that without exact knowledge of the emissivity, we cannot say what the exact temperature of something is from knowing how much it radiates.”
Yes, however I am pointing out via empirical experiment that it is not just emissivity but the translucency/transparency of the material that matters. Remember, in the acrylic block experiment both blocks have the same surface LWIR emissivity.
“It’s true that radiation alone won’t tell us the absolute temperature profiles (although we can get quite close).”
The problem here is that both radiative and non-radiative energy transports must be correctly modelled, and non-radiative transports are far harder. As I show through empirical experiment “quite close” is nowhere near close enough.
——————————————————
“You are aware, I’m sure, of the existence of infrared thermometers. What you do is set the emissivity dial to the known emissivity of the substance in question, and the IR thermometer tells you its temperature.”
I own one. It can be seen here where I am repeating the transparent material experiment under intermittently cycled halogen lights –
http://i61.tinypic.com/2z562y1.jpg
Given the speed of the instrument you can compare the steady emission from block A and the “sawtooth” emission from block B.
“This is used, for example, in stand-off measurements of the small-scale (centimetres) variation in the surface temperature of the ocean. The emissivity of water is known (0.96), so then an IR camera can record the minute fluctuations in SSTs over the field of vision of the camera. Fascinating stuff.”
“So you are 100% correct that “what radiation goes in … gives you no clue as to the temperature”. The temperature cannot be derived from knowing what radiation is going in to an object. […] The thermal radiation given off by an object can give us a very detailed and accurate picture of the temperature.”
“Not only that, but since thermal radiation is used to measure the temperature of the surface of the ocean, it falsifies your objection that converting radiation to temperature won’t work on “moving fluids in a gravity field”. “
The attempts of the climate “scientists” to use this method have led them to the provably false conclusion that the oceans would freeze in the absence of downwelling LWIR.
—————————————
“I say that he [Sir George Simpson] was right. I would add that it is not sufficiently realised by non-meteorologists that it is also impossible to solve the problem of the temperature distribution in the atmosphere without working out the radiation …”
We seem to be in agreement, both radiative and non-radiative transports need to be correctly worked out. I am showing through empirical experiment that climate “scientists” have gotten the non-radiative transports hideously wrong.
“Of the total upwelling radiation from the surface of ~ 400 W/m2, the resulting radiation to space is 240 W/m2. That puts the overall global average strength of the greenhouse effect at (400-240)/400 = 40%. In other words, some 40% of the upwelling longwave from the surface is intercepted by the atmosphere.”
And around 90% of the solar energy absorbed by the land, oceans and atmosphere is emitted back to space via radiative gases in the atmosphere. While there is a radiative greenhouse effect on earth there is not a NET radiative greenhouse effect.
I have shown via empirical experiment that it is not just emissivity that governs the equilibrium temperature of a material exposed to solar radiation, but translucency/transparency as well. On this basis I claim that the gospel of the Church of Radiative Climatology is in error regarding how our oceans heat. I contend that if our oceans could be retained without an atmosphere they would likely reach 80C or beyond. This means that the net effect of the atmosphere over the oceans is ocean cooling. Given that radiative gases are the only effective means of atmospheric cooling, this means that the net effect of radiative gases in our atmosphere is planetary cooling at all concentrations above 0.0 ppm.
Willis, do you feel that if our oceans could be retained without an atmosphere they would freeze?
Regards,
Konrad.
