Usoskin Et Al. Discover A New Class of Sunspots

Guest Post by Willis Eschenbach

There’s a new paper out by Usoskin et al. entitled “Evidence for distinct modes of solar activity”. To their credit, they’ve archived their data; it’s available here.

Figure 1 shows their reconstructed decadal averages of sunspot numbers for the last three thousand years, from their paper:

Figure 1. The results of Usoskin et al.

Their claim is that when the decadal average sunspot numbers are less than 21, this is a distinct “mode” of solar activity … and that when the decadal average sunspot numbers are greater than 67, that is also a separate “mode” of solar activity.

Now, being a suspicious fellow myself, I figured I’d take a look at their numbers … along with the decadal averages of Hoyt and Schatten. That data is available here.

I got my first surprise when I plotted up their results …

Figure 2 shows their results, using their data.

Figure 2. Sunspot numbers from the data provided by Usoskin et al.

The surprising part to me was the claim by Usoskin et al. that in the decade centered on 1445, there were minus three (-3) sunspots on average … and there might have been as few as minus ten sunspots. Like I said, Usoskin et al. seem to have discovered the sunspot equivalent of antimatter, the “anti-sunspot” … however, they must have wanted to hide their light under a bushel, as they’ve conveniently excluded the anti-sunspots from what they show in Figure 1 …

The next surprise involved why they chose the numbers 21 and 67 for the breaks between the claimed solar “modes”. Here’s the basis on which they’ve done it.

Figure 3. The histogram of their reconstructed sunspot numbers. ORIGINAL CAPTION: Fig. 3. A) Probability density function (PDF) of the reconstructed decadal sunspot numbers as derived from the same 10^6 series as in Fig. 2 (gray-filled curve). The blue curve shows the best-fit bi-Gaussian curve (individual Gaussians with mean/σ being 44/23 and 12.5/18 are shown as dashed blue curves). Also shown in red is the PDF of the historically observed decadal group sunspot numbers (Hoyt & Schatten 1998) (using bins of width ΔS = 10).

The caption to their Figure 3 also says:

Vertical dashed lines indicate an approximate separation of the three modes and correspond to ±1σ from the main peak, viz. S = 21 and 67.

Now, any histogram has the “main peak” at the value of the “mode”, which is the most common value of the data. Their Figure 3 shows a mode of 44, and a standard deviation “sigma” of 23. Unfortunately, their data shows nothing of the sort. Their data has a mode of 47, and a standard deviation of 16.8, call it 17. That means that if we go one sigma on either side of the mode, as they have done, we get 30 for the low threshold, more than they did … and we get 64 for the high threshold, not 67 as they claim.

So that was the second surprise. I couldn’t come close to reproducing their calculations. But that wouldn’t have mattered, because I must admit that I truly don’t understand the logic of using a threshold of a one-sigma variation above and below not the mean, not the median, but the mode of the data … that one makes no sense at all.
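
For anyone who wants to check this at home, the calculation is simple enough to sketch. Here is a minimal Python example, not Willis’ actual code: the file name is hypothetical, the bin width of 10 matches the paper’s ΔS = 10, and the mode estimate here is the crudest possible (centre of the fullest bin), so a finer-grained estimate may differ slightly.

```python
import numpy as np

# Hypothetical file holding the archived decadal mean sunspot numbers.
ssn = np.loadtxt("usoskin_decadal_ssn.txt")

# Estimate the mode from a histogram with bin width 10 (the paper's
# Delta-S = 10), taking the centre of the most populated bin.
counts, edges = np.histogram(ssn, bins=np.arange(0, ssn.max() + 10, 10))
mode = edges[np.argmax(counts)] + 5.0

sigma = ssn.std(ddof=1)  # sample standard deviation

# Thresholds at plus/minus one sigma from the mode, per the Fig. 3 caption.
print(f"mode ~ {mode:.0f}, sigma = {sigma:.1f}")
print(f"thresholds: {mode - sigma:.0f} and {mode + sigma:.0f}")
```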

Next, in the right part of Figure 1 they show a squashed-up tiny version of their comparison of their results with the results of Hoyt and Schatten … the Hoyt-Schatten data has its own problems, but let’s at least take a look at the difference between the two. Figure 4 shows the two datasets during the period of overlap, 1615-1945:

Figure 4. Decadal averages of sunspots, according to Hoyt-Schatten, and also according to Usoskin et al.

Don’t know about you, but I find that result pretty pathetic. In a number of decades, the difference between the two approaches 100% … and the results don’t get better as they get more modern as you’d expect. Instead, at the recent end the Hoyt-Schatten data, which at that point is based on good observations, shows about twice the number of sunspots shown by the Usoskin reconstruction. Like I said … not good.

Finally, and most importantly, I suspect that at least some of what we see in Figure 3 above is simply a spurious interference pattern between the length of the sunspot cycles (9 to 13 years) and their averaging period of ten years. Hang on, let me see if my suspicions are true …

OK, back again. I was right, here are the results. What I’ve done is picked a typical 12-year sunspot cycle from the Hoyt-Schatten data. Then I replicated it over and over starting in 1600. So I have perfectly cyclical data, with an average value of 42.

But once we do the decadal averaging? … well, Figure 5 shows that result:

Figure 5. The effect of decadal averaging on 12-year pseudo-sunspot cycles. Upper panel (blue) shows pseudo-sunspot counts, lower panel (red) shows decadal averaging of the upper panel data.

Note the decadal averages of the upper panel data, which are shown in red in the lower panel … bearing in mind that the underlying data are perfectly cyclical, you can see that none of the variations in the decadal averages are real. Instead, the sixty-year swings in the red line are entirely spurious cycles that do not exist in the data, but are generated solely by the fact that the 10-year average is close to the 12-year sunspot cycle … and the Usoskin analysis is based entirely on such decadal averages.
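
For readers who want to reproduce this, here is a minimal Python sketch of the same experiment. A raised cosine stands in for the replicated Hoyt-Schatten cycle (an assumption of convenience, not the data Willis used); the beat arithmetic is what produces the sixty-year swings.

```python
import numpy as np

years = np.arange(1600, 2000)

# Perfectly repeating 12-year pseudo-sunspot cycle with a mean of 42
# (a raised cosine stands in for the replicated Hoyt-Schatten cycle).
pseudo = 42.0 * (1.0 + np.cos(2.0 * np.pi * (years - 1600) / 12.0))

# Non-overlapping decadal (10-year block) averages.
decadal = pseudo.reshape(-1, 10).mean(axis=1)

# Although the input never varies, the block averages swing with a
# period of 1 / (1/10 - 1/12) = 60 years, the beat between the
# cycle length and the averaging window.
print(decadal.round(1))
```

The same arithmetic says that as the real cycle length wanders between 9 and 13 years, the spurious period wanders with it.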

But wait … it gets worse. Sunspot cycles vary in length, so the error caused by the decadal averaging will not be constant (and thus removable) as in the analysis above. Instead, decadal averaging will lead to a wildly varying spurious signal, which will not be regular as in Figure 5 … but which will be just as bogus.

In particular, using a histogram on such decadally averaged data will lead to very incorrect conclusions. For example, in the pseudo-sunspot data above, here is the histogram of the decadal averages shown in red.

Figure 6. Histogram of the decadal average data shown in Figure 5 above.

Hmmm … Figure 6 shows a peak on the right, with a smaller secondary peak on the left … does this remind you of Figure 3? Shall we now declare, as Usoskin et al. did, and with equal justification, that the pseudo-sunspot data has two “modes”?
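
Here is a minimal sketch of how a histogram like Figure 6 arises from the pseudo-data above; any clustering in the bars is pure artifact, since the input is a single unvarying cycle.

```python
import numpy as np

years = np.arange(1600, 2000)
pseudo = 42.0 * (1.0 + np.cos(2.0 * np.pi * (years - 1600) / 12.0))
decadal = pseudo.reshape(-1, 10).mean(axis=1)

# Histogram of the decadal averages, bin width 10 as in the paper's Fig. 3.
counts, edges = np.histogram(decadal, bins=np.arange(0, 90, 10))
for lo, n in zip(edges[:-1], counts):
    print(f"{lo:3.0f}-{lo + 10:3.0f}: {'#' * int(n)}")
```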

CONCLUSIONS:

In no particular order …

1. The Usoskin et al. reconstruction gives us a new class of sunspots, the famous “anti-spots”. Like the square root of minus one, these are hard to observe in the wild … but Usoskin et al. have managed to do it.

2. Despite their claims, the correlation of their proxy-based results with observations is not very good, and is particularly bad in recent times. Their proxies often give results that are in error by ~ 100%, but not always in the same direction. Sometimes they are twice the observations … sometimes they are half the observations. Not impressive at all.

3. They have set their thresholds based on a bizarre combination of the mode and the standard deviation, a procedure I’ve never seen used.

4. They provided no justification for these thresholds other than their histogram, and in fact, you could do the same with any dataset and declare (with as little justification) that it has “modes”.

5. As I’ve shown above, the shape of the histogram (which is the basis of all of their claims) is highly influenced by the interaction between the length(s) of the sunspot cycle and the decadal averaging.

As a result of all of those problems, I’m sorry to say that their claims about the sun having “modes” simply do not stand up to close examination. They may be correct, anything’s possible … but their analysis doesn’t even come near to establishing that claim of distinct solar “modes”.

Regards to all,

w.

THE USUAL: If you disagree with me or someone else, please quote the exact words that you disagree with. That way, we can all understand just what it is that you object to.

Latitude
February 22, 2014 3:42 pm

Figure 1 shows their reconstructed decadal averages of sunspot numbers for the last three thousand years…..
serious question?……how is that possible?

February 22, 2014 4:38 pm

Latitude said:
February 22, 2014 at 3:42 pm
Figure 1 shows their reconstructed decadal averages of sunspot numbers for the last three thousand years…..
serious question?……how is that possible?
—————-
Tree rings.
/joke

Jeff in Calgary
February 22, 2014 4:47 pm

Would be interesting to see what would happen if they used averaging of each individual cycle. (does that make sense?)

david moon
February 22, 2014 4:50 pm

Is a “decadal average” done like this:
average of 1900 through 1909
average of 1901 through 1910
average of 1902 through 1911, etc
or
average of 1900 through 1909
average of 1910 through 1919, etc
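
Both conventions are easy to write down in code; a minimal numpy sketch with stand-in data follows. For decadal 14C-based reconstructions the non-overlapping block form is the natural reading, though the paper itself would need to be checked.

```python
import numpy as np

annual = np.random.default_rng(0).uniform(0, 100, 20)  # stand-in annual data

# Overlapping ("running") decadal mean: 1900-1909, 1901-1910, ...
running = np.convolve(annual, np.ones(10) / 10.0, mode="valid")

# Non-overlapping ("block") decadal mean: 1900-1909, 1910-1919, ...
block = annual.reshape(-1, 10).mean(axis=1)

print(len(running), "running values;", len(block), "block values")
```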

February 22, 2014 4:55 pm

Is there any sunspot/solar activity data vs warm/cold periods over extended periods that can be viewed, so the Sun’s activity during periods such as the Roman, Medieval and 20thC Warm periods, and cold periods such as during the Maunder Minimum, can be seen, such that a climate cycle can be postulated? Intrigued as to whether this correlation has any traction or any modelling?

jorgekafkazar
February 22, 2014 4:58 pm

“In a number of decades, the difference between the two approaches 100% …”
This is astrophysics, not astronomy. Astrophysicists are happy just to get the decimal in the right place. A sporadic deviation of only 100% is the same as nailing it, in astrophysics.

Clay Marley
February 22, 2014 5:03 pm

This reminds me of the Parana River study where they tried to use an 11-year running average to smooth out the sunspot cycles, which resulted in similar problems in distorting the signal. Then they tried to correlate that with stream flow.
Smoothing over cyclic behavior that varies in period requires something like a LOESS model, over say 20 years or so in this case. IMHO. I’d like to hear Willis’ thoughts on how to handle this kind of data though.
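
A minimal sketch of the kind of LOESS smooth described above, assuming statsmodels is available; the data here is a stand-in and the ~20-year span enters through the frac parameter.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Stand-in annual sunspot-like series with an 11-year cycle.
years = np.arange(1700, 2000, dtype=float)
counts = 50.0 + 40.0 * np.sin(2.0 * np.pi * years / 11.0)

# frac is the fraction of the data used in each local fit;
# a ~20-year window out of 300 years of data is frac = 20/300.
smooth = lowess(counts, years, frac=20.0 / len(years), return_sorted=False)
```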

Lew Skannen
February 22, 2014 5:04 pm

Scientists archiving data and making it publicly available without a legal battle?
I can’t see that trend catching on.

Editor
February 22, 2014 5:06 pm

If they’re going to do sunspot averages, would it not make more sense to use average-over-a-cycle, rather than average-over-a-decade? I could see the possibility of sunspots being different over cycles, but that remains to be proven. However, average-over-a-decade looks entirely arbitrary.

February 22, 2014 5:08 pm

Looks to me like they got the same screeching racket (noise) that you get when you use an A-to-D converter to record digital data and then convert it back into an analog signal using a sample rate very close to the sampled frequency.

ShrNfr
February 22, 2014 5:08 pm

The sun being a complex non-linear system, I would entertain suggestions of strange attractors seriously. The paper might not have found any, but I would not dismiss the possibility out of hand either.

February 22, 2014 5:32 pm

___ i ___
-1_ 0 _+1
___-i ___
The square root of minus 1 is a poor way to illustrate the spurious nature of this data manipulation method, we see perpendicular lines all over the place, and squaring a negative number is simply a type of rotation.

Tom
February 22, 2014 5:49 pm

February 22, 2014 at 4:58 pm
“This is astrophysics, not astronomy. Astrophysicists are happy just to get the decimal in the right place. A sporadic deviation of only 100% is the same as nailing it, in astrophysics”.
So, by your logic, all space launches could have been 10 times heavier, or lighter. F=ma=10ma. You then imply the equivalence of a deviation of an order of magnitude with doubling, or halving.
You could be a taxpayer-funded climastrologist with those error bars, deserving public ridicule. It seems to me that you have crossed that line by a factor of between 2 and 10.

Editor
February 22, 2014 6:01 pm

Willis: It’s my understanding that the Hoyt & Schatten sunspot numbers were created to explain the warming up through the 1950s and that one of the authors no longer agrees with that reconstruction. I guess we’ll have to wait for Leif to confirm my memory of that reconstruction.
Regards

February 22, 2014 6:06 pm

Like the square root of minus one, these are hard to observe in the wild
Useful for ‘book-keeping’ and making the math work out right; a method of indicating phase quadrature in equations. I doubt you will ever really observe “i” (or “j” as the sparkies like to denote it) per se, in nature (outside of using a network or phase-gain analyzer) … again, quadrature “i” operations work well to describe stored, or rather I should say transitive or ‘reactive’, energy (as when stored in the mag field of an inductor, or in/between the plates of a capacitor, or in the near field of an antenna (essentially an L/C ckt)) in non-DC, non-static (AC-excited or dynamic WRT time) circuits, yes …

February 22, 2014 6:16 pm

Bob Tisdale says:
February 22, 2014 at 6:01 pm
Willis: It’s my understanding that the Hoyt & Schatten sunspot numbers were created to explain the warming up through the 1950s and that one of the authors no longer agrees with that reconstruction. I guess we’ll have to wait for Leif to confirm my memory of that reconstruction.
Your memory is correct. Schatten no longer believes that the Group Sunspot Number is correct [more correctly: that my analysis and re-evaluation is correct]. All values before ~1885 are about 50% too low. When you graft such a series onto the end of the 14C data you create an automatic ‘sunspot hockey stick’. More details about the SSN here: http://www.leif.org/research/CEAB-Cliver-et-al-2013.pdf
The whole Usoskin article is not science, but advocacy, as Usoskin is peddling the ‘solar activity since 1960 has been the highest in several thousand years’ claim [he used to say 12,000 years]. This, of course, explains Global Warming. Some here may agree with his analysis just because they have a similar bias, regardless of what the science says. Perhaps [hopefully] Ilya will show up here to defend his ‘work’.

timetochooseagain
February 22, 2014 6:47 pm

@lsvalgaard-Could we calibrate the data Usoskin used against more accurate sunspot data…or is the use of it as a proxy tenuous?

Jean Parisot
February 22, 2014 6:48 pm

Why are we averaging relatively sparse data sets that can be visualized with today’s technology?

bushbunny
February 22, 2014 7:01 pm

Jean I am no scientist but understand something of archaeological and palaeoanthropological researches. I can not rationalize where they got the data from for 3,000 years ago. It appears to be that this could be a case of corrupting the data to prove the hypothesis, I could be wrong. It goes on all the time in universities and academia. They have to publish to prove they are doing something with the funds and grants they get, bugger the factual credibility contained.

February 22, 2014 7:04 pm

Thank you
Paul

bushbunny
February 22, 2014 7:05 pm

As far as statistics are concerned, what is the point of the research, to prove or not prove? If I were to do some research into, say, ‘domestic violence’ and only took, say, 100 case histories to prove my point, in one town only, could this truly represent the conditions in other towns? You can fudge statistics. There are so many variables involved in climate change and weather patterns; what suits one region might not be relevant to another.

Lance Wallace
February 22, 2014 7:32 pm

Tom says:
February 22, 2014 at 5:49 pm
In defense of JorgeKafkazar’s statement, I too was trained in astrophysics, and our general mantra was that “1=10”–that is, if we could estimate the energy of a blue giant as exp(55+-1) ergs we were happy. Tom, it’s engineers who operate the space program, and we should all be happy that astrophysicists don’t.
By the way, I no longer remember the energy of blue giants (it was 40 years ago!) so don’t hold me to the “55”–it’s the principle of the thing.

JBirks
February 22, 2014 7:34 pm

“Some here may agree with his analysis just because they have a similar bias, regardless of what the science says.”
———————————
That’s a provocative statement. Possibly true for some, I don’t know. From a purely strategic standpoint, one could argue that Usoskin’s data tricks are less egregious than the Team’s (at least he archives it), and a flawed rival to CAGW is better than none at all. There comes a time when the science becomes so politicised that “the science” of it becomes secondary to “winning.” The more I look, the more I wonder how many people really care what the science is, or would even know if they saw it.
I am not arguing for an abdication of scientific responsibility. By all means, let’s get the science right. But is that even possible in an environment where the scientists themselves are going rogue to promote their own causes?

Matthew R Marler
February 22, 2014 7:41 pm

Thank you again to Willis Eschenbach, Bob Tisdale and Leif Svalgaard.

February 22, 2014 7:41 pm

timetochooseagain says:
February 22, 2014 at 6:47 pm
Could we calibrate the data Usoskin used against more accurate sunspot data
Yes, we could [and should], but then we don’t find the same hockey stick with late 20th century data going through the roof [like temps].
Jean Parisot says:
February 22, 2014 at 6:48 pm
Why are we averaging relatively sparse data sets that can be visualized with today’s technology?
The modern sunspot data is produced [deliberately] with the technology of centuries ago.
bushbunny says:
February 22, 2014 at 7:01 pm
I can not rationalize where they got the data from for 3,000 years ago.
Sunspots are magnetic fields that when extended from the sun keep some cosmic rays out of the solar system. Cosmic rays smash into Nitrogen and Oxygen atoms in the atmosphere converting them to Carbon 14 which is radioactive. Carbon is plant food, so the radioactive Carbon ends up in tree rings which we can count and so measure how active cosmic rays [and thus the Sun] were thousands of years ago.

February 22, 2014 7:45 pm

JBirks says:
February 22, 2014 at 7:34 pm
But is that even possible in an environment where the scientists themselves are going rogue to promote their own causes?
From what I know about Ilya, Usoskin believes in his stuff, so has not gone rogue.

Duncan
February 22, 2014 7:54 pm

Maybe I’m just tired, but usually I understand basic math better than this.
[quote Willis]
The caption to their Figure 3 also says:
Vertical dashed lines indicate an approximate separation of the three modes and correspond to ±1σ from the main peak, viz. S = 21 and 67.
Now, any histogram has the “main peak” at the value of the “mode”, which is the most common value of the data. Their Figure 3 shows the a mode of 44, and a standard deviation “sigma” of 23. Unfortunately, their data shows nothing of the sort. Their data has a mode of 47, and a standard deviation of 16.8, call it 17. That means that if we go one sigma on either side of the mode, as they have done, we get 21 for the low threshold, just as they did … but we get 74 for the high threshold, not 67 as they claim.
[/quote]
47 – 17 is 21?
47 + 17 is 74?
[You’re right, I was the tired one. Mea culpa. The point remains. I’ve corrected the post. Many thanks, w.]

Matthew R Marler
February 22, 2014 7:59 pm

Also shown in red is the PDF of the historically observed decadal group sunspot numbers (Hoyt & Schatten 1998) (using bins of width ΔS = 10).
Binning by decadal groups of sunspot numbers, then producing the pdf of that, produces less precise estimates of the pdf of the sunspot numbers than what the authors did, which was to fit mixtures of Gaussians to the unbinned annual counts. For reference, see the soon to be published book “Analysis of Neuronal Data” by R. Kass, U. Eden and E. Brown (Springer, 2014), and other texts on non-parametric density estimation.
Thus this is inaccurate: Their Figure 3 shows the a mode of 44, and a standard deviation “sigma” of 23.
That mean and sd refer to the mean and sd of the fitted Gaussian, so 44 +/- 23 is correct for mean +/- sd. The individual Gaussians are scaled so that the sum of their areas is 1, rather than the area of each.
I grant you that’s little evidence for a break around 21 defining 2 separate “modes of solar activity”, and there is no evidence for a break around 67 defining another “mode of activity”. Their paper does not describe this “mixture distribution” modeling (as it is called) in any detail. The full figure 3 has alternatives, names of fitting algorithms, and references to sources.
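
Since the paper doesn’t spell out its fitting procedure, here is a generic sketch of two-component Gaussian mixture fitting with scikit-learn (an assumption of convenience, not the authors’ code); the file name is hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical file of decadal sunspot numbers, one value per line.
ssn = np.loadtxt("usoskin_decadal_ssn.txt").reshape(-1, 1)

# Fit a two-Gaussian mixture by EM, with several restarts for stability.
gm = GaussianMixture(n_components=2, n_init=10, random_state=0).fit(ssn)
for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight {w:.2f}: mean {mu:.1f}, sigma {np.sqrt(var):.1f}")
```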

Matthew R Marler
February 22, 2014 8:02 pm

oops: That mean and sd refer to the mean and sd of the fitted Gaussian, so 44 +/- 23 is correct for mean +/- sd.
I meant in that sentence to refer to the fitted Gaussian with the largest error.

Matthew R Marler
February 22, 2014 8:03 pm

oh, man. I meant “the fitted Gaussian with the largest area”. grumble, grumble.

Owen in GA
February 22, 2014 8:10 pm

47+17=64
7+7=14 carry the one
4+1+1=6
I’ve been known to carry an extra one from time to time myself.

bobl
February 22, 2014 8:12 pm

A classic Nyquist rate problem: they are generating an intermodulation product within the frequency band of the signal by sampling at a rate below the Nyquist rate, that is, less than twice the frequency of the data. Their result is therefore total rubbish; the 11-13 year sunspot cycle has been aliased into a spurious signal in the averaged data. Nyquist says it must be thus.
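
The folding arithmetic behind this can be checked directly: decimating to one value per decade is sampling at fs = 0.1 per year, and a cycle of period P years folds to the frequency |1/P - k*fs| for the nearest integer k. A minimal sketch (note that the 12-year case reproduces the 60-year swings of Figure 5):

```python
# Alias produced when a cycle of period P years is reduced to one
# value per decade, i.e. sampled at fs = 0.1 per year.
fs = 0.1
for P in (9, 10, 11, 12, 13):
    f = 1.0 / P
    alias = abs(f - round(f / fs) * fs)  # fold to the nearest multiple of fs
    if alias == 0:
        print(f"{P:2d}-year cycle -> no spurious cycle (exact fit)")
    else:
        print(f"{P:2d}-year cycle -> spurious {1 / alias:.0f}-year cycle")
```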

Tad
February 22, 2014 8:55 pm

Numerology. Now a recognized branch of Science.

DonV
February 22, 2014 9:04 pm

Hear! Hear! bobl. Right on!

February 22, 2014 9:44 pm

FWIW,
last 1000 years sure has a lot of similarity to this :
http://en.wikipedia.org/wiki/File:2000_Year_Temperature_Comparison.png

February 22, 2014 9:47 pm

Jeff L says:
February 22, 2014 at 9:44 pm
FWIW, last 1000 years sure has a lot of similarity to this :
http://en.wikipedia.org/wiki/File:2000_Year_Temperature_Comparison.png

I think that is the whole idea…

jorgekafkazar
February 22, 2014 10:18 pm

Tom says:
February 22, 2014 at 4:58 pm
“This is astrophysics, not astronomy. Astrophysicists are happy just to get the decimal in the right place. A sporadic deviation of only 100% is the same as nailing it, in astrophysics”.
So, by your logic, all space launches could have been 10 times heavier, or lighter. F=ma=10ma. You then imply the equivalence of a deviation of an order of magnitude with doubling, or halving. You could be a taxpayer-funded climastrologist with those error bars, deserving public ridicule. It seems to me that you have crossed that line by a factor of between 2 and 10.
February 22, 2014 at 5:49 pm
That’s not my logic, Tom, it’s entirely yours, a straw man based on things I didn’t say, and displaying your stellar ignorance at the same time. If you don’t know the difference between astrophysics and astronomy, I’d strongly recommend you not open your mouth on either subject.

David L
February 22, 2014 10:20 pm

Serious question: how do you measure sunspot activity from thousands of years ago? What proxy is used?
Second: anytime you are dealing with bounded data, in this case sunspots can’t physically be less than zero, Gaussian fits (normal distributions) are not statistically or physically meaningful, nor are averages and standard deviations. Technically one would apply non-normal distributions such as a log-normal fit, and for statistical central tendency report the mode and range of the data, or perform a log transformation on the sunspot numbers and then fit with Gaussians and report averages and standard deviations of the log-transformed data set.
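
A minimal sketch of the log-transform approach described above (numpy assumed; the file name is hypothetical, and zero-count decades have to be excluded before taking logs):

```python
import numpy as np

# Hypothetical file of decadal sunspot numbers, one value per line.
ssn = np.loadtxt("usoskin_decadal_ssn.txt")
ssn = ssn[ssn > 0]  # a log transform requires strictly positive values

log_ssn = np.log(ssn)
mu, sigma = log_ssn.mean(), log_ssn.std(ddof=1)

# Back-transform to report central tendency and spread in original units.
print(f"geometric mean: {np.exp(mu):.1f}")
print(f"1-sigma range : {np.exp(mu - sigma):.1f} to {np.exp(mu + sigma):.1f}")
```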

February 22, 2014 10:22 pm

David L says:
February 22, 2014 at 10:20 pm
Serious question: how do you measure sunspot activity from thousands of years ago?
See my explanation upthread at February 22, 2014 at 7:41 pm

February 22, 2014 11:05 pm

Willis I cannot see how you missed this
67 -1=66
21 + 1 = 22
66/22 = 3
It’s related to orbits and phi

crosspatch
February 22, 2014 11:08 pm

Sunspots are magnetic fields that when extended from the sun keep some cosmic rays out of the solar system. Cosmic rays smash into Nitrogen and Oxygen atoms in the atmosphere converting them to Carbon 14 which is radioactive. Carbon is plant food, so the radioactive Carbon ends up in tree rings which we can count and so measure how active cosmic rays [and thus the Sun] were thousands of years ago.

Wouldn’t that assume cosmic rays entering the solar system at a constant rate? I mean, sure, if cosmic rays entered the solar system at a constant rate, then the proxies for cosmic ray collisions with the atmosphere would also be proxies for solar magnetic activity. But do we really have evidence that the cosmic rays are constant on century timescales? If the solar system were to pass through an area of greater cosmic ray activity then the amount of 14C could vary with no variation in solar magnetic field.
E.g. Could the solar system be “hit” by a burst of cosmic rays from a nova long ago that we can no longer see and wouldn’t that affect these 14C and 10Be production rates with no change in solar activity?

AndyG55
February 22, 2014 11:24 pm

Just a point.
I feel it is very much a politeness to email the principal author of any paper that is being discussed so that they may join in and make their point of view known.
I think it is very disingenuous if this is not done in every instance.
People have a right of reply, and it is only proper that they be informed of the discussion on what is one of the main scientific discussion blogs on the web.
I strongly suggest that Anthony make this a point of policy.

charles nelson
February 22, 2014 11:53 pm

Three thousand years of solar activity reconstructed from ‘Tree Rings’.

February 23, 2014 12:18 am

I have lost complete faith in any agency with an agenda to promote CAGW.
Having said that, I am just waiting for the Sun to go quiet long enough to show its effects on climate.
Maybe then we can get science back on track.
Time for a reboot.

Patrick
February 23, 2014 12:18 am

Has someone said no-one knows yet? Because I am fairly sure no-one knows what the state of the sun was back 3000 years ago.

Karl W. Braun
February 23, 2014 12:34 am

We might soon have the opportunity to examine these sunspots right here on Earth:
http://waterfordwhispersnews.com/2014/01/21/north-korea-lands-first-ever-man-on-the-sun-confirms-central-news-agency/

Carl
February 23, 2014 1:05 am

If I ever write a paper to do with climate, I’ll be very careful to make sure I’ve got it right before I let you see it.

February 23, 2014 1:27 am

Don’t the CO2ers own Wiki, who install a curfew through deletion of non-CO2 evidence and by making sure Wiki has an ‘it’s settled science’ narrative? Which is why they like to link to it?

Réaumur
February 23, 2014 1:30 am

I doubt you will ever really observe “i” (or “j” as the sparkies like to denote it) per se, in nature
In quantum mechanics i is not just a convenience for easier calculation, but a necessary part of the description of nature. I agree that it can’t be observed directly, but the “imaginary” seems to be built in to “reality”, whatever that is!

David L
February 23, 2014 1:52 am

lsvalgaard on February 22, 2014 at 10:22 pm
David L says:
February 22, 2014 at 10:20 pm
Serious question: how do you measure sunspot activity from thousands of years ago?
See my explanation upthread at February 22, 2014 at 7:41 pm
———————————
That can’t be a very precise measurement of historical sunspot activity, can it, especially the further one goes back?

February 23, 2014 2:10 am

Reliability of the sunspot reconstructions based on 14C and 10Be cosmogenic nuclide production prior to 1600 cannot be taken for granted. Solar modulation potential reconstructed from the common cosmic ray intensity record is also a function of the precipitation, which may account for possibly up to 30% of variability.
Another major obstacle is the variability in the intensity of the geomagnetic dipole. I have looked at more than half a dozen different GM dipoles; prior to the 1840s (Gauss magnetometer) the variability between models is a further 5-10%.
As Steinhilber correctly states, “variation on the millennial time-scale of solar modulation potential depends on the geomagnetic field. If another geomagnetic field reconstruction is used it would show another trend on millennial time scales.”
And finally:
a) Solar modulation of the cosmic ray particles is calculated by removing the effect of the geomagnetic field based on paleomagnetic data.
b) Geomagnetic field intensity is estimated from paleomagnetic data using cosmogenic radionuclide production records to eliminate solar influence.

Greg Goodman
February 23, 2014 3:10 am

“Note the decadal averages of the upper panel data, which are shown in red in the lower panel … bearing in mind that the underlying data are perfectly cyclical, you can see that none of the variations in the decadal averages are real. Instead, the sixty-year swings in the red line are entirely spurious cycles that do not exist in the data, but are generated solely by the fact that the 10-year average is close to the 12-year sunspot cycle … and the Usoskin analysis is based entirely on such decadal averages.”
Good catch Willis.
This inappropriate use of means (and running means), which amounts to sub-sampling without an anti-alias step, is something I’ve been moaning about for years.
Once climate science gets past high school, maybe some of these basic faults will become less common. For the moment they are par for the course in climatology.
It would be interesting to look at this new data with some proper processing. Maybe some of the oddities like negative numbers will sort themselves out.
I can instantly see bi-modal behaviour in the PDF. A closer look shows just a hint of a third mode higher up. I’d say they’ve got the bounds about right. Again this may become clearer with better d.p. (or it may turn out to be an artefact, though I doubt it).
Who knows, it may even start to be a better match to Hoyt and Schatten without the spurious glitches caused by the 10y means.
It’s a fine day here,so I’m off to enjoy it , may have a look at filtering later.

Greg Goodman
February 23, 2014 3:24 am

BTW doing a PDF with 10y “bins” is like hitting it with a 10y mean as well. Since the recent data is fairly fine grained, a less clumsy grouping may be better.

February 23, 2014 4:05 am

Thank you Willis and all.

björn from sweden
February 23, 2014 4:48 am

See, this is what happens when you make data available to anyone. Some bitter truth-clinger will come along and prove you wrong.
I think Phil Jones said it best, how freedom of information threatens the advancement of consensus science:
“…We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try to find something wrong with it…”

David Norman
February 23, 2014 5:32 am

Your statement “Their data has a mode of 47, and a standard deviation of 16.8, call it 17. That means that if we go one sigma on either side of the mode, as they have done, we get 20 for the low threshold, a bit less than they did … and we get 64 for the high threshold, not 67 as they claim.” Should that not be 30 for the low threshold?
[HAIYEEE, NOT AGAIN … you are correct, sire, I’ve once again fixed my arithmetical errors, my thanks. Grrr … w.]

Rathnakumar
February 23, 2014 5:40 am

Well done again, sir!

Carla
February 23, 2014 7:57 am

As a result of all of those problems, I’m sorry to say that their claims about the sun having “modes” simply do not stand up to close examination. They may be correct, anything’s possible … but their analysis doesn’t even come near to establishing that claim of distinct solar “modes”.
——————————-
Dr. S. uses a “ceiling and floor” to describe something similar to this paper. Perhaps he will tell us how he arrived at his ceiling and floor for describing different solar cycle activity levels?

February 23, 2014 8:06 am

Carla says:
February 23, 2014 at 7:57 am
Dr. S. uses a “ceiling and floor” to describe something similar to this paper. Perhaps he will tell us how he arrived at his ceiling and floor for describing different solar cycle activity levels?
By inspection: slide 33 of
http://www.leif.org/research/Solar%20Wind%20During%20the%20Maunder%20Minimum.pdf
using Yogi Berra’s approach: http://www.brainyquote.com/quotes/quotes/y/yogiberra125285.html

Jenn Oates
February 23, 2014 8:10 am

Great stuff, Willis. Many thanks from this high school science teacher.

duked
February 23, 2014 8:14 am

Jean Parisot says:
February 22, 2014 at 6:48 pm
Why are we averaging relatively sparse data sets that can be visualized with today’s technology?
Lsvalgaard says: The modern sunspot data is produced [deliberately] with the technology of centuries ago.
bushbunny says:
February 22, 2014 at 7:01 pm
I can not rationalize where they got the data from for 3,000 years ago.
Lsvalgaard says: Sunspots are magnetic fields that when extended from the sun keep some cosmic rays out of the solar system. Cosmic rays smash into Nitrogen and Oxygen atoms in the atmosphere converting them to Carbon 14 which is radioactive. Carbon is plant food, so the radioactive Carbon ends up in tree rings which we can count and so measure how active cosmic rays [and thus the Sun] were thousands of years ago.
Lsvalgaard – 2 questions – on point 1 above – do the results from this methodology come close to current measurements using today’s technology? How close?
On the second point, you partially answered above, but are there adequate controls for other inputs into the tree ring data (or can this effect be isolated)?

Carla
February 23, 2014 8:14 am

lsvalgaard says:
February 22, 2014 at 7:41 pm
bushbunny says:
February 22, 2014 at 7:01 pm
I can not rationalize where they got the data from for 3,000 years ago.
Sunspots are magnetic fields that when extended from the sun keep some cosmic rays out of the solar system. Cosmic rays smash into Nitrogen and Oxygen atoms in the atmosphere converting them to Carbon 14 which is radioactive. Carbon is plant food, so the radioactive Carbon ends up in tree rings which we can count and so measure how active cosmic rays [and thus the Sun] were thousands of years
———————————–
Yes, and those cosmic rays show a very intimate relationship with solar activity in the modern era as well.
Strange attractors indeed…
Thanks for the encapsulated explanation Dr. S.

February 23, 2014 8:23 am

duked says:
February 23, 2014 at 8:14 am
Lsvalgaard – 2 questions – on point 1 above – do the results from this methodology come close to current measurements using today’s technology? How close?
Today’s technology is the same as in centuries past [on purpose] so almost by definition the results should match.
On the second point, you partially answered above, but are there adequate controls for other inputs into the tree ring data (or can this effect be isolated?)
The cosmic rays also produce radioactive Beryllium [10Be] which can be found in ice cores. The 10Be and 14C records are in good agreement. The problem with Usoskin’s paper is not the 14C record [which stops in 1950 due to contamination by atom bombs, in addition to the long residence time of carbon dioxide in the atmosphere] but the red curve showing the flawed group sunspot number grafted onto the 14C cosmic ray record.

Jim G
February 23, 2014 8:44 am

lsvalgaard says:
But remember, Leif, that according to Yogi, “I never said most of the things I said.” Not a bad philosophy to follow considering some of the pronouncements on both sides of the global warming, climate change, or whatever it is presently called, debate.

February 23, 2014 8:54 am

Thanks, Willis. Good review of a poor paper.
You have a mind for this.

Carla
February 23, 2014 8:59 am

vukcevic says:
February 23, 2014 at 2:10 am
Reliability of the sunspot reconstructions based on 14C and 10Be cosmogenic nuclide production prior to 1600 cannot be taken for granted….
———————————————
Yes, this is true, the dependency on Earth’s magnetic field strength at a given time period.
But………………..
How long have we been measuring cosmic ray “intensity levels?”
When did we start measuring GCR intensities above MeV levels to TeV and PeV? And greater?
The historical data does not reflect the various intensity levels.
Galactic cosmic rays at higher electron volt intensities are not inhibited by the Earth’s magnetic field strength. This also mucks up the cosmic ray records.
Galactic cosmic rays in the tera electron volt range have gyro radii in the 100s of AU range … eeek

Carla
February 23, 2014 9:03 am

Gee, wouldn’t it take lots of TeV GCR to create some gyro-vortex-like structures along their pathways..

February 23, 2014 9:06 am

Carla says:
February 23, 2014 at 8:59 am
How long have we been measuring cosmic ray “intensity levels?”
Accurately since 1953.
With less accuracy since 1930s
When did we start measuring GCR intensities above MeV levels to TeV and PeV? And greater?
There are so few of those high-energy ones that they don’t matter. Furthermore they do not tell us about solar modulation which is strongest at the lowest energies and disappears above 15 GeV.
The historical data does not reflect the various intensity levels.
The high-level rays don’t matter and won’t show solar modulation anyway.
Perspective, Carla. Perspective. The rattle of loose change in a billionaire’s pockets does not reflect her overall wealth.

February 23, 2014 9:10 am

As far as Leif et al.’s reconstruction of the sunspot data goes, I have not seen any good logic, nor can I think of any, that indicates he is incorrect. I think he is right. But this does not mean that the sun is not the main driver for periods like the Little Ice age or that it cannot have a small impact on temperatures in other times.

The Milankovitch cycles drive climate change over longer several thousand year periods of time. Shorter term variations in my view are caused by a combination of solar changes and/or random effects of various other cycles like the PDO/AMO and so on. During the Little Ice Age there was a larger than normal (historically) change in the sun, and it had a correspondingly larger impact on climate. The earth grew cooler, and I believe much of that could have been caused by the sun. However, without going through a similar period when we have better ways of measuring the solar output, it is difficult to prove this hypothesis. Exactly how the sun caused the cooling is something that has not been proved. Perhaps it is changes in cosmic rays or changes in TSI during that period.

The current warm period we are experiencing is not all that warm. The long term temperature data suggests that the world is not as warm now as it was during other warm periods, or even as warm as the average temperature over a several thousand year period during the Holocene climate optimum. So there is no need for any grand maximum of sunspots to explain the gradual warming from the Little Ice Age. All that is needed is average solar conditions for long enough to allow the earth to gradually warm back to the “average conditions” of the last few thousand years, keeping in mind that there are significant temperature variations caused by changing climate cycles that cause shorter term cool and warm fluctuations lasting a few decades. Climate alarmism is of course total rubbish.

February 23, 2014 10:21 am

BobG says:
February 23, 2014 at 9:10 am
The Milankovitch cycles drive climate change over longer several thousand year periods of time.
But those cycles are MUCH larger than solar activity cycles.

crosspatch
February 23, 2014 10:26 am

BobG
But this does not mean that the sun is not the main driver for periods like the Little Ice age or that it cannot have a small impact on temperatures in other times.

One thing that is interesting is that weather patterns can set up in a rather persistent manner for a while. For example, when we have a more “zonal” flow where the jets are going more or less west to east in the Northern Hemisphere, we get a rather “average” sort of year. Storms come in from the Pacific, track across the US, exit over the Atlantic. When we get into a situation where we have a more meridional flow, a flow that looks more like a sine wave, we can end up with more extreme conditions. And this is why taking continental averages can be somewhat misleading. If we have a meridional flow the continental average can be nominal yet we can see in that average extremely cold and extremely mild regional conditions.
The other thing is that while we could have a meridional flow, the phase of it matters, too. Shift that sine-ish pattern a couple of hundred miles to the east or west and you can change things dramatically. The drought in California has been caused by such a flow where the moisture from the Pacific down around Hawaii (“Pineapple Express”) was pushed at times due north into Alaska giving them very mild, very wet conditions while California has stayed dry due to those “atmospheric rivers” being pushed northward by a rather persistent ridge of high pressure. The moisture is still there, it just isn’t going where it normally goes. Push that phase a bit farther to the east and California would have been positively inundated with rain this winter. If the ridge of high pressure had set up over the Rockies instead of just offshore of California, it would have made a huge difference.
By the same token, the “negative” portion of that sine-ish pattern has brought cold air down across the central plains and it has made it extremely cold across the great lakes region. I don’t have access to enough historical data to check this but I would wonder if more meridional flow is more common when the ENSO values are neutral. We have had neutral-ish conditions for a rather long time and this is two winters in a row where we have had roughly the same sort of conditions off the west coast with a ridge of high pressure which has diverted the moisture to the north.

Carla
February 23, 2014 11:14 am

lsvalgaard says:
February 23, 2014 at 9:06 am
Carla says:
February 23, 2014 at 8:59 am
How long have we been measuring cosmic ray “intensity levels?”
Accurately since 1953.
With less accuracy since 1930s
When did we start measuring GCR intensities above MeV levels to TeV and PeV? And greater?
There are so few of those high-energy ones that they don’t matter. Furthermore they do not tell us about solar modulation which is strongest at the lowest energies and disappears above 15 GeV.
The historical data does not reflect the various intensity levels.
The high-level rays don’t matter and won’t show solar modulation anyway.
—————————————-
Ok, 15 GeV..
So, anything above 15 GeV is not modulated by Sun or Earth? Just free range of what gyro radius? So the ice core or tree ring has no idea whether or not it was a 1 GeV or 5 GeV or 1 TeV?
We have about 5 operations like Milagro and ICE CUBE since the 2000s that have been able to measure GCR at the 15 GeV level and above?
Thank you Dr. S.

February 23, 2014 11:26 am

Carla says:
February 23, 2014 at 11:14 am
So the ice core or tree ring has no idea whether or not it was a 1 GeV or 5 GeV or 1 TeV?
There are so few TeV cosmic rays that almost none would be registered, so if you see a cosmic ray there is almost no chance it is a TeV one.
We have about 5 operations like Milagro and ICE CUBE since the 2000s that have been able to measure GCR at the 15 GeV level and above?
sure, and they are HUGE because the very high energy cosmic rays are so rare, which is also why they have no measurable effect on anything even vaguely related to solar activity or climate. So they are completely irrelevant for this [although interesting in themselves for the question of the origin of cosmic rays].

Matthew R Marler
February 23, 2014 11:30 am

Here is another introduction to estimating pdfs via mixtures, with specific reference to R:
http://exploringdatablog.blogspot.com/2011/08/fitting-mixture-distributions-with-r.html

February 23, 2014 12:29 pm

For the record here is a comparison of the Hoyt and Schatten Group Sunspot Number [blue] and Our Preliminary Revision [red] of the Group Sunspot Number. We are still working on the pre-1749 data. Little circles and triangles show decadal averages:
http://www.leif.org/research/HS-and-Revised-GSN.png
Here is the method we are using: http://www.leif.org/research/Reconciling-Group-and-Wolf-Sunspot-Numbers.pdf

Anto
February 23, 2014 12:30 pm

Not very much “peer review” done on this one, was there?
….Or, perhaps there was.

February 23, 2014 12:58 pm

Anto says:
February 23, 2014 at 12:30 pm
Not very much “peer review” done on this one, was there?
Which one? Always refer to what you are commenting on.

Carla
February 23, 2014 1:09 pm

Well, I did have to do a little history on the higher eV GCR and our measurement of them.
<100 GeV Nagashima et al. (1998) Hall et al. (1999)
4 TeV Tibet ASγ Amenomori et al. (2006)
10 TeV Super Kamiokande Guillian et al. (2007)
4 TeV ARGO-YBJ Zhang et al. (2009)
5 TeV Milagro Abdo et al. (2009)
20 TeV IceCube Abbasi et al. (2010)
So, like the change in interstellar wind direction, these higher eV GCR could have also recently changed in quantity..

February 23, 2014 1:11 pm

Carla says:
February 23, 2014 at 1:09 pm
So, like the change in interstellar wind direction, these higher eV GCR could have also recently changed in quantity..
Neither has any effect on the Earth or the Sun, so try to regain some perspective.

February 23, 2014 1:48 pm

The purpose of solar reconstructions is, in the main, to quantify the solar influence on the Earth’s climate and to distinguish between the different forcings, so that climate model simulations can reproduce climate trends more accurately, or that is what they say.
In this graph (with Anthony’s permission)
http://www.vukcevic.talktalk.net/USL.htm
I compare Usoskin’s SSN, Steinhilber’s TSI and Loehle’s global temperature.
You can make your own estimate of how good or adequate they are.

Hank Mccard
February 23, 2014 2:03 pm

Lance Wallace says:
February 22, 2014 at 7:32 pm
In defense of JorgeKafkazar’s statement, I too was trained in astrophysics, and our general mantra was that “1=10″–that is, if we could estimate the energy of a blue giant as exp(55+-1) ergs we were happy
A nit but, AFAIK, exp(2) = 7.389 and exp(2.3026) ~ 10

Lars P.
February 23, 2014 3:12 pm

Latitude says:
February 22, 2014 at 3:42 pm
Figure 1 shows their reconstructed decadal averages of sunspot numbers for the last three thousand years…..
serious question?……how is that possible?

I guess this from the abstracts explains it:
“Methods. We present a new adjustment-free, physical reconstruction of solar activity over the past three millennia, using the latest verified carbon cycle, 14C production, and archeomagnetic field models.”

Matthew R Marler
February 23, 2014 3:13 pm

Willis Eschenbach: However, I’m generally cautious about ascribing anything to “mixtures”.
Me too. That’s why I agreed that the paper had little evidence for more than one category of sunspot activity. It explains, I think, where their figures for mean/mode and sd came from.
They did fit the mixture to an actual empirical density, which we already know a sine curve is not.

Konrad
February 23, 2014 3:24 pm

BobG says:
February 23, 2014 at 9:10 am
————————————
It is notable that the high priests of the Church of Radiative Climatology have no solid explanation for recorded historical events such as the Medieval Warm Period or the Little Ice Age. They prefer to erase these events from their adjusted records with hockey sticks and other religious iconography.
While Willis’ issues with the paper here discussed are valid, Leif’s dismissal of the possibility of significant solar influence on climate is considerably less robust.
To illustrate this point, consider the oceans. The priests of the Church of Radiative Climatology have decreed that direct solar SW alone does not have the power to heat our oceans above freezing. They support this conclusion by using instantaneous radiative flux calculations with average solar radiation at the ocean’s surface, instead of calculating for SW heating at depth in a diurnal cycle where radiation peaks at over 1000 w/m2.
Empirical experiment shows a very great difference in average temperatures between intermittent SW heating at depth in transparent materials and averaged SW heating at the surface of opaque materials. Direct solar SW alone has the power to warm our oceans and sacred downwelling LWIR need not be invoked to keep them from freezing.
If the high priests of the Church of Radiative Climatology do not understand even the basic physics of how the sun heats our oceans, how reliable is their gospel that solar variation has little influence on climate?

Lars P.
February 23, 2014 3:27 pm

Willis says: “The surprising part to me was the claim by Usoskin et al. that in the decade centered on 1445, there were minus three (-3) sunspots on average … and there might have been as few as minus ten sunspots. “
If I remember correctly, Leif mentioned that even when there are no sunspots, the Sun still has a certain magnetic activity, but it does not reach the level needed to form a sunspot.
I guess the negative numbers come from the difficulty of modelling a decrease in magnetic activity from zero sunspots (as the number is still zero but there are still variations in the magnetic activity), which was then zeroed in real sunspot numbers?
That’s my two cents…

Anto
February 23, 2014 6:09 pm

@lsvalgaard – Yes, fair point. I was referring to Usoskin et al.

L.J. Neutron Man
February 23, 2014 6:32 pm

I believe the total Solar cycle is closer to 22 years. For an accurate Nyquist or Fourier analysis, a minimum sampling rate of slightly greater than 2n is required which would be closer to 50 years to prevent distortion of the resultant.

Dudley Horscroft
February 23, 2014 8:59 pm

Yogi Berra’s view on CAGW:
“In theory there is no difference between our climate warming predictions and actual temperatures. In practice there is.”
Yogi Berra
Read more at http://www.brainyquote.com/quotes/authors/y/yogi_berra.html

Dudley Horscroft
February 23, 2014 9:03 pm

Willis Eschenbach says:
February 23, 2014 at 2:03 pm
Willis, I believe that your strawberry and lime (not plain vanilla) curves may have come from an axis-shifted record of passenger numbers from any large transit operation. Perhaps NYCTA? But is it not misleading to think of negative passengers?

Konrad
February 23, 2014 11:22 pm

Willis Eschenbach says:
February 23, 2014 at 8:17 pm
————————————-
My comment was posted in response to BobG; however, you have chosen to step up to the plate. No matter. Batter up!
“So before making further untrue claims, you should actually run the numbers on the actual observations so you can check your numbers. Having done so myself, I can assure you that the downwelling solar radiation at the surface is nowhere near enough to explain the temperature at the ocean. And yes, this is including the SW heating at depth, and the peak radiation during the day.”
Willis, I am not making “untrue claims”. You always say “run the numbers” and I always say “run the empirical experiment”. After all, the numbers are no use if you do not understand the fundamental physics. You have shown time and again that you do not. You still persist with the view that incident LWIR can heat or slow the cooling rate of liquid water that is free to evaporatively cool. Now you seem to be claiming that the oceans would freeze in the absence of downwelling LWIR. I have run the empirical experiments and I have clearly claimed –
“Empirical experiment shows a very great difference in average temperatures between intermittent SW heating at depth in transparent materials and averaged SW heating at the surface of opaque materials.”
What does “averaged SW heating at the surface of opaque materials” equate to? Could it be applying SB calculations to the surface of the oceans?
Willis, Anthony Watts, the host of this site, started his journey with empirical experiment and observation. It is your influence that has steered it back to applying instantaneous radiative flux calculations to moving fluids and transparent materials, where calculating non-radiative energy transports is critical.
You have chosen to step up to the plate. I cannot be blamed for disruptive behaviour this time. I stated –
“Empirical experiment shows a very great difference in average temperatures between intermittent SW heating at depth in transparent materials and averaged SW heating at the surface of opaque materials.”
Batter up! Is my claim “untrue” Willis?

February 24, 2014 12:30 am

Sunspot count during this February has been on the high side and it is likely to end above 100 (non-smoothed). This would be a new monthly peak for SC24 and the highest monthly count since October 2002. Composite polar field has moved back into ‘negative’ territory; it appears that SC24 isn’t ‘done’ yet and might still have a surprise or two.

cd
February 24, 2014 6:51 am

Willis
In the abstract they clearly state that they’re dealing with a bimodal distribution. However, it seems that they’re going a step further and assuming two unimodal distributions (not necessarily the same thing as bimodal – there are a suite of functions to assess the appropriateness of such assumptions); and they’re looking at only one (the higher), in which case their mode may approximate the mean of the “right” distribution. That’s what it looks like to me.
They may have also done a normal score transformation in order to acquire some of their stats (obviously not the mode) and hence the blue curve.
But it is certainly a little unconventional.
As for their “smoothing/averaging” of the sun-spot cycle – what? I didn’t/couldn’t read the article but it seems pretty poor stuff. Are you sure this is what they did? But given the quality of their data review it seems very likely.

tadchem
February 24, 2014 10:35 am

One of the first lessons of data analysis I was given in graduate school (by a Professor of Physical Chemistry) was this: “There are 3 kinds of liars. There are plain Liars, Damned Liars, and Deconvoluters.” The claim of resolving this ‘data’ into three ‘modes’ is an example of deconvolution.

1sky1
February 24, 2014 4:23 pm

Inasmuch as the Schwabe cycle is by no means strictly periodic and the power density peaks closer to 11 yrs than to 12 yrs, Willis’ Figure 5 is somewhat misleading regarding the strength of aliasing introduced by decimation of sunspot data to decadal average values. Nevertheless, to avoid aliasing in trying to reveal longer-term variations, the original data should have been either pre-filtered to suppress the Schwabe cycle completely or a decadal average computed, say, every 5 years.
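
One way to implement the pre-filtering described above, sketched with scipy on stand-in annual data: low-pass well below the Schwabe band, then decimate.

```python
import numpy as np
from scipy import signal

# Stand-in annual series with an 11-year (Schwabe-like) cycle.
years = np.arange(1700, 2000)
annual = 50.0 + 40.0 * np.sin(2.0 * np.pi * years / 11.0)

# Zero-phase low-pass with a cutoff well below 1/11 per year (here
# 1/25 per year, expressed as a fraction of the Nyquist rate of 0.5).
b, a = signal.butter(4, (1.0 / 25.0) / 0.5)
smoothed = signal.filtfilt(b, a, annual)

# Decimating the filtered series to decadal values no longer aliases
# the suppressed cycle into spurious long-period swings.
decadal = smoothed[::10]
```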

Peter Sable
February 24, 2014 7:52 pm

I spent last night reviewing my college textbook “Discrete Time Signal Processing” (Oppenheim et al.). If the following elements in analyzing finite-time series are not described in detail, then it’s highly likely the author has put spurious signals into the data and the analysis has to be discarded.
1. Leakage prevention by proper windowing of the finite series. The choice of window should be described (e.g. Kaiser, Hamming, etc). Note that the default (no window) is a boxcar, with bad results. Think of the finite series as an infinite repeat of that series when the math is actually done. If the beginning and ending don’t match you’re going to get spurious leakage.
2. Attention to the Nyquist criterion. Any time I see “averaged over” with decimation it’s clear that the author has done something wrong. The proper technique is upsampling followed by a quality filter such as Gaussian, Hamming, etc. i.e. like your CD player works.
3. Resolution (analogous to Nyquist but for the low frequency components). You can’t discern a 60 year cycle from 70 years of data; you need at minimum 120 years of data, and even then the resolution of frequency is very low. Proper padding with zeros needs to be done to prevent aliasing low frequencies down to DC.
4. Quantization noise. All measurements have errors, and this needs to be modeled in the analysis.
5. The original data and in-between results must be provided to show that the author didn’t make easy-to-make mistakes such as applying the above steps in the wrong order. This was a requirement when this was used at two industrial firms I worked at in the 1990s. Mistakes are very easy to make, it’s a very complex and hard to understand subject.
References: the textbook, http://www.amazon.com/Discrete-Time-Signal-Processing-Edition-Prentice/dp/0131988425. Accurate but practically unreadable if you don’t already know what you are doing; I distinctly remember hating it in college.
More accessible: http://www.silcom.com/~aludwig/Signal_processing/Signal_processing.htm#Resolution_and_bandwidth
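A minimal sketch of point 1 above, showing the leakage a boxcar (no window) produces when the record holds a non-integer number of cycles, and how a Hamming window suppresses it (synthetic tone; illustrative only):

```python
# Spectral leakage demo: boxcar (no window) versus a Hamming window, on a
# synthetic tone whose period does not divide the record length.
import numpy as np

n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * 10.37 * t / n)   # non-integer cycles: ends don't match

boxcar = np.abs(np.fft.rfft(x))
hamming = np.abs(np.fft.rfft(x * np.hamming(n)))

# Energy far from the true bin (~10) is leakage; windowing suppresses it.
far = np.arange(len(boxcar)) > 30
print("leakage energy, boxcar :", (boxcar[far] ** 2).sum())
print("leakage energy, hamming:", (hamming[far] ** 2).sum())
```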

Konrad
February 25, 2014 12:06 am

Willis Eschenbach says:
February 24, 2014 at 11:59 am
——————————————
Willis,
I think your point about “negative sunspots” has been quite effectively made on this thread, so it should be safe to respond to the SW/LWIR into oceans issue.
“I pointed out that we have actual 20-minute data on this. We don’t need to use “average solar radiation” as you claim, and so we don’t. Your claim is wrong. We use instantaneous measurements of the SW heating at depth, not averages as you say.”
What I was referring to was initial two-shell radiative modelling such as depicted in the infamous Trenberth-Kiehl diagram. This kind of modelling does indeed treat incident SW over the oceans as a constant ¼ sun. Initial claims of the radiative greenhouse effect are based on this kind of modelling.
“The TAO buoys are the empirical experiment par excellence. When we look at their data we are looking at experimental results.”
This is correct, and I note that in quantifying the ITCZ cloud thermostat you have found the data very useful. It is disappointing to see so much money wasted in the AGW madness that could be spent on repair and upgrade of this system. However what is being measured is the noisy real world.
“So instead of measuring the effect of varying longwave and shortwave on some foam-lined box in the lab, we are measuring the effect of varying longwave and shortwave above, at the surface of, and at depth in the actual factual ocean [..] From those results we can determine how much the sun heats the ocean and how much the LW heats the ocean.”
The problem with using the TAO buoy array is noise. To determine if incident LWIR can heat or slow the cooling rate of liquid water that is free to evaporatively cool, a simple clean lab experiment should be all that is required. No priest or acolyte of the Church of Radiative Climatology can ever produce one when challenged. Why is that? Every study claiming that LWIR can slow the cooling of the oceans has been a noisy outdoor study with chronic limitations. The Marriott “study” even went so far as to merge day and night data!
I have run the simple and above all clean empirical experiments. Incident LWIR can heat (but not above the temperature of the IR source) or slow the cooling of almost any material that does not evaporatively cool. It just doesn’t work for liquid water that is free to evaporatively cool.*
“For a discussion of the diurnal effects of longwave and shortwave, you might enjoy my post “Cloud Radiation Forcing in the TAO Dataset”.”
I did read this at the time, Willis, and I believe I posed a question about night-only data that went unanswered. The reason for this is that the only way TAO buoy data could prove LWIR having an effect on the cooling rate of the oceans would be to observe the cooling curve of a surface-following thermometer on a night with low drifting 4-okta cloud cover. Flat spots in the cooling curve (adjusted for surface wind speed variation) should correlate with peaks in LWIR from drifting cloud (no using day data, as Marriott did, due to low-angle-of-incidence scattering of UV, SW and SWIR).
*My experiments show that LWIR neither heats nor slows the cooling rate of liquid water that is free to evaporatively cool, and does have an effect close to the Trenberth-Kiehl style claims when evaporation is mechanically restricted. However, I believe it should be possible to get a result if the water and the air above it are very cold and no evaporation is occurring.
Regards,
Konrad.
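A toy sketch of the night-only test Konrad proposes above; every series and threshold here is hypothetical (not TAO variables), so it shows only the shape of the analysis, not a result:

```python
# Toy night-only test: does the cooling rate of a surface-following
# thermometer correlate with downwelling LWIR from passing cloud?
# All data here are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(1)
minutes = np.arange(0, 600)                        # one night, 10 h
lwir = 350 + 60 * (rng.random(600) < 0.3)          # cloud passages boost LWIR
sst = 28 - 0.002 * np.cumsum(1 - 0.5 * (lwir > 380))  # toy cooling curve

cooling_rate = np.gradient(sst, minutes)           # dT/dt (negative at night)
r = np.corrcoef(cooling_rate, lwir)[0, 1]
print(f"correlation of cooling rate with LWIR: r = {r:.2f}")
# The hypothesis under test predicts r > 0 at night: slower cooling
# (less negative dT/dt) under cloud. No wind adjustment is modelled here.
```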

William Astley
February 25, 2014 1:16 am

In reply to
lsvalgaard says:
Evidence for distinct modes of solar activity
I. G. Usoskin, G. Hulot, Y. Gallet, R. Roth, A. Licht, F. Joos, G. A. Kovaltsov, E. Thébault and A. Khokhlov
William:
The cosmogenic isotope data supports Usoskin’s assertion that the solar magnetic cycle activity in the last 70 years was the highest in 70 years.
We present a new adjustment-free, physical reconstruction of solar activity over the past three millennia, using the latest verified carbon cycle, 14C production, and archeomagnetic field models. This great improvement allowed us to study different modes of solar activity at an unprecedented level of details.
William: Your name-calling indicates you have no scientific response to their adjustment-free analysis.
Conclusions. The Sun is shown to operate in distinct modes – a main general mode, a Grand minimum mode corresponding to an inactive Sun, and a possible Grand maximum mode corresponding to an unusually active Sun. These results provide important constraints for both dynamo models of Sun-like stars and investigations of possible solar influence on Earth’s climate.
William: Curious that we are suddenly starting to observe cooling of both poles.

Konrad
February 25, 2014 1:36 am

Willis Eschenbach says:
February 24, 2014 at 12:33 pm
———————————————
“Empirical experiment shows a very great difference in average temperatures between intermittent SW heating at depth in transparent materials than averaged SW heating at the surface of opaque materials.”
I asked if this claim was untrue. You responded –
“No clue. Not enough information there to answer the question. What transparent and opaque materials are we speaking about, for example? Averaged over what period? You know … details. Not enough of them.”
Fair enough. This was a reference to a simple experiment described previously on this blog showing that SW heating of transparent materials at depth gives very different results than SW heating of the same materials if their upper surface is opaque. The purpose of the experiment is to show how an intermittent SW source peaking over 1000 W/m2 is quite sufficient to warm our oceans. I point out again that the high priests of the Church of Radiative Climatology have decreed that solar SW alone is not enough to keep our oceans from freezing.
Here is the recipe for “Shredded Lukewarm turkey in Boltzmannic vinegar”
Take two 100 x 100 x 10mm blocks of clear acrylic. Paint one black on the base (block A), and the second black on the top surface (block B). Spray both blocks with several layers of clear-coat on their top surfaces to ensure equal reflectivity and IR emissivity. Attach thermocouples to upper and lower surfaces. Insulate the blocks on the sides and base. Enclose each in a small LDPE greenhouse to minimise conductive losses. Now expose to strong solar SW.
As little as 3 hours should result in a 17C average differential between the blocks. The block with the black base runs hotter. SB equations will not give the correct answer. (Caution: experiment temperatures can exceed 115C.)
What would the priests of the Church of Radiative Climatology say? Both blocks are absorbing the same amount of solar radiation, both blocks have the same ability to emit LWIR, they should reach the same equilibrium temperature.
However block A reaches a far higher average temperature. Why? The SW absorbed by block A heats it from the base, and non-radiative transports (conduction) govern how fast energy returns to the surface to be radiated as LWIR. The SW absorbed by block B is absorbed at the surface, and some is immediately re-radiated as LWIR before conduction can carry it down into the block below. Our oceans most closely resemble block A; however, two-shell radiative models that treat the ocean as just “surface” model the oceans more like block B.
This is how solar SW alone is quite sufficient to heat our oceans. SW heating at depth is instantaneous, however the slow speed of non-radiative transport back to the surface allows energy to accumulate over the diurnal cycle. If our oceans could be instantly turned to ice, my crude guess is that it may take over a decade for the sun to thaw them, but they would thaw even under a non-radiative atmosphere.
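A two-node steady-state sketch of the disputed point, under stated assumptions (unit emissivity, a fixed base-to-surface conductance, no convective loss): both blocks radiate at the same surface temperature, yet the block heated at the base is warmer on average inside. It neither validates nor refutes the experiment:

```python
# A two-node steady-state sketch (assumptions: unit emissivity, conductance
# k between base and surface, surface radiates to space, sides insulated).
SIGMA = 5.670e-8   # W m^-2 K^-4
Q = 800.0          # W/m^2 absorbed SW (illustrative)
k = 20.0           # W m^-2 K^-1 base-to-surface conductance (illustrative)

Ts = (Q / SIGMA) ** 0.25    # at steady state the surface radiates Q either way
Tb_A = Ts + Q / k           # block A: absorbed at the base, conducted upward
Tb_B = Ts                   # block B: absorbed at the surface, no internal flow

print(f"surface temperature, both blocks: {Ts:.1f} K")
print(f"mean of block A: {(Ts + Tb_A) / 2:.1f} K")
print(f"mean of block B: {(Ts + Tb_B) / 2:.1f} K")
```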
“Put different amounts of energy into different things in different ways, you get different resulting temperatures.”
I am putting the SAME amount of energy into materials in different ways and getting different temperatures.
“However, I don’t understand your point in highlighting that obvious fact.”
The point is this –
The sun heats our oceans.
The net effect of the atmosphere over the oceans is ocean cooling.
The net effect of radiative gases in our atmosphere is atmospheric cooling.
AGW is a physical impossibility.
The next question I hope to answer through empirical experiment is how hot our oceans would get if all atmospheric features above, excepting pressure, were removed: no evaporative or conductive cooling, but also no downwelling LWIR –
http://i42.tinypic.com/315nbdl.jpg
The claims of the priests of the Church of Radiative Climatology indicate that the water sample should freeze. Do you seriously think there is any chance of that? How did they go with the whole “the effect of clouds on surface temperatures is neutral” thing?
Regards,
Konrad.

February 25, 2014 2:29 am

William Astley says:
February 25, 2014 at 1:16 am
The cosmogenic isotope data supports Usoskin’s assertion that the solar magnetic cycle activity in the last 70 years was the highest in 70 years.
Considering that the 14C data does not cover the last 70 years your statement is very curious.
possible Grand maximum mode corresponding to an unusually active Sun. These results provide important constraints for both dynamo models of Sun-like stars and investigations of possible solar influence on Earth’s climate.
As there has not been a Grand Maximum in the last 70 years [the sun has not been unusually active], no such constraints seem of importance.

1sky1
February 25, 2014 4:23 pm

Peter Sable says:
February 24, 2014 at 7:52 pm
Data “windowing” to suppress spectral “leakage” is relevant only for frequency-domain analysis. And zero-padding improves low-frequency resolution without distortion only if the original signal is time-limited (e.g., an FIR filter). Neither is advisable for time-domain analysis, such as discussed here.

William Astley
February 25, 2014 4:24 pm

In reply to:
The surprising part to me was the claim by Usoskin et al. that in the decade centered on 1445, there were minus three (-3) sunspots on average … and there might have been as few as minus ten sunspots. Like I said, Usoskin et al. seem to have discovered the sunspot equivalent of antimatter, the “anti-sunspot” … however, they must have wanted to hide their light under a bushel, as they’ve conveniently excluded the anti-sunspots from what they show in Figure 1 …
Negative sunspot counts occur because sunspot count is used as a proxy solar variable for all of the processes that modulate the solar heliosphere, which in turn modulates the cosmogenic isotope count.
To disprove a paper it is first necessary to understand the paper.
Usoskin use recent direct observation of sunspot count current era vs cosmogenic isotope count for calibration. In the past the cosmogenic isotope count is very, very low, which is not in agreement with silly comments in this forum that the Maunder minimum was more active than the current era or that the current era was not a grand maximum.
Those comments are incorrect.
Planetary cooling due to the interruption of solar magnetic cycle 24 will move the conversation from whether solar magnetic cycle changes do or do not modulate planetary temperature to how they do, and to why the AGW mechanism saturated.
The reason for the delay in the cooling is related to the physical reason why the AGW mechanism saturates. There has been a recent set of experimental results that confirm there is a very, very strong electromagnetic field about the sun that varies with the solar magnetic cycle. An example is the recent discovery that the size of the proton is too small (see the recent Scientific American article). A second example is the discovery that there is a change in atomic decay rates depending on distance from the sun, which can be explained by a scalar field about the sun. (A very, very strong electrostatic field affects both the proton size and atomic decay rates.)
There is a series of papers, beyond the scope of this forum, concerning quasar observations that confirm matter changes to resist the collapse of a very large object. The object that forms when very large objects collapse is active; it emits charge and forms a very, very strong magnetic field to arrest the collapse. A very, very strong electrostatic field also affects redshift, which explains redshift anomalies concerning both quasars and their host galaxies. The active object that forms when large objects collapse explains quasar jets, lineless quasar spectra, naked quasars, quasar clustering, the galaxy rotational anomaly, and the phenomena that dark energy and dark matter were created to explain.
Our sun formed from the collapse of a supernova. The core of our sun is this different state of matter.

February 25, 2014 4:40 pm

William Astley says:
February 25, 2014 at 4:24 pm
Usoskin use recent direct observation of sunspot count current era vs cosmogenic isotope count for calibration.
He has no recent cosmogenic data with which to calibrate. The record stops before 1950. He does not ‘use recent direct observation of sunspot count’, but the old Group Sunspot Number by Hoyt and Schatten which has been shown to be faulty.
The rest of your post is pure speculation with little basis in fact.

Konrad
February 25, 2014 6:18 pm

Willis Eschenbach says:
February 25, 2014 at 9:08 am
————————————
Willis,
thank you for your continuing polite and considered responses. I trust we have moved beyond the “asinine tripe” stage.
“Say what? Why would SB equations not give the right answer? In order for you to convince me of that, you’ll have to show the equations that you are using. I say this, because the SB equations have given other people the right answer for centuries … so if they give you the wrong answer, I’ve got to include “pilot error” in the differential diagnosis …”
Pilot error? I have a PPL. I may not be a bold pilot, but I am an old pilot 😉 I do not actually bother with SB calcs, as most of my work deals with issues that would require CFD. My experience is that sceptics now distrust computer modelling, so I use easily replicated empirical experiments to demonstrate my points. These experiments are designed to conclusively illustrate where instantaneous radiative flux equations fail.
“What would the priests of the Church of Radiative Climatology say? I haven’t a clue … I never met one of them.”
Try Trenberth, Kiehl or the high priest Dr. Perriehumbert, writer of the sacred texts and player of the devil’s instrument.
“Both blocks are absorbing the same amount of solar radiation, both blocks have the same ability to emit LWIR, they should reach the same equilibrium temperature. – Yes, and they will … but not in three hours.”
No, they won’t. And understanding why they won’t is critical.
“But heck, let’s play your game. […] Neither one of them will be radiating less than that, and neither one will be radiating more than that.”
The issue here is not radiative flux at the surface but the temperature of the material. I can easily illustrate this point with an absorption/emissivity example –
Take two aluminium plates, 2 mm thick and of equal area; polish one to a mirror finish and paint the other matt black. Place them in a vacuum and illuminate each with equal constant solar radiation. The matt black plate heats faster, but given time the “mirror” plate reaches the higher equilibrium temperature.*
My point is that just as emissivity & conduction can affect the equilibrium temperature of a material exposed to electromagnetic radiation, so too can surface transparency/translucency & conduction.
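A gray-body sketch of that claim, with assumed illustrative property values: since equilibrium temperature scales as (solar absorptivity / IR emissivity)^(1/4), a polished low-emissivity plate can indeed equilibrate hotter than a matt black one:

```python
# A gray-body equilibrium sketch: alpha * S = eps * sigma * T^4, radiating
# from the lit face only. Property values are illustrative assumptions.
SIGMA = 5.670e-8   # W m^-2 K^-4
S = 1000.0         # W/m^2 incident flux (illustrative)

def t_eq(alpha_solar, eps_ir):
    """Equilibrium temperature of a plate with the given properties."""
    return (alpha_solar * S / (eps_ir * SIGMA)) ** 0.25

print(f"matt black  (a=0.95, e=0.95): {t_eq(0.95, 0.95):.0f} K")
print(f"polished Al (a=0.15, e=0.05): {t_eq(0.15, 0.05):.0f} K")
# The polished plate absorbs less yet equilibrates hotter, because its
# IR emissivity is lower still: T scales as (alpha/eps)^(1/4).
```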
“And this is why we deal with radiative fluxes instead of temperatures, because radiative fluxes are conserved and temperature is not. In a steady state situation like the blocks or the earth, what goes in has to equal what comes out.”
And this, young Skywalker, is why you fail. What radiation goes in and what goes out gives you no clue as to the temperature profiles of moving fluids in a gravity field. Attempting to parametrise non-radiative fluxes is a dead end. For one thing it totally ignores “emergent phenomena”, as you have previously correctly noted.
Callendar tried to revive the AGW hypothesis in 1938. His paper was published along with comments from Sir George Simpson –
“..but he would like to mention a few points which Mr. Callendar might wish to reconsider. In the first place he thought it was not sufficiently realised by non-meteorologists who came for the first time to help the Society in its study, that it was impossible to solve the problem of the temperature distribution in the atmosphere by working out the radiation. The atmosphere was not in a state of radiative equilibrium, and it also received heat by transfer from one part to another. In the second place, one had to remember that the temperature distribution in the atmosphere was determined almost entirely by the movement of the air up and down. This forced the atmosphere into a temperature distribution which was quite out of balance with the radiation. One could not, therefore, calculate the effect of changing any one factor in the atmosphere..”
I contend that decades before my birth, Sir George Simpson had it right. What say you?
*The matt black/mirror example can also be applied to our atmosphere. Which does our atmosphere containing radiative gases most closely resemble? The matt black material, good at absorbing and emitting IR, or the mirror material, poor at absorbing and emitting IR?
Regards,
Konrad.

February 26, 2014 12:13 am

William Astley says:
February 25, 2014 at 4:24 pm
Like I said, Usoskin et al. seem to have discovered the sunspot equivalent of antimatter, the “anti-sunspot” … however, they must have wanted to hide their light under a bushel, as they’ve conveniently excluded the anti-sunspots from what they show in Figure 1 …
Negative SSN is a physical impossibility. It is not unusual that the Grand solar minima reconstructions show negative values for the SSN; this may occur as a by-product of the calculations. One of the reasons could be a conflict between the uncertainty in the geomagnetic field effect (which is subtracted) and the uncertainty in the radionuclide production, i.e.:
– Solar modulation of the cosmic ray particles is calculated by removing the effect of the geomagnetic field based on paleomagnetic data.
– Geomagnetic field intensity is estimated from paleomagnetic data using cosmogenic radionuclide production records to eliminate solar influence.
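A minimal Monte Carlo sketch of that mechanism, with illustrative numbers (not the paper’s error model): subtracting an uncertain geomagnetic term from an uncertain production term drives a quiet decade below zero in a sizeable fraction of realizations:

```python
# A minimal sketch of how subtracting an uncertain geomagnetic contribution
# from an uncertain production rate can yield negative reconstructed SSN
# during Grand minima. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
true_ssn = 3.0                                     # a very quiet decade
production = true_ssn + rng.normal(0, 5, 10_000)   # noisy production proxy
geomag_correction = rng.normal(0, 5, 10_000)       # noisy subtracted term

reconstructed = production - geomag_correction
print(f"fraction of realizations below zero: {(reconstructed < 0).mean():.0%}")
```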

Tom
Reply to  Willis Eschenbach
February 26, 2014 1:38 pm

@ Willis Eschenbach February 26, 2014 at 10:29 am
Willis, you said: “Of the total upwelling radiation from the surface of ~ 400 W/m2, the resulting radiation to space is 240 W/m2”.
I have no difficulty with the 240 W/m^2 figure, noting that this applies to the entire surface of the earth. I think you really need to focus on the “upwelling … 400 W/m^2”. Firstly, the “upwelling” figure you cite should be 480 W/m^2; we can get to that later, if need be. The source of the widespread misunderstanding of the ‘energy flux gap’ is the perceived need to conserve energy flux. There is no universal law which mandates that energy flux be conserved; however, there is a never-falsified law which requires that energy be conserved.
Equating the terrestrial flux output with the solar flux input is a fatal error. Isn’t this obvious? The incoming solar energy strikes only half of the earth’s surface area at any instant, compared to that surface area which is constantly radiating energy to space. If the area which a given amount of solar energy irradiates is halved, the energy flux, from the same amount of energy, doubles. So what? Quite simply, 480 W/m^2 enters half the system and 240 W/m^2 leaves from the entire system. There is no missing energy, ever.
I defy you to contest this “heated on half, cooled off double” logic. It is the physical reality of the geometry and thermodynamics of any planet heated by a single sun. Once you permit yourself to see that, the rest becomes obvious, and simple. Via Stefan-Boltzmann, a flux of 480 W/m^2 on half the globe would generate a (linearly averaged) temperature of +30C, which is kind of sensible, and which accords with people’s everyday experience that direct sunlight is hot. Ice melts and water evaporates. If, in the manner of Kiehl/Trenberth, you arbitrarily halve the actual incoming solar flux to artificially ‘average’ it over the entire global surface area, then, via SB, the sun generates only -18C. A spurious need for the atmospheric radiative GHE is thereby created; magic happens and an entire planet is heated by a frigid atmosphere to +15C – by a secondary alleged process, but with no new energy entering the system.
This entire scam is generated and sustained by wrongly trying to numerically equate terrestrial flux output from 2x m^2, with solar flux input to only x m^2. Pretty easy, really.
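For reference, a short sketch reproducing the arithmetic both sides cite; it simply applies the SB law to the two fluxes and takes no position on which averaging convention is physically appropriate:

```python
# Stefan-Boltzmann temperature for the two fluxes in dispute (emissivity 1).
SIGMA = 5.670e-8   # W m^-2 K^-4

def sb_temp(flux_wm2):
    """Blackbody temperature radiating the given flux."""
    return (flux_wm2 / SIGMA) ** 0.25

for flux in (480.0, 240.0):
    t = sb_temp(flux)
    print(f"{flux:5.0f} W/m^2 -> {t:5.1f} K ({t - 273.15:+5.1f} C)")
# 480 -> ~303 K (+30 C) on the lit hemisphere; 240 -> ~255 K (-18 C) for a
# whole-sphere average. Because T goes as flux^(1/4), averaging flux and
# averaging temperature are not interchangeable.
```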

Nikola Milovic
February 26, 2014 10:56 am

All the previous works of researchers, from the very beginning of thinking about phenomena on the sun and sunspots, have produced two kinds of results, neither with true conclusions about why and how changes occur on the sun and what the true cause of their occurrence is. The first kind of result is obtained by measuring different physical phenomena on the sun; the other is the result of assigning fictional and completely illogical values, such as data on the number of sunspots from more than a few thousand years ago. Who measured the number of those spots then, and on what assumption was the result obtained? I must stress, no matter how anyone takes this, that these are all data and conclusions that do not lead to actual results. These solar phenomena, and everything related to them, have completely different causes, and since science does not want to learn them, it will wander deceived for many years, along with the people who want to know the truth.
Here I ask all of you who argue about this: do any of you want to come down to my level of knowledge about these phenomena and establish a correspondence, with me under a contractual obligation to give my evidence and my idea of how to solve this problem? There are laws in celestial mechanics by which one can reach a solution to this enigma for all time (past, present, future). If you want cooperation, and have no fear of those who would determine what you can and cannot decide for yourselves, agree to make a contract according to which this research is conducted. It will take a lot of astronomical data and a powerful program for computing and animation, but I am sure I will get to the results. I have pursued this, but with enormous difficulties, especially because none of you will hear of new methods from people who are unknown to you, as I am. I hope that among you there are those who respect this debate, especially since I have dealt with it for a few tens of years; I am now 77 years old, and it would not be appropriate for me to deceive someone. I am waiting for an answer.

Tom
February 26, 2014 6:43 pm

Searching for that .gif of a tumbleweed blowing through … maybe with a mission bell, faintly audible over the noise of the wind …

Konrad
February 26, 2014 8:33 pm

Willis Eschenbach says:
February 26, 2014 at 10:01 am
—————————————-
Willis,
thank you again for taking the time to respond.
“Analyses of the energy budget of the planet are done in terms of radiative fluxes (W/m2) and not in terms of temperature. Why? Because radiative flux is a measure of energy and energy is conserved … whereas temperature is not conserved. You can’t do an energy budget in terms of temperature.”
If we had better satellite measurement of ingoing and outgoing radiation from the planet, we could say whether the planet was accumulating or losing energy (errors as great as 5 W/m2 indicate we currently cannot do this). However, we could not say from this measurement where in the oceans or atmosphere this was affecting temperatures. The global warming claims are all about near-surface temperature. For this, non-radiative transports need to be correctly modelled. In the original “basic physics” of the “settled science” this was not done correctly. The priests of the Church of Radiative Climatology tried to simply parametrise these fluxes as constants. The glaring errors are –
Speed of tropospheric convective circulation not increased for increasing concentrations of radiative gases.
Timing of emergent phenomena not advanced in the diurnal cycle for increasing concentrations of radiative gases.
SW accumulation in the transparent oceans not properly modelled.
“We agree that fluxes and temperatures are very different.”
And this I contend is the heart of the matter.
“Whenever some …”
Given some of my own language on blogs I’d best let this one go 😉
“WE DON’T CARE ABOUT THE ABSOLUTE TEMPERATURE, KONRAD!
We’re attempting to follow the flow of the energy around the system, not the temperature.”
Again, the AGW scare concerns near-surface temperatures. 2C is supposed to cause doooooom.
“Now, you are correct that without exact knowledge of the emissivity, we cannot say what the exact temperature of something is from knowing how much it radiates.”
Yes; however, I am pointing out via empirical experiment that it is not just emissivity but the translucency/transparency of the material that matters. Remember, in the acrylic block experiment both blocks have the same surface LWIR emissivity.
“It’s true that radiation alone won’t tell us the absolute temperature profiles (although we can get quite close).”
The problem here is that both radiative and non-radiative energy transports must be correctly modelled, and non-radiative transports are far harder. As I show through empirical experiment “quite close” is nowhere near close enough.
——————————————————
“You are aware, I’m sure, of the existence of infrared thermometers. What you do is set the emissivity dial to the known emissivity of the substance in question, and the IR thermometer tells you its temperature.”
I own one. It can be seen here where I am repeating the transparent material experiment under intermittently cycled halogen lights –
http://i61.tinypic.com/2z562y1.jpg
Given the speed of the instrument you can compare the steady emission from block A and the “sawtooth” emission from block B.
“This is used, for example, in stand-off measurements of the small-scale (centimetres) variation in the surface temperature of the ocean. The emissivity of water is known (0.96), so then an IR camera can record the minute fluctuations in SSTs over the field of vision of the camera. Fascinating stuff.”
“So you are 100% correct that “what radiation goes in … gives you no clue as to the temperature”. The temperature cannot be derived from knowing what radiation is going in to an object. […] The thermal radiation given off by an object can give us a very detailed and accurate picture of the temperature.”
“Not only that, but since thermal radiation is used to measure the temperature of the surface of the ocean, it falsifies your objection that converting radiation to temperature won’t work on “moving fluids in a gravity field”.”
The attempts of the climate “scientists” to use this method have led them to the provably false conclusion that the oceans would freeze in the absence of downwelling LWIR.
—————————————
“I say that he [Sir George Simpson] was right. I would add that it is not sufficiently realised by non-meteorologists that it is also impossible to solve the problem of the temperature distribution in the atmosphere without working out the radiation …”
We seem to be in agreement: both radiative and non-radiative transports need to be correctly worked out. I am showing through empirical experiment that climate “scientists” have gotten the non-radiative transports hideously wrong.
“Of the total upwelling radiation from the surface of ~ 400 W/m2, the resulting radiation to space is 240 W/m2. That puts the overall global average strength of the greenhouse effect at (400-240)/400 = 40%. In other words, some 40% of the upwelling longwave from the surface is intercepted by the atmosphere.”
And around 90% of the solar energy absorbed by the land, oceans and atmosphere is emitted back to space via radiative gases in the atmosphere. While there is a radiative greenhouse effect on earth there is not a NET radiative greenhouse effect.
I have shown via empirical experiment that it is not just emissivity that governs the equilibrium temperature of a material exposed to solar radiation, but translucency/transparency as well. On this basis I claim that the gospel of the Church of Radiative Climatology is in error regarding how our oceans heat. I contend that if our oceans could be retained without an atmosphere they would likely reach 80C or beyond. This means that the net effect of the atmosphere over the oceans is ocean cooling. Given that radiative gases are the only effective means of atmospheric cooling, this means that the net effect of radiative gases in our atmosphere is planetary cooling at all concentrations above 0.0 ppm.
Willis, do you feel that if our oceans could be retained without an atmosphere they would freeze?
Regards,
Konrad.

Tom
February 28, 2014 5:24 pm

@ Willis
Re my February 26, 2014 at 1:38 pm
Willis,
I haven’t seen a reply to my post with the above time stamp, or any posts after Konrad’s February 26, 2014 at 8:33 pm. I would appreciate a considered response.
What say you to the folly of trying to numerically equate terrestrial flux output with solar flux input, given the 2 x factor difference in area involved and the crucial T^4 relationship via the SB law?
Sincerely,
Tom.

Nikola Milovic
March 1, 2014 3:22 am

Willis, you are the first specialist dealing with the causes of climate change who has decided to come down to my level of knowledge of those causes. I see that it is not clear to you and the others what I want to achieve with this request of mine. My answer is gradually becoming clearer for anyone who wants to join me in understanding the true causes of climate change. I have some indicators that confirm the cause. Everything explored to date in this direction has given the right answers. I offer some ideas that, in my opinion, can give the quite true causes of these phenomena. Since this level of knowledge, and the solution of enormously important, so far unsolved mysteries, is at stake, it is not normal that I should offer it free of charge, especially since some people get rich on erroneous and fraudulent ideas. I offer a contractual obligation to resolve this issue. If my ideas are not true, my own theory will still contain more than enough to be worth the cost of the enterprise. If there is interest in knowing the true causes of climate change, why is there not an institution that wants to help solve them for the benefit of mankind?

Tom
March 1, 2014 3:51 am

@ Willis February 28, 2014 at 5:51 pm
Thanks for the reply. I won’t reciprocate the somewhat smug, patronising tone in this one. You simply don’t, or can’t, appreciate the reality of the physics of the earth/sun system. Instead of dealing with the physical fact that only half of the globe is illuminated at any one time, you alter this reality to an invented one. The invention doubles the surface area lit by the same sun.
From that point on, any such model is physically meaningless. Any logic which flows from it is physically meaningless, as are any conclusions drawn from it – because none of it is actually happening in reality.
In real physics, matter which is affected by energy reacts instantaneously. It does not decide to hang around until an ‘average’ develops. If you deal in reality, one half of the globe gets appropriately hot for the energy hitting it. In your averaged delusion, your sunlight is cold – too cold to melt ice or evaporate water. That is precisely the absurdity which results from such “a profound misunderstanding” of the actual physics of the system.
I congratulate you on your announcement of a new law in physics: the “Willis Eschenbach Law of Conservation of Energy … Flux”. By the way, your ‘thought experiment’ analogy is DOA. You forgot to rotate the planet, again.

Tom
Reply to  Willis Eschenbach
March 1, 2014 11:37 am

@ Willis March 1, 2014 at 10:34 am
All very intriguing Willis. Especially this bit:
“Averages have worked for scientists for centuries, I see no reason to stop using them now”.
By that statement, if you had your top half in a furnace and your bottom half in a freezer, on average you’d be fine, I presume. Or you could solve a constant-thrust, reducing-mass calculation for a space launch by using the average of the mass of the craft at launch and at orbital entry. Good luck with both of those.
In your flat earth, constantly lit paradigm, we are to believe the absurd notion that direct sunlight can’t generate enough heat at the earth’s surface to melt ice and evaporate water (‘average’ temp = -18C). But that’s ok, so long as the fluxes are artificially equalised by pretending the entire globe is always in daylight.
You could probably pick up a Nobel prize for that sort of tosh – hang on a minute …

Tom
Reply to  Willis Eschenbach
March 1, 2014 1:08 pm

Willis, how ironic. You are doing precisely what you accuse me of doing (using averages in such an inappropriate manner).
You are halving the actual solar input flux, by pretending the earth is flat and all of it is constantly lit by the sun, which is obviously not the case. With the associated T^4 relationship, a gross error is induced by using the flux-conserved temperature, instead of the energy-conserved temperature which is actually generated at the surface.
You may not be a physicist Willis, but you are not stupid. Why can’t you see this? What would it take for you to confront and accept this physical reality, which is entirely in keeping with the Law of Conservation of Energy?

Tom
Reply to  Willis Eschenbach
March 1, 2014 2:33 pm

@ Willis March 1, 2014 at 1:33 pm
Willis, I’m trying to get you to address the fundamental behaviour of matter and energy in an actual physical arrangement. You give me a piece on the arithmetic mean in a demographic context.
I will concede you this working-out of an average: the linearly averaged solar flux which is actually incident on half of the globe at any instant is 480 W/m^2. This can generate a linearly averaged temperature of +30C. This is real, Willis, unless you’d care to contest the SB law.
Any other arrangement is not real. It is imaginary. It doesn’t exist. If you follow the numbers in this method, you can preserve the arithmetic which ‘shows’ that flux in = flux out – BUT – you do not preserve the physics of the actual arrangement.
If you genuinely can’t see this, then maybe I spoke too soon.

Tom
Reply to  Willis Eschenbach
March 2, 2014 12:40 am

@ Willis March 1, 2014 at 2:33 pm
“However, if you think getting an average of 480 W/m2 half the time is somehow different than getting an average of 240 W/m2 all the time, I’m not sure what I can say …”
No, Willis. It isn’t “somehow” different. It is fundamentally different. Averages are useful up to the point where an absurdity is created. For example, if it takes a woman 9 months to gestate a baby, you don’t get the baby in 4.5 months by putting 2 women on the job. Or if an aircraft needs 10,000 ft of runway to get airborne, there’s no point in offering the captain two nearby 5,000 ft runways.
Meanwhile, back to your flat earth, cold sun absurdity: the plain fact is that +30C ≠ -18C. The former is sufficient to spontaneously generate and sustain the water cycle; the latter isn’t. You just can’t skip over this large difference which is caused by that T^4 relationship. You arrive at that absurdity by wrongly conserving energy flux, instead of conserving energy. You conduct a successful arithmetic operation by your pseudo-scientific method, but the patient (physical reality) dies. If you can’t, or won’t, see this, then I don’t know what to say.
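Neither single number settles the case of a rotating body with heat capacity. A toy time-stepping sketch (all parameter values are illustrative assumptions) shows the time-mean temperature under intermittent 480/0 forcing falls below the steady-240 value by an amount that depends entirely on the assumed heat capacity:

```python
# A toy time-stepping sketch of the disputed point: a slab with heat
# capacity C absorbing 480 W/m2 for half of each day (and 0 for the other
# half) versus a steady 240 W/m2. All parameter values are illustrative.
import numpy as np

SIGMA = 5.670e-8   # W m^-2 K^-4
C = 1.0e5          # J m^-2 K^-1 (a thin, fast-responding surface layer)
dt = 60.0          # s
day = 86400.0
steps = int(20 * day / dt)

T_int, T_std = 255.0, 255.0
hist_int, hist_std = [], []
for k in range(steps):
    q = 480.0 if (k * dt) % day < day / 2 else 0.0
    T_int += dt * (q - SIGMA * T_int**4) / C       # explicit Euler step
    T_std += dt * (240.0 - SIGMA * T_std**4) / C
    if k > steps // 2:                             # skip the spin-up
        hist_int.append(T_int)
        hist_std.append(T_std)

print(f"mean T, intermittent 480/0: {np.mean(hist_int):.1f} K")
print(f"mean T, steady 240        : {np.mean(hist_std):.1f} K")
# Because emission goes as T^4, the time mean under intermittent forcing
# sits below the steady-flux value; how far below depends entirely on C.
```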

Tom
Reply to  Willis Eschenbach
March 2, 2014 1:02 am

Always happy to highlight the difference between real and pseudo science Willis. You’re welcome.