Scafetta: Benestad and Schmidt’s calculations are “robustly” flawed.


Nicola Scafetta Comments on “Solar Trends And Global Warming” by Benestad and Schmidt

From Climate Science — Roger Pielke Sr.

On July 22, 2009, I posted on the new paper on solar forcing by Lean and Rind (2009). In that post, I also referred to the Benestad and Schmidt (2009) paper on solar forcing, which reaches a conclusion at variance with that of the Lean and Rind paper.

After the publication of my post, Nicola Scafetta asked if he could present a comment (as a guest weblog) on the Benestad and Schmidt paper on my website, since it will take several months for his comment to make it through the review process. In the interest of presenting both perspectives on the issue of solar climate forcing, Nicola's post appears below. I also invite Benestad and Schmidt to write responses to the Scafetta contribution, which I would be glad to post on my website.

GUEST WEBLOG BY NICOLA SCAFETTA

Benestad and Schmidt have recently published a paper in JGR (Benestad, R. E., and G. A. Schmidt (2009), Solar trends and global warming, J. Geophys. Res., 114, D14101, doi:10.1029/2008JD011639).

This paper criticizes the mathematical algorithms of several papers claiming that the temperature data show a significant solar signature. The authors argue that such algorithms are “nonrobust” and conclude that

“the most likely contribution from solar forcing a global warming is 7 ± 1% for the 20th century and is negligible for warming since 1980.”

By using the word “robust” and its derivatives 18 times, Benestad and Schmidt claim to disprove two categories of papers:

those that use the multilinear regression analysis [Lean and Rind, 2008; Camp and Tung, 2007; Ingram, 2006] and those that present an alternative approach [Scafetta and West, 2005, 2006a, 2006b, 2007, 2008]. (See the references in their paper.)

Herein, I will discuss neither the limitations of multilinear regression analysis nor the limits of Benestad and Schmidt’s critique of those papers. I will focus briefly on Benestad and Schmidt’s criticism of the papers that I coauthored with Dr. West. I found Benestad and Schmidt’s claims to be extremely misleading and full of gratuitous criticism, stemming from a poor reading and understanding of the data analysis accomplished in our works.

Let us examine some of these misleading statements and errors, starting with the least serious and ending with the most serious:

1. Starting with the abstract, Benestad and Schmidt claim to be rebutting several of our papers [Scafetta and West, 2005, 2006a, 2006b, 2007, 2008]. The abstract itself is misleading: their criticism in fact focuses only on Scafetta and West [2005, 2006a]. The other papers used different data and mathematical methodologies.

2. Benestad and Schmidt claim that we have neither disclosed nor detailed the mathematical methodology and some of the parameters that we use. For example:

a) In paragraph 39, Benestad and Schmidt criticize and dismiss my paper with Willson [2009] by claiming that we “did not provide any detailed description of the method used to derive their results, and while they derived a positive minima trend for their composite, it is not clear how a positive minima trend could arise from a combination of the reconstruction of Krivova et al. [2007] and PMOD, when none of these by themselves contained such a trend).” However, the arguments are quite clear in that paper and in the additional figures we published as supporting material. Moreover, it is not clear to me how Benestad and Schmidt could conclude that our work is wrong while acknowledging that they have not understood it. Perhaps they just needed to study it more carefully.

b) In paragraph 41 Benestad and Schmidt claim that “It is not clear how the lagged values were estimated by Scafetta and West [2006a]”. However, paragraph 9 of SW06a states: “we adopt the same time-lags as predicted by Wigley’s [1988, Table 1] model.” So, again, Benestad and Schmidt just needed to study the paper they wanted to criticize more carefully.

c) In paragraph 48 Benestad and Schmidt claim that: “over the much shorter 1980-2002 period and used a global surface temperature from the Climate Research Unit, 2005 (they did not provide any reference to the data nor did they specify whether they used the combined land-sea data (HadCRUT) or land-only temperatures (CRUTEM).” However, it is evident from SW05 that we were referring to the combined land-sea data, which is properly referred to as “global surface temperature” without any additional specification (land or ocean, north or south). We also indicated the webpage from which the data could be downloaded.

d) In paragraph 57 Benestad and Schmidt claim that: “The analysis using Lean [2000] rather than Scafetta and West’s own solar proxy as input is shown as thick black lines.” However, in our paper SW06a it is crystal clear that we too used Lean’s TSI proxy reconstruction. In particular, we used Lean 1995, which is not very different from Lean 2000. Benestad and Schmidt apparently do not know that, since 1978, Lean 1995 and Lean 2000 do not differ significantly from PMOD, because PMOD was built (by altering the published TSI satellite data) using Lean 1995 and Lean 2000 as guides. Moreover, we also merged the Lean data with ACRIM after 1978 to obtain an alternative scenario, as is evident in all our papers. The discontinuity problem raised by Benestad and Schmidt in merging two independent sequences (Lean’s proxy model and ACRIM) cannot be avoided, given that there are no TSI satellite data before 1978.
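The kind of splice at issue can be sketched numerically. The snippet below is purely illustrative: the series, the join year handling, and the overlap-mean offset matching are assumptions for the sketch, not necessarily the procedure Scafetta and West actually used. It shows why some step at the join is generally unavoidable when the two series differ in shape.

```python
import numpy as np

# Illustrative sketch (not the actual SW procedure): splice a proxy TSI
# model onto a satellite composite at a join year by matching their means
# over a short overlap. All numbers here are made up.
years = np.arange(1970, 1990)
proxy = 1365.0 + 0.02 * (years - 1970)             # hypothetical proxy model
satellite = 1366.0 + 0.5 * np.sin(years - 1978.0)  # hypothetical satellite data

join = 1978
overlap = (years >= join) & (years < join + 3)

# Shift the proxy so the two series agree on average over the overlap
offset = satellite[overlap].mean() - proxy[overlap].mean()
spliced = np.where(years < join, proxy + offset, satellite)

# A residual step at the join remains whenever the two series differ in
# shape, which is why the discontinuity cannot simply be avoided.
step = spliced[years == join][0] - spliced[years == join - 1][0]
```

Other offset conventions (matching a single year, or a longer overlap) move the step around but cannot eliminate it.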

3. In paragraphs 48-50 Benestad and Schmidt try to expose one of our presumed major mathematical mistakes. They state: “A change of 2*0.92 W/m2 between solar minimum and maximum implies a change in S of 1.84 W/m2 which amounts to 0.13% of S, and is greater than the 0.08% difference between the peak and minimum of solar cycle 21 reported by Willson [1997] and the differences between TSI levels of the solar maxima and minima seen in this study (~1.2 W/m2; Figure 6).” Benestad and Schmidt are referring to our estimate of the amplitude of the 11-year solar cycle, which we called A7,sun = 0.92 W/m2 in SW05. They claim that our estimate is not reasonable because, in their opinion, our calculations imply a change of TSI between solar maximum and solar minimum of twice our value A7,sun; so they write 2*0.92 = 1.84 W/m2, which would indeed be far too large. However, as is evident from our paper and from figure 4a in SW05, the value A7,sun refers to the peak-to-trough amplitude of the cycle, so it should not be multiplied by 2, as Benestad and Schmidt misunderstood. This is crystal clear from the factor ½ in the equation f(t) = ½ A sin(2πt) to which we refer, and which Benestad and Schmidt themselves report in their paragraph 48. It is hard to believe that two prominent scientists such as Benestad and Schmidt do not understand the meaning of a factor of ½! So, again, Benestad and Schmidt just needed to think more before writing a study that criticizes ours.
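The factor-of-½ point is easy to check numerically. A minimal sketch: if the cycle is written f(t) = ½ A sin(2πt), then the max-to-min range of f is A itself, so A is already the peak-to-trough change and doubling it double-counts the ½.

```python
import numpy as np

# The 11-year cycle written with the 1/2 factor: f(t) = (A/2)*sin(2*pi*t),
# where A is the peak-to-trough amplitude (0.92 W/m^2 in SW05).
A = 0.92
t = np.linspace(0.0, 1.0, 10001)  # one full cycle
f = 0.5 * A * np.sin(2 * np.pi * t)

# max - min recovers A directly; writing 2*0.92 = 1.84 W/m^2, as Benestad
# and Schmidt do, double-counts the 1/2 factor.
peak_to_trough = f.max() - f.min()
print(round(peak_to_trough, 2))  # 0.92
```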

4. Finally, Benestad and Schmidt’s paper is full of misleading claims that they are reproducing our analysis. Indeed, their paper is self-contradictory on this crucial issue. In paragraph 85 they claim that they “have repeated the analyses of Scafetta and West, together with a series of sensitivity tests to some of their arbitrary choices.” However, in their paragraph 76 they acknowledge: “In our emulation, we were not able to get exactly the same ratio of amplitudes, due to lack of robustness of the SW06a method and insufficient methods description.” It is quite singular that Benestad and Schmidt claim to have repeated our calculation while acknowledging at the same time that they did not succeed in doing so and, ironically, blaming us for their failure. It is not easy to find such tortuous reasoning in the scientific literature!

In fact, the reason Benestad and Schmidt did not succeed in repeating our calculation is that they misapplied the wavelet decomposition algorithm known as the maximum overlap discrete wavelet transform (MODWT). This is crystal clear in their figure 4, where it is evident that they applied the MODWT decomposition in a cyclical periodic mode. In other words, they implicitly impose that the temperature in 2001 equals the temperature in 1900, the temperature in 2002 equals the temperature in 1901, and so on. The decomposed blue and pink component curves at 2000 simply continue at 1900 in an uninterrupted cyclical periodic mode, as shown in the figure below, which is obtained by plotting their figure 4 side by side with itself:

Anyone expert in time-series processing can teach Benestad and Schmidt that it is not appropriate to impose a cyclical periodic mode on a non-stationary time series such as the temperature or TSI records, which show clear upward trends from 1900 to 2000. By applying a cyclical periodic mode, Benestad and Schmidt artificially introduce two large and opposite discontinuities in the records at 1900 and 2000, as the above figure shows at 2000. These large, artificial discontinuities at the two ends of the sequence completely disrupt the decomposition and force the algorithm to produce very large cycles near the two borders, as is clear in their figure 4. This severe error is responsible for Benestad and Schmidt finding unrealistic values of Z22y and Z11y that differ from ours by roughly a factor of three. In their paragraph 50 they found Z22y = 0.58 K/(W m-2), which is not realistic, as they themselves realize later, while we found Z22y = 0.17 K/(W m-2), which is more realistic.
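The boundary effect described above can be illustrated without reproducing either paper's full MODWT code. The sketch below is a stand-in (a simple moving average on a made-up trending series, not the actual wavelet filter or temperature data): circular filtering implicitly wraps the end of the record back onto its start, so the jump between the last and first values contaminates both ends, whereas reflection padding does not.

```python
import numpy as np

# Stand-in for the 1900-2000 record: a linearly trending series smoothed
# with an 11-point moving average, once with circular (periodic) boundary
# handling and once with reflection boundary handling.
n = 100
x = 0.01 * np.arange(n)        # linear upward trend
kernel = np.ones(11) / 11.0

# Circular filtering via the FFT: implicitly assumes x wraps around, i.e.
# the value at "2000" is followed by the value at "1900".
h = np.roll(np.pad(kernel, (0, n - kernel.size)), -5)  # center kernel at 0
periodic = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n=n)

# Reflection padding: the appropriate choice for a non-stationary series.
reflected = np.convolve(np.pad(x, 5, mode="reflect"), kernel, mode="valid")

# The periodic version is badly distorted at the borders.
err_periodic = abs(periodic[0] - x[0])   # large spurious edge excursion
err_reflect = abs(reflected[0] - x[0])   # small
print(err_periodic > 10 * err_reflect)   # True
```

In this toy case the periodic boundary pulls the smoothed value at the start of the record roughly half a trend-range away from the data, which is qualitatively the kind of spurious border cycle being described.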

This same data-processing error also causes the reconstructed solar signature in their figures 5 and 7 to trend downward toward a minimum in 2000, while the Sun was approaching one of its largest maxima. Compare their figures 4a (reported above), 5, and 7 with their figure 6, and compare them also with our figure 3 in SW06a and SW08! See the figure below, where I compare Benestad and Schmidt’s figures 6 and 7 and show that the results depicted in their figure 7 are non-physical.

Because of this severe and naïve error in applying the wavelet decomposition, Benestad and Schmidt’s calculations are “robustly” flawed. I cannot but encourage Benestad and Schmidt to carefully study a book on wavelet decomposition, such as the excellent work by Percival and Walden [2000], before attempting to use a complex and powerful algorithm such as the Maximum Overlap Discrete Wavelet Transform (MODWT) by simply loading a precompiled R package.

There are several other gratuitous claims and errors in Benestad and Schmidt’s paper, but the above is sufficient for this quick reply. I just wonder why the referees of that paper did not check Benestad and Schmidt’s numerous misleading statements and errors. It would be sad if the reason is that somebody is mistaking a scientific theory, the “anthropogenic global warming theory,” for an ideology that must be defended at all costs.

Nicola Scafetta, Physics Department, Duke University



218 Comments
August 4, 2009 2:43 pm

Steve Hempell (14:13:16) :
Is the raw data available for this plot? Heliospheric-Magnetic-Field-Since-1900.png
It is a subset of the full dataset at
http://www.leif.org/research/HMF-1835-now.xls

August 4, 2009 2:51 pm

Leif,
“This doesn’t matter if the data is faulty to begin with [which it is]. So discussing what is wrongly done to garbage is just silly.”
I’m going to have to agree with the GIGO point; however, the extremely low technical level of the mistake is what makes this so interesting. It shows such a complete misunderstanding of the algorithm’s function that it has left a smirk on my face all day. It’s astounding to me that someone who spends his blogging days proclaiming superior knowledge and intellect to all who will listen found such a large and obvious pile of poo to jump into.
I cannot wait to read his answer to this one, it seems he’s preparing himself for a disagreement/rebuttal so it will be interesting.
I have a question for you though. Your dataset clearly shows no trend, why wouldn’t they use that if it will guarantee the point they obviously want is proven? They can claim this is the correct data to use not that one and the consensus continues. Is there some stigma or disagreement with the derivation of your curve vs theirs?

August 4, 2009 2:51 pm

Dave (14:33:31) :
Actually, it makes Benestad and Schmidt doubly wrong. Which is itself informative.
But it also makes Scafetta wrong, and BS are not wrong simply because SW show they are wrong, since SW themselves are wrong.

Paul Vaughan
August 4, 2009 2:52 pm

John S. (09:22:26) “It’s very trendy in “peer-reviewed” circles to use some new analysis method, such as wavelet transforms.”
Most of the wavelet transforms I see published don’t show anything.
Wise words from Cleveland’s “Visualizing Data” website apply here:
“The data analyst needs to be hard-boiled in evaluating the efficacy of a visualization tool. It is easy to be dazzled by a display of data, especially if it is rendered with color or depth. Our tendency is to be misled into thinking we are absorbing relevant information when we see a lot. But the success of a visualization tool should be based solely on the amount we learn about the phenomenon under study.”
http://www.stat.purdue.edu/~wsc/visualizing.html
If a wavelet transform is going to show something useful, the analyst will (usually) know this before they produce it. We are far from the stage when most editors will know what to look for in a wavelet plot.
However, wavelet methods are powerful and it is important that people don’t simply reject wavelet plots because they do not understand wavelet methods. (This actually happens a lot.) More efficient education is the key. [Wavelet methods are actually simple. They will withstand the test of time.]

VG
August 4, 2009 2:55 pm

Leif Svalgaard (12:03:11) :
Here
http://users.comcen.com.au/~journals/bioinfo.htm
online-j-bioinformatics.com
OJB

Paul Vaughan
August 4, 2009 3:10 pm

Peer review this.
Peer review that.
…Whatever,
nevermind.
Block nothing.
Read everything.
and Integrate the elements of truth.
A more efficient education system would help eliminate related fears.

David L. Hagen
August 4, 2009 3:12 pm

See Dr. Nicola Scafetta’s web site.

pwl
August 4, 2009 3:18 pm

Bam! Wow, that’s quite the cutting response to a cutting error filled critique.

Steve Hempell
August 4, 2009 3:40 pm

Thanks Leif:
Is this stuff readily available on your website? I looked but could not find it before I asked. Just so I don’t have to bug you in the future! :]

August 4, 2009 3:54 pm

Jeff Id (14:51:01) :
I have a question for you though. Your dataset clearly shows no trend, why wouldn’t they use that if it will guarantee the point they obviously want is proven? They can claim this is the correct data to use not that one and the consensus continues. Is there some stigma or disagreement with the derivation of your curve vs theirs?
It goes back to the Hockey Stick. If you can show that there is no temperature variation over centuries, then clearly you don’t need the solar input. This was tried, but the HS is now so discredited that some climate variation must be admitted, e.g. coming out of the LIA; therefore the Sun is invoked, assuming [and selecting data with] a large TSI variation, to take us out of the LIA and to explain why temps rose from 1900 to 1960 [as much as from 1970 to now]. Therefore AGW needs and must have solar forcing [and a lot of it]. There is even an AGW argument that says that since the Sun has decreased its activity since the 1950s [cycle 19 in 1957 was the biggest ever], but temps have risen in spite of that, man must be the culprit. Take away the solar card and AGW is at a loss to explain the natural variability.

August 4, 2009 4:09 pm

Steve Hempell (15:40:14) :
Is this stuff readily available on your website? I looked but could not find it before I asked. Just so I don’t have to bug you in the future! :]
Everything I do is on my website. Click on ‘files’ at the bottom of the page. There is a lot of junk there too.

Curiousgeorge
August 4, 2009 4:20 pm

A little OT, sorry, but this is important.
http://www.dtnprogressivefarmer.com/dtnag/common/link.do?symbolicName=/free/news/template1&paneContentId=5&paneParentId=70104&product=/ag/news/topstories&vendorReference=cdc37f49-a12b-4710-8d92-f41326abfc58
Partial Quote:
Climate Bill a Land-Use Battle
Questions Loom Over Possible Acreage Shift From Crops
Sen. Mike Johanns, R-Neb., said an analysis by the American Farm Bureau Federation that said 40 million acres would come out of crop production. Farm Bureau had used EPA data to make that forecast. Johanns also wrote in a later op-ed piece that “another analysis predicts a loss of 78 million acres to trees. That’s nearly 20 percent of our nation’s total cropland — a staggering number.” Republicans on the Senate Agriculture Committee have also called for more hearings on the bill.
The climate bill, H.R. 2454, is designed to reduce greenhouse-gas emissions 17 percent from 2005 levels by 2020 and to continue to reach an 83 percent reduction by 2050. Agriculture is expected to be affected by possibly higher energy and input costs, particularly if coal plants convert to cleaner-burning natural gas. Natural gas is a key component in fertilizer production.
An EPA analysis of the climate bill last month stated overall land area in crops would shift to forest under the bill. But that analysis was completed before House lawmakers struck an agreement that created agricultural carbon offsets. The EPA has not released an updated study factoring in an agricultural carbon program pushed by House Agriculture Committee Chairman Collin Peterson, D-Minn.
“That modeling was done before the whole deal with Chairman Peterson,” said Fred Yoder, an Ohio farmer who has closely followed the legislation. “If you just run the numbers, there’s no way to get enough carbon credits to take land out of production.”

August 4, 2009 4:30 pm

Leif,
Thanks for the answer, fantastic really. Can you point me to the basis for the TSI you demonstrate vs theirs. Anything you can find would be very interesting.

SteveSadlov
August 4, 2009 4:30 pm

RE: “Take away the solar card and AGW is at a loss to explain the natural variability.”
Take away the direct solar card(s)?
Now, if we want to talk about some larger card, for example, interactions between the magnetosphere, general “fabric” of the near space abroad’s complete EM spectra, geophysical energetics, and, various oscillations in currents, terrestrial thermal gradients (in the oceans and air), etc, I am not even sure what the cards really are.
So much research yet to do to understand natural variability.

tallbloke
August 4, 2009 4:35 pm

Leif
There is even an AGW argument that says that since the Sun has decreased its activity since the 1950s [cycle 19 in 1957 was the biggest ever], but temps have risen in spite of that, man must be the culprit. Take away the solar card and AGW is at a loss to explain the natural variability.

That’s a dumb and easily disproved argument though. The average number of sunspots was higher in the ’90’s due to the short minima and steep rises/falls of short cycles than it was in the 50’s/60’s/70’s, even with the highest cycle recorded in the middle. And much much higher than it was at the turn of the century around 1900-1910. Which is why Scafetta and West’s phenomenological reconstruction is NOT ‘garbage’ as you so quaintly describe it.
Thanks for the info on Heliomagnetism by the way, it fits well with my LOD and Geomagnetic data.

Wansbeck
August 4, 2009 4:55 pm

Look at figure 6 in the BS09 paper. This is the TSI used in the paper. Whether it is right or wrong is another argument, it is the data used.
Look at figure 7 in the BS09 paper. This is the solar signature claimed to be the result of the TSI data that is used in the paper.
Scafetta has highlighted the figures as they approach the year 2000.
Reported TSI goes up yet solar signature goes down.
Scafetta claims that this non-physical behaviour is the result of a very basic mathematical error.
How did the authors miss this?
How did the peer reviewers miss this?
Do they not even take the time to look at the pictures?

John S.
August 4, 2009 4:56 pm

Paul Vaughn (14:52:36):
I agree completely! But the full-color plates make for a pretty interesting picture in visually dull journals. The effect is much like the centerfold in Playboy.

August 4, 2009 5:02 pm

Leif Svalgaard (16:09:46) :
Steve Hempell (15:40:14) :
Is this stuff readily available on your website? I looked but could not find it before I asked. Just so I don’t have to bug you in the future! :]
Everything I do is on my website. Click on ‘files’ at the bottom of the page. There is a lot of junk there too.

Perhaps Steve Hempell was looking for this information:
http://lasp.colorado.edu/sorce/tsi_data/six_hourly/sorce_tsi_L3_c06h_m29_v09_20030225_20090728.txt

slow to follow
August 4, 2009 5:04 pm

leif at (08:11:13) – re: your figure
Sorry Leif, I’m coming to this late and I don’t follow the solar stuff closely, but please can you expand on how the green “dTSI/dt per year” line is calculated? Is it based on an average value of the data sets plotted or referenced to a specific one? And over what period is the rate of change annualised?
Apologies if this is covered/obvious and I’ve missed it.

August 4, 2009 5:08 pm

tallbloke (16:35:38) :
That’s a dumb and easily disproved argument though. The average number of sunspots was higher in the ’90’s due to the short minima and steep rises/falls of short cycles than it was in the 50’s/60’s/70’s, even with the highest cycle recorded in the middle. And much much higher than it was at the turn of the century around 1900-1910. Which is why Scafetta and West’s phenomenological reconstruction is NOT ‘garbage’ as you so quaintly describe it.
Thanks for the info on Heliomagnetism by the way, it fits well with my LOD and Geomagnetic data.

There’s still something which has not been considered; the rate of photonic excitation and deexcitation effect. We could add also another effect provided by the interplanetary medium due to the mentioned photonic excitation and deexcitation which definitively modifies the temperature of the stratosphere and of the troposphere.

SSam
August 4, 2009 5:24 pm

Out on a limb here given the current discussion, but some of the topic makes me ask if there is a way to determine how much radiant heating comes from the photosphere (the observable disk) at ~5,778 K and ~32 arc minutes versus the corona at ~5×106 K, which covers a highly variable but comparable angular segment of the sky?

SSam
August 4, 2009 5:25 pm

Re: my last, that should read ~5×10^6 K

Mark T
August 4, 2009 5:30 pm

Wavelets aren’t really “new,” though I suppose compared to something like Fourier analysis, you could call them new. I did my MS thesis on wavelets in 1995, and at least one of my references was to Haar in 1910 (though Haar did not refer to his basis in terms of wavelets). Ingrid Daubechies’ “Ten Lectures on Wavelets” was published in 1992. Indeed, she credits Morlet, Grossman, Arens, Fourgeau, and Giard with the name circa 1982-1983. We (the signal processing community) have had plenty of time to understand and incorporate wavelet analysis methods into our general framework.
Mark

August 4, 2009 5:35 pm

The mistake resides in taking the bulk load of energy hitting the Earth’s atmosphere when we should monitor the intensity of insolation at the surface. It is useless to measure the amount of solar radiation at any point in outer space emitted on August 3, 2009, if there is something in the thermosphere or the ionosphere that could be modifying how such radiation reaches the surface, reducing or increasing the insolation.
We must not be so childish as to think that only the “skins” or the “peels” of ocean and land are heated up. Heat is energy in movement and it is dissipated or dispersed into the system that is at a lower temperature. Sometimes, when taking temperatures of large volumes of water, yes, we notice the surface layer is hotter than the subsurface layer, but the energy is flowing towards that colder subsurface layer, though we don’t “see” the process.

August 4, 2009 5:36 pm

tallbloke (16:35:38) :
“Take away the solar card and AGW is at a loss to explain the natural variability.”
That’s a dumb and easily disproved argument though.

It is not my argument, it is theirs… And explains why they like the old TSI-reconstructions.
Which is why Scafetta and West’s phenomenological reconstruction is NOT ‘garbage’ as you so quaintly describe it.
They also use a TSI reconstruction with a secular trend, so have the same problem. See their Figure 1A in http://www.acrim.com/Reference%20Files/Sun%20&%20Global%20Warming_GRL_2006.pdf
Thanks for the info on Heliomagnetism by the way, it fits well with my LOD and Geomagnetic data.
It shouldn’t as the HMF has nothing to do with either [I’m assuming the ‘Geomagnetic’ is the main field and not the transient activity]. In any case it doesn’t fit the Global Temperatures at all.
slow to follow (17:04:27) :
can you expand on how the green “dTSI/dt per year” green line is calculated?
That line shouldn’t be there as it is not relevant for the discussion, but I was reusing an old Figure and was just too lazy to remove the green line. It is simply the difference in TSI from one year to the next.
Jeff Id (16:30:22) :
Can you point me to the basis for the TSI you demonstrate vs theirs. Anything you can find would be very interesting.
In Froehlich, C. & J. Lean (2004), Solar Radiative Output and its Variability: Evidence and Mechanisms, Astron. & Astrophys. Rev., 12(4), 273, doi:10.1007/s00159-004-0024-1, they discuss the problem:
1) there is a clear solar cycle in TSI
2) is there in addition also a long-term, secular component?
In a presentation at the AGU Fall meeting in 2007 we argued that there is no evidence for a long-term change: http://www.leif.org/research/GC31B-0351-F2007.pdf
The Figure on page 9 is from Froehlich and Lean 2004.
That leaves the solar cycle variation.
If you look at the ‘trend’ in reconstructions of TSI since Hoyt and Schatten’s 1993 attempt to today, you’ll find that the long-term secular trend has slowly grown smaller and smaller. At the SORCE 2008 meeting Judith Lean reminded us that no long-term trend in TSI has been detected and questioned if such a trend even existed. Here is one of her slides: http://www.leif.org/research/TSI-LEAN2008.png
