Dr. Michael Mann, Smooth Operator

Guest Post by Willis Eschenbach

People sometimes ask why I don’t publish in the so-called scientific journals. Here’s a little story about that. Back in 2004, Michael Mann wrote a mathematically naive piece about how to smooth the ends of time series. It was called “On smoothing potentially non-stationary climate time series”, and it was published in Geophysical Research Letters in April of 2004. When I read it, I couldn’t believe how bad it was. Here is his figure illustrating the problem:

Figure 1a. [ORIGINAL CAPTION] Figure 1. Annual mean NH series (blue) shown along with (a) 40-year smooths of the series based on alternative boundary constraints (1)–(3). Associated MSE scores favor use of the ‘minimum roughness’ constraint.

Note the different colored lines showing different estimates of what the final averaged value will be, based on different methods of calculating the ends of the averages. The problem is how to pick the best method.

I was pretty naive back then. I was living in Fiji for one thing, and hadn’t had much contact with scientific journals and their curious ways. So I innocently thought I should write a piece pointing out Mann’s errors, and suggesting a better method. I append the piece I wrote back nearly a decade ago. It was called “A closer look at smoothing potentially non-stationary time series.”

My main insight in my paper was that I could actually test the different averaging methods against the dataset by truncating the data at various points. By doing that you can calculate what you would have predicted using a certain method, and compare it to what the true average actually turned out to be.

And that means that you can calculate the error for any given method experimentally. You don’t have to guess at which one is best. You can measure which one is best. And not just in general. You can measure which one is best for that particular dataset. That was the insight that I thought made my work worth publishing.

Now, here comes the story.

I wrote this, and I submitted it to Geophysical Research Letters at the end of 2005. After the usual long delays, they said I was being too hard on poor Michael Mann, so they wouldn’t even consider it … and perhaps they were right, although it seemed pretty vanilla to me. In any case, I could see which way the wind was blowing. Pointing out the feet of clay was not allowed.

I commented about my lack of success on the web. I described my findings over at Climate Audit, saying:

Posted Oct 24, 2006 at 2:09 PM

[Mann] recommends using the “minimum roughness” constraint … apparently without noticing that it pins the endpoints.

I wrote a reply to GRL pointing this out, and advocating another method than one of those three, but they declined to publish it. I’m resubmitting it.

w.

So, I pulled out everything but the direct citations to Mann’s paper and resubmitted it basically in the form appended below. But in the event, I got no joy on my second pass at publishing it either. They said no thanks, not interested, so I gave up. I posted it on my server at the time (long dead), put a link up on Climate Audit, and let it go. I was just a guy living in Fiji and working a day job, what did I know?

Then a year later, in 2007, Steve McIntyre posted a piece called “Mannomatic Smoothing and Pinned End-points”. In that post, he also discussed the end point problem.

And now, with all of that as prologue, here’s the best part.

In 2008, after I’d foolishly sent my manuscript entitled “A closer look at smoothing potentially non-stationary time series” to people who turned out to be friends of Michael Mann, Dr. Mann published a brand new paper in GRL. And here’s the title of his study …

“Smoothing of climate time series revisited”

I cracked up when I saw the title. Yeah, he better revisit it, I thought at the time, because the result of his first visit was Swiss cheese.

And what was Michael Mann’s main insight in his new 2008 paper? What method did he propose?

“In such cases, the true smoothed behavior of the time series at the termination date is known, because that date is far enough into the interior of the full series that its smooth at that point is largely insensitive to the constraint on the upper boundary. The relative skill of the different methods can then be measured by the misfit between the estimated and true smooths of the truncated series.”

In other words, his insight is that if you truncate the data, you can calculate the error for each method experimentally … curious how that happens to be exactly the insight I wasted my time trying to publish.

Ooooh, dear friends, I’d laughed at his title, but when I first read that analysis of “his” back in 2008, I must admit that I waxed nuclear and unleashed the awesome power that comes from splitting the infinitive. The house smelled for days from the sulfur fumes emitted by my unabashed expletives … not a pretty picture at all, I’m ashamed to say.

But before long, sanity prevailed, and I came to realize that I’d have been a fool to expect anything else. I had revealed a huge, gaping hole in Mann’s math to people who were obviously his friends … and while for me it was an interesting scientific exercise, for him it represented much, much more. He could not afford to leave the hole unplugged or have me plug it.

And since I had kindly told him how to plug the hole, he’d have been crazy to try something else. Why? Because my method worked … hard to argue with success.

The outcome also proved to me once again that I could accomplish most anything if I didn’t care who got the credit.

Because in this case, the sting in the tale is that at the end of the day, my insights on how to deal with the problem did get published in GRL. Not only that, they got published by the guy who would have most opposed their publication under my name. I gotta say, whoever is directing this crazy goat-roping contest we call life has the most outré, wildest sense of humor imaginable …

Anyhow, that’s why I’ve never pushed too hard to try to publish my work in what used to be scientific journals, but now are perhaps better described as popular science magazines. Last time I tried, I got bit … so now, I mostly just skip getting gnawed on by the middleman and put my ideas up on the web directly.

And if someone wants to borrow or steal or plagiarise my scientific ideas and words and images, I say more power to them, take all you want. I cast my scientific ideas on the electronic winds in the hope that they will take root, and I can only wish that, just like Michael Mann did, people will adopt my ideas as their own. There’s much more chance they’ll survive that way.

Sure, I’d prefer to get credit—I’m as human as anyone, or at least I keep telling myself that. So an acknowledgement is always appreciated.

But if you want to just take some idea of mine and run, sell it under another brand name, I say go for it, take all you want, because I’ve learned my lesson. The very best way to keep people from stealing my ideas is to give them away … and that’s the end of my story.

As always, my best wishes for each of you … and at this moment my best wish is that you follow your dream, you know the one I mean, the dream you keep putting off again and again. I wish you follow that dream because the night is coming and no one knows what time it really is …

w.

[UPDATE] In my above-mentioned comment on Steve McIntyre’s blog, I mentioned the analysis of Mannian smoothing by Willie Soon, David Legates, and Sallie Baliunas, entitled Estimation and representation of long-term (>40 year) trends of Northern-Hemisphere-gridded surface temperature: A note of caution. 

Dr. Soon has been kind enough to send me a copy of that study, which I have posted up here. My thanks to him, it’s an interesting paper.

=====================================================

APPENDIX: Paper submitted to GRL, slightly formatted for the web.

—————

A closer look at smoothing potentially non-stationary time series

Willis W. Eschenbach

No Affiliation

[1] An experimental method is presented to determine the optimal choice among several alternative smoothing methods and boundary constraints based on their behavior at the end of the data series. This method is applied to the smoothing of the instrumental Northern Hemisphere (NH) annual mean, yielding the best choice of these methods and constraints.

1. Introduction

[2] Michael Mann has given us an analysis of various ways of smoothing the data at the beginning and the end of a time series of data (Mann 2004, Geophysical Research Letters, hereinafter M2004).

These methods impose different constraints at the boundaries, and are called the “minimum norm”, “minimum slope”, and “minimum roughness” methods. They minimize, respectively, the zeroth, first, and second derivatives of the smoothed average at the boundary. M2004 describes the methods as follows:

“To approximate the ‘minimum norm’ constraint, one pads the series with the long-term mean beyond the boundaries (up to at least one filter width) prior to smoothing.

To approximate the ‘minimum slope’ constraint, one pads the series with the values within one filter width of the boundary reflected about the time boundary. This leads the smooth towards zero slope as it approaches the boundary.

Finally, to approximate the ‘minimum roughness’ constraint, one pads the series with the values within one filter width of the boundary reflected about the time boundary, and reflected vertically (i.e., about the ‘‘y’’ axis) relative to the final value. This tends to impose a point of inflection at the boundary, and leads the smooth towards the boundary with constant slope.” (M2004)
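The three padding schemes just quoted can be made concrete with a short Python sketch. This is my own reading of the M2004 descriptions, not code from the paper; the function name and interface (`pad_series`, a series `x`, a filter half-width `h`) are hypothetical.

```python
import numpy as np

def pad_series(x, h, constraint):
    """Extend a series h points past its final value, approximating the
    three boundary constraints described in M2004."""
    x = np.asarray(x, dtype=float)
    if constraint == "norm":
        # pad with the long-term mean of the series
        pad = np.full(h, x.mean())
    elif constraint == "slope":
        # reflect about the time boundary (mirror in time only)
        pad = x[-2:-h - 2:-1]
    elif constraint == "roughness":
        # reflect about the time boundary AND vertically about the final value
        pad = 2.0 * x[-1] - x[-2:-h - 2:-1]
    else:
        raise ValueError(f"unknown constraint: {constraint}")
    return np.concatenate([x, pad])
```

For x = [0, 1, 2, 3, 4] and h = 3, the "slope" pad is 3, 2, 1 (zero slope at the boundary), while the "roughness" pad is 5, 6, 7, continuing the final slope through an inflection point at the boundary.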

[3] He then goes on to say that the best choice among these methods is the one that minimizes the mean square error (MSE) between the smoothed data and the data itself:

“That constraint providing the minimum MSE is arguably the optimal constraint among the three tested.” (M2004)

2. Method

[4] However, there is a better and more reliable way to choose among these three constraints: minimize the error of the final smoothed data point in relation, not to the data itself, but to the actual final smoothed average (which will only be obtainable in the future). The minimum MSE used in M2004 minimizes the squared error between the estimate and the data points. But this is not what we want. We want the minimum mean squared error between the estimate and the final smoothed curve obtained from the chosen smoothing method. In other words, we want the minimum error between the smoothed average at the end of the data and the smoothed average that will actually be obtained in the future, when we have enough additional data to determine the smoothed average exactly.

[5] This choice can be determined experimentally, by realizing that the potential error increases as we approach the final data point. This is because as we approach the final data point, we have less and less data to work with, and so the potential for error grows. Accordingly, we can look to see what the error is with each method in the final piece of data. This will be the maximum expected error for each method. While we cannot determine this for any data nearer to the boundary than half the width of the smoothing filter, we can do so for all of the rest of the data. It is done by truncating the data at each data point along the way, calculating the estimated value of the final point in this truncated dataset using the minimum norm, slope, and roughness methods, and seeing how far they are from the actual value obtained from the full data set.
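The truncation experiment described above can be sketched in a few lines. This is my reading of the procedure, not code from the paper; it assumes a symmetric weight vector of odd length 2h+1 summing to 1, and a padding function of the kind a boundary constraint defines.

```python
import numpy as np

def gaussian_weights(width):
    """Symmetric Gaussian weights over `width` points, summing to 1."""
    t = np.arange(width) - width // 2
    w = np.exp(-0.5 * (t / (width / 6.0)) ** 2)
    return w / w.sum()

def endpoint_errors(x, weights, pad_fn):
    """Truncate the series at each interior point, estimate the smooth at
    that (temporary) endpoint using pad_fn, and compare with the true
    centered smooth computed from the full series."""
    x = np.asarray(x, dtype=float)
    h = len(weights) // 2
    errs = []
    # only points at least h from either end have a known true smooth,
    # and the truncated series must be long enough to pad
    for n in range(2 * h + 1, len(x) - h):
        padded = pad_fn(x[:n], h)
        est = float(np.dot(padded[n - 1 - h:n + h], weights))
        true = float(np.dot(x[n - 1 - h:n + h], weights))
        errs.append(est - true)
    return np.array(errs)
```

As a sanity check: on a purely straight-line series, a point-reflection ("minimum roughness" style) pad continues the line exactly, so every truncation error vanishes.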

[6] In doing this, a curious fact emerges: if we calculate the average using the “minimum roughness” method outlined above, the “minimum roughness” average at the final data point is just the final data point itself, regardless of the smoothing filter used. If we reflect the data about the time boundary and also vertically about the final value, the padded data is symmetrical around the final point in both the “x” and “y” directions, so any symmetric filter centered there returns the final data point unchanged. This can be seen in Fig. 1a of M2004:

ORIGINAL CAPTION: Figure 1. Annual mean NH series (blue) shown along with (a) 40-year smooths of the series based on alternative boundary constraints (1)–(3). Associated MSE scores favor use of the ‘minimum roughness’ constraint. (Mann 2004)

[7] Note that the minimum roughness method (red line) goes through the final data point. But this is clearly not what we want to do. Looking at Fig. 1, imagine a “smoothed average” which, for a data set truncated at any given year, must end up at the final data point. In many cases, this will yield wildly inaccurate results. If this method were applied to the data truncated at the high temperature peak just before 1880, for example, or the low temperature point just before that, the “average” would be heading out of the page. This is not at all what we are looking for, so the choice that minimizes the MSE between the data and the average (the “minimum roughness” choice) should not be used.

[8] Since the minimum roughness method leads to obvious errors, this leaves us a choice between the minimum norm and minimum slope methods. Fig. 2 shows the same data set with the point-by-point errors from the three methods (minimum norm, minimum slope, and minimum roughness) calculated for all possible points. (The error for the minimum roughness method, as mentioned, is identical to the data set itself.)

[9] To determine these errors, I truncated the data set at each year, starting with the year that is half the filter width after the start of the dataset. Then I calculated the value for the final year of the truncated data set using each of the different methods, and compared it to the actual average for that year obtained from the full data set. I am using a 41-year Gaussian average as my smoothing method, but the underlying procedure and its results are applicable to any other smoothing method. I have used the same dataset as Mann, the Northern Hemisphere mean annual surface temperature time series of the Climatic Research Unit (CRU) of the University of East Anglia [Jones et al., 1999], available at http://www.cru.uea.ac.uk/ftpdata/tavenh2v.dat.

Figure 2. Errors in the final data point resulting from different methods of treating the end conditions. The “minimum roughness” method error for the dataset truncated at any given year is the same as the data point for that year.

3. Applications

[10] The size of the errors of the three methods relative to the smoothed line can be seen in the graph, and the minimum slope method is clearly superior for this data set. This is verified by taking the standard deviation of each method’s point-by-point distance from the actual average. Minimum roughness has the greatest deviation from the average, a standard deviation of 0.110 degrees. The minimum norm method has a standard deviation of 0.065 degrees from the actual average, while the minimum slope’s standard deviation is the smallest at 0.048.

[11] Knowing how far the last point in the average of the truncated data wanders from the actual average allows us to put an error bar on the final point of our average. Here are the three methods, each with their associated error bar (all error bars in this paper show 3 standard deviations, and are slightly offset horizontally from the final data point for clarity).

Figure 3. Potential errors at the end of the dataset resulting from different methods of treating the end conditions. Error bars represent 3 standard deviations. The minimum slope constraint yields the smallest error for this dataset.

[12] Note that these error bars are not centered vertically on the final data point of each of the series. This is because, in addition to knowing the standard deviation of the error of each end condition, we also know the average of each error. Looking at Fig. 2, for example, we can see that the minimum norm end condition on average runs lower than the true Gaussian average. Knowing this, we can improve our estimate of the error of the final point. In this dataset, the center of the confidence limits for the minimum norm will be higher than the final point by the amount of the average error.

3.1 Loess and Lowess Smoothing

[13] This dataset is regular, with a data point for each year in the series. When data is not regular but has gaps, loess or lowess smoothing is often used. These are similar to Gaussian smoothing, but use a window that encompasses a certain number of data points, rather than a certain number of years.

[14] When the data is evenly spaced, both lowess and loess smoothing yield very similar results to Gaussian smoothing. However, the treatment of the final data points is different from the method used in Gaussian smoothing. With loess and lowess smoothing, rather than using less and less data as in Gaussian smoothing, the filter window stays the same width (in this case 41 years). However, the shape of the curve of the weights changes as the data nears the end.
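To illustrate that boundary behavior, here is a minimal, hypothetical sketch of a lowess-style endpoint estimate: a weighted straight-line fit (the degree-1 local regression case) over the last full window, with tricube weights. A real loess/lowess implementation also iterates with robustness weights, which this sketch omits.

```python
import numpy as np

def local_linear_endpoint(x, width):
    """Lowess-style value at the final point: fit a weighted straight line
    to the last `width` points, with tricube weights that fall off with
    distance from the endpoint, and evaluate the line at the endpoint.
    The window keeps its full width at the boundary; only the weight
    profile becomes one-sided."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x), dtype=float)
    tw, xw = t[-width:], x[-width:]
    d = (tw[-1] - tw) / (tw[-1] - tw[0])   # normalized distance from endpoint
    w = (1.0 - d ** 3) ** 3                # tricube weights
    # np.polyfit minimizes sum((w_i * r_i)^2), so pass sqrt of the weights
    slope, intercept = np.polyfit(tw, xw, 1, w=np.sqrt(w))
    return float(slope * tw[-1] + intercept)
```

On exactly linear data the fitted line reproduces the endpoint, so the end condition error for this estimator comes entirely from curvature and noise near the boundary.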

[15] The errors of the loess and lowess averaging can be calculated in the same way as before, by truncating the dataset at each year of the data and plotting the value of the final data point. Fig. 4 shows the errors of the two methods.

Figure 4. Lowess and loess smoothing along with their associated end condition errors.

[16] The end condition errors for lowess and loess are quite different point by point, but the average size of the errors is quite similar. Lowess has a standard deviation of 0.062 from the lowess smoothed data, and loess has a standard deviation of 0.061 from the loess smoothed data. Fig. 5 shows the Gaussian minimum slope (the smallest error of the three M2004 end conditions), and the lowess and loess smoothings, with their associated error bars.

Figure 5. Gaussian, lowess and loess smoothing along with their associated error bars. Both lowess and loess have larger errors than the Gaussian minimum slope error.

[17] Of the methods tested so far, the error results are as follows:

METHOD                           Standard Deviation of Error

Gaussian Minimum Roughness       0.111

Gaussian Minimum Norm            0.065

Lowess                           0.062

Loess                            0.061

Gaussian Minimum Slope           0.048

[18] Experimentally, therefore, we have determined that of these methods, for this data set, the Gaussian minimum slope method gives us the best estimate of the smoothed curve which we will find once we have enough additional years of data to determine the actual shape of the curve for the final years of data.

3.2 Improved and Alternate Methods

[19] At least one better method of dealing with the end conditions exists. I call it the “minimum assumptions” method, as it makes no assumptions about the future state of the data. It simply rescales the result of the Gaussian smoothing to compensate for the weight of the missing data. Gaussian smoothing works by multiplying each data point within the filter width by a Gaussian weight. This weight is greatest for the central point of the filter. From there it decreases in a Gaussian “bell-shaped” curve for points further and further away from the central point. The weights are chosen so that the total of the weights summed across the width of the filter adds up to 1.

[20] Let us suppose that as the center of the filter approaches the end of the dataset, the final two weights do not have data associated with them because they are beyond the end of the dataset. The Gaussian average is calculated in the usual manner, by multiplying each data point with its associated weight and summing the weighted data. The final two points, of course, do not contribute to the total, as they have no data associated with them.

[21] However, we know the total of the weights for the existing data points. Normally, all of the weights would add up to 1, but as we approach the end of the data there are missing data points within the filter width. The total weight of the existing data points might be, say, only 0.95 instead of 1. Knowing that we only have 95% of the correct weight, we can approximate the correct total by dividing the sum of the existing weighted data points by 0.95. The net effect of this is a shifted weighting which, as the final data point is approached, moves the center of the weighting function further and further forward toward the final data point.
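A sketch of this renormalization, under the same assumptions as before (a symmetric weight vector of odd length summing to 1); the function name is my own, not from the manuscript.

```python
import numpy as np

def min_assumptions_endpoint(x, weights):
    """'Minimum assumptions' estimate of the smooth at the final point:
    apply only the weights that fall on existing data (the center weight
    and the h weights before it), then divide by their sum so the
    effective weights again total 1."""
    x = np.asarray(x, dtype=float)
    h = len(weights) // 2
    avail_w = np.asarray(weights[:h + 1])  # weights for offsets -h .. 0
    return float(np.dot(x[-(h + 1):], avail_w) / avail_w.sum())
```

As the paragraph above says, dividing by the available weight effectively shifts the center of gravity of the weighting toward the final data point, without assuming anything about data that does not yet exist.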

[22] The standard deviation of the error of the minimum slope method, calculated earlier, was 0.048. The standard deviation of the error of the minimum assumptions method is 0.046. This makes it, for this data set, the most accurate of the methods tested. Fig. 6 shows the difference between these two methods at the end of the data set.

Figure 6. Gaussian minimum slope and minimum assumptions error bars. The minimum assumptions method provides the better estimate of the future smoothed curve.

[23] We can also improve upon an existing method. The obvious candidate for improvement is the minimum norm method. It has been calculated by padding the data with the average of the full dataset, from the start to the end of the data. However, we can choose an alternate interval on which to take our average. We can calculate (over most of the dataset) the error resulting from any given choice of interval. This allows us to choose the particular interval that will minimize the error. For the dataset in question, this turns out to be padding the end of the dataset with the average of the previous 5 years of data. Fig. 7 shows the individual errors from this method, compared with the minimum assumptions method. Since the results from the two very different methods are quite similar, this increases confidence in the conclusion that these are the best of the alternatives.

Figure 7. Smoothed data (red), minimum assumptions errors (green), tuned minimum norm (previous 5-year average) errors (blue)

[24] The standard deviation of the error from the minimum norm with a 5-year average is slightly smaller than from the minimum assumptions method, 0.045 versus 0.046.
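The tuned variant is a one-line change to the minimum-norm padding: use the mean of the last k points instead of the full-series mean (k = 5 is the value tuned for this dataset). A hypothetical sketch:

```python
import numpy as np

def tuned_min_norm_endpoint(x, weights, k=5):
    """Minimum-norm endpoint estimate, padding with the mean of the
    last k data points rather than the mean of the whole series."""
    x = np.asarray(x, dtype=float)
    h = len(weights) // 2
    padded = np.concatenate([x, np.full(h, x[-k:].mean())])
    window = padded[len(x) - 1 - h:]   # 2h+1 points centered on the endpoint
    return float(np.dot(window, weights))
```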

4. Discussion

[25] I have presented a method for experimentally determining which of a number of methods yields the closest approximation to a given smoothing of a dataset at the ends of the dataset. The method can be used with most smoothing filters (Gaussian, loess, low-pass, Butterworth, or other filter). The method also experimentally determines the average error and the standard deviation of the error of the last point of the dataset. Although the Tuned Minimum Norm method yields the best results for this dataset, this does not mean that it will give the best results for other datasets. It also does not mean that the Tuned Minimum Norm method is the best smoothing method possible; there may be other smoothing methods out there, known or unknown, which will give a better result on a given dataset.

[26] The method for experimentally determining the smoothing method with the smallest end-point error is as follows:

1)  For each data point for which all of the data is available to determine the exact smoothed average, determine the smoothed result that would be obtained by each candidate method if that data point were the final point of the data. (While this can be done by truncating the data at each point, padding the data if required, and calculating the result, it is much quicker to use a modified smoothing function which simply treats each data point as if it were the last point of the dataset and applies the required padding.)

2)  For each of these data points, subtract the actual smoothed result of the given filter at that point from the smoothed result of treating that point as if it were the final point. This gives the error of the smoothing method for the series if it were truncated at that data point.

3)  Take the average and the standard deviation of all of the errors obtained by this analysis.

4)  Use the standard deviation of these errors to determine the best smoothing method.

5)  Use the average and the standard deviation of these errors to establish confidence limits at the final point of the smoothed data.
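Putting the five steps together, here is a sketch of the selection procedure, assuming each candidate method is supplied as a function mapping a truncated series and the weight vector to an endpoint estimate; the names are mine.

```python
import numpy as np

def rank_methods(x, weights, methods):
    """For each candidate endpoint method, collect the truncation errors
    (steps 1-2), then report their mean and standard deviation (step 3).
    The smallest standard deviation marks the preferred method for this
    dataset (step 4), and mean +/- 3 sd gives the confidence band for
    the final smoothed point (step 5)."""
    x = np.asarray(x, dtype=float)
    h = len(weights) // 2
    report = {}
    for name, estimate in methods.items():
        errs = []
        for n in range(2 * h + 1, len(x) - h):
            est = estimate(x[:n], weights)
            true = float(np.dot(x[n - 1 - h:n + h], weights))
            errs.append(est - true)
        errs = np.array(errs)
        report[name] = (errs.mean(), errs.std())
    return report
```

For example, a "pin the last data point" method (the minimum-roughness behavior) scores zero error on a purely linear series but, as shown earlier, the largest error on the real temperature data.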

5. Conclusions

1)  The Minimum Roughness method will always yield the largest standard deviation of the endpoint error in relation to the smoothed data and is thus the worst method to choose.

2)  For any given data set, the best method can be chosen by selecting the method with the smallest standard deviation of error as measured on the dataset itself.

3)  The use of an error bar at the end of the smoothed average allows us to gauge the reliability of the smoothed average as it reaches the end of the data set.

References

Jones, P. D., M. New, D. E. Parker, S. Martin, and I. G. Rigor (1999), Surface air temperature and its changes over the past 150 years, Reviews of Geophysics, 37, 173–199.

Mann, M. E. (2004), On smoothing potentially non-stationary climate time series, Geophysical Research Letters, 31, L07214, 15 April 2004.

March 30, 2013 1:13 pm

Another reason I won’t call him a recipient of the PhD degree. He has forfeited that in oh so many ways. Mr. Mann needs a job. I recommend janitorial services at PSU.

Roy
March 30, 2013 1:19 pm

If Michael Mann just happened to use the method proposed by Willis Eschenbach without attributing it to Willis, then that would be a case of bad manners. However, if the method itself was the main subject of Mann’s article, then this case of apparent borrowing was either one of roughly simultaneous discovery or one of outrageous plagiarism.
Roughly simultaneous discovery is not uncommon in science and can sometimes lead to disputes about priority, two well-known cases being the invention of calculus by Newton and Leibniz, and the discovery of evolution by means of natural selection by Darwin and Wallace. The former dispute was quite a bitter one, but one involving associates of the two great men more than the discoverers themselves. In the case of Darwin and Wallace the situation was amicably resolved before any dispute could develop, but perhaps the way in which credit was apportioned was slightly unfair to Wallace.
From what Willis wrote it does not seem that Mann independently hit on the same idea but, to be scrupulously fair, shouldn’t Mann be offered the opportunity to explain in this blog (or anywhere else if he would prefer it) where exactly he got the idea from?

Mpaul
March 30, 2013 1:24 pm

Willis, you should consider it an honor to be plagiarized by a Nobel Prize winner.

scf
March 30, 2013 1:34 pm

I’ve had a similar experience with scientific journals, being out of the academic establishment and having submitted papers. Journals are scientific cliques, with submissions that come from outside sources being treated negatively. Just like in many fields, it’s not what you know, it’s who you know. If you don’t know all the tricks, the mannerisms, the types of language, the precise structure that academics have constructed for themselves and expect in a submission, your paper will go nowhere, regardless of the content.

The Iceman Cometh
March 30, 2013 1:36 pm

Don Easterbrook says: March 30, 2013 at 9:02 am “I spent huge amounts of time over a two-year period, had all the papers peer reviewed by world experts, got preliminary approval by GSA to proceed, and submitted the final draft ready for publication. At that point, the GSA editor informed me that because the papers did not support the ‘consensus’ they would not publish it”
Some years ago I came across an American Geophysical Union paper that had some appalling flaws in it. I drew the editor’s attention to the problem, and received a very curt brushoff. I then found the editor had been a co-worker of the author of the paper, and, worse still, “The editor has complete responsibility and authority to accept a submitted article for publication or to reject it” – there was no requirement for him to have it reviewed, and indeed it had been accepted for publication within days of submission.
If we do not speak up about the corruption of the process of scientific publication, science will lose and we will all be the worse for our silence.

Sean
March 30, 2013 1:44 pm

Willis, it is clear that you have evidence that Mann committed a worse offense with his second paper – he plagiarized and stole from another paper, and he failed to give credit.
Among other things this is grounds for the journal to withdraw Mann’s second paper for plagiarism. As for Mann’s university – there are codes of conduct for academic fraud like this, I am sure that he should be reprimanded at minimum, terminated for cause at max.
You should file complaints with both his university and with the journal.

Paul Vaughan
March 30, 2013 1:47 pm

Something I noticed a few weeks ago and found time to summarize yesterday:
multidecadal heliosphere structure, solar cycle deceleration, & terrestrial climate
Superposed is figure 5 (p.198) from section 8 (pp.196-198) of:
Obridko, V.N.; & Shelting, B.D. (1999). Structure of the heliospheric current sheet derived for the interval 1915-1996. Solar Physics 184, 187-200.
http://helios.izmiran.troitsk.ru/hellab/Obridko/189.pdf
“[…] quasi-periodic oscillations […] The convergence region of the field lines moves up and down with the same period. […] results in secular variations of the entire structure of the heliosphere.”
Compare with Figure 4:
Wyatt, M.G.; Kravtsov, S.; & Tsonis, A.A. (2011). Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability. Climate Dynamics.

John Tillman
March 30, 2013 2:05 pm

Emailed link to this story to National Review for use as ammo in Mann-Steyn case.

Snotrocket
March 30, 2013 2:19 pm

Great post, Willis! Up until now I had always thought that popcorn futures were over-priced and, most certainly, over-subscribed. But now, I think they are a good punt.
It is obvious that CG1, CG2 (plus whitewash inquiries) have had no effect at denting the hubris that is the Green Reich. Well, now, if Mann doesn’t come after you with suit, he will surely demonstrate that which he is: a WUSS (as we say in UK) of the first order. Not to mention, as a quote I found from back in the ’30s (which could have been about Mann): “…a willful, obstinate, unsavory, obnoxious, pusillanimous, pestilential, pernicious, and perversable liar” Yep. I think that covers it…

BarryW
March 30, 2013 2:27 pm

I have a serious problem with centered smoothing for end conditions. You are trying to predict the result based on information that you don’t have for the end points. Creating smoothed points where you know both the a priori and a posteriori data doesn’t tell you how to predict smoothed values where you only know the previous time series values. I’ve wondered about this for a while, but have been too lazy to look at it. The question is, given the previous time series values, can I predict what the next average value would be?

Bart
March 30, 2013 2:31 pm

DirkH says:
March 30, 2013 at 12:52 pm
“No free lunch by going to the frequency domain.”
Definitely not. The advocated method is a generally inferior means of low pass filtering, as doing it in the digital domain means you are not actually eliminating the entire frequency band you are trying to take out, just the components at those discrete frequencies. And, if you haven’t properly zero-padded the data, you are going to get aliasing from the circular convolution with the effective response.
A far superior method is to use the power spectral density as a means of identifying a model, then applying an optimal filter algorithm to determine the behavior of the major components of that model.

rogerknights
March 30, 2013 2:43 pm

This is an opportunity to finally “nail” that slippery charlatan, who has slithered out of other tight spots. Don’t let this opportunity go to waste. I urge those with experience in filing complaints to communicate with Willis by sending him drafts of letters he could send and FOIAs he could file, and by offering him assistance in pursuing the matter.
BTW, if Mann gets nailed, this would be a help to NRO, as it would reduce his legal presumption of credibility.

Jack
March 30, 2013 2:46 pm

With the advantage of hindsight, there is no scientist that I am aware of who has been remembered for being wrong. But there are times when a man’s name becomes an historical artifact: think Benedict Arnold, think Quisling. I am sure there are others. Names that come to symbolize a pejorative noun rather than an honored individual.
What will Mann’s be, I wonder? It seems to me that his strategy is to delay the inevitable disgrace until he retires.

clipe
March 30, 2013 3:00 pm

Smoothing is a perfectly good word as a noun, adjective, verb (v.tr, v.intr) and, reaching here, an adverb.
http://www.ecowho.com/foia.php?file=4578.txt&search=smoothing+revisited

March 30, 2013 3:01 pm

Paul Linsay says:
March 30, 2013 at 10:31 am
I always grind my teeth when I look at a climate paper because of the smoothing. It’s not the algorithm used, it’s the very fact that it’s done at all that is upsetting. The data should be allowed to “speak for itself”. Smoothing imposes a model on the reader that may be completely invalid. Where is it written that all the short period fluctuations don’t have any information in them? Smoothing creates false impressions of trends where none may exist. The correct rule for smoothing is DON’T.

Agree. Although I don’t know why you say ‘the reader’. The model is imposed on the data. Specifically, the model is that there is a forcing signal (from GHGs/CO2) plus natural-variability noise, and smoothing removes the natural-variability noise to expose the forcing signal. Complete equine manure. Putting the proverbial cart before the horse: it assumes the data supports the (forcing) model, when the first question should be whether the data supports the model/theory at all.
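The “false trends” claim has a classical name, the Slutsky–Yule effect: running a moving average over pure white noise manufactures a slowly wandering series that looks like trends and cycles. A minimal sketch (the window width and random seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=500)  # pure white noise: no trend, no signal at all
w = 25
smooth = np.convolve(x, np.ones(w) / w, mode="valid")

def lag1(v):
    """Lag-1 autocorrelation: near 0 for white noise, near 1 for 'trendy' series."""
    v = v - v.mean()
    return np.dot(v[:-1], v[1:]) / np.dot(v, v)

print("raw      lag-1 autocorrelation:", round(lag1(x), 2))
print("smoothed lag-1 autocorrelation:", round(lag1(smooth), 2))
```

The raw noise has lag-1 autocorrelation near zero; the smoothed version of the exact same noise is strongly autocorrelated (for a boxcar of width w the theoretical value is (w − 1)/w, here 0.96), which is why smoothed noise so easily reads as a “trend”.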

Joseph Bastardi
March 30, 2013 3:01 pm

Can I say one thing as a PSU grad in meteo 1978
HELLLLLLLLLLLLLLLLLLLLLLLPPPPPPPPPPPPPPP
thank you for letting me get that out

george e. smith
March 30, 2013 3:05 pm

Well Willis, that is quite a story. I can’t say that I fully understand all the ramifications of your method, but your presentation is very readable.
I am personally of the opinion that the very best representation of experimental data is the actual raw data itself. Statistication can only remove information, not add it.
I would also comment that ….Hoser….’s proposition of using Fourier transform filtering is one that I find highly meritorious. If you are going to remove information, what better way is there to know exactly what it is you are removing?
Fourier transform filtering is widely used in image processing, and there you do like to know what you are throwing away.
As for your experience as regards plagiarism, your example is quite shocking to say the least, and one wonders where on the Richter scale, your eruption ended up.
I was once invited to add my name, as a co-inventor on a patent application in the process of being filed by a person who shall remain nameless, and who happened to be leader of the project; in effect my supervisor.
I declined, saying the last thing I would want is to have my name on a patent, that was somebody else’s invention. So he filed for the patent (which issued) with him as sole inventor.
My company official lab notebook, maintained strictly for IP documentation, contained a complete and full description of the invention, dated at least five years before we decided to finally design a product based on the idea, at which point I was assigned to his group to work on the project. I still have one of the two full production-ready prototypes of the product, which was then once again killed, and never did see the light of day. The company eventually sold off the associated business.
Virtually all of my fellow employees were fully cognisant of the fact that I had documented it years earlier. The patent didn’t bother me. The loss of a useful and advanced product did.
So I fully understand your ire at being so Mannhandled Willis. Some people just have no shame at all.

Severian
March 30, 2013 3:13 pm

WE, you’ve got a good approach in that you want the good science to be out there but lack the overweening egotism of many, IMHO a sign of a mature person, but hard to do. I faced similar issues in engineering as a young, just-out-of-school kid. I slaved over an analysis of the ballistic models of a system, made some huge improvements in accuracy with minimal mods to the code, and watched as it got called the Joe Blow algorithm, with Joe Blow being my boss. I was similarly outraged, and in private I’m sure my vocabulary matched yours. After a while I realized that I could accomplish a lot if I injected ideas into other people, supported them when they pushed them, and didn’t care if I got credit. It’s also a way to avoid blame if it craters! I managed to get a lot done that way, and eventually people figured out I was a good sharp guy and a “team player” (I really was not, but if it made them happy to think so) and I was pretty successful in my career. After I grew up a little I realized the important thing to me was whether the system worked well; I got my ego strokes out of that instead of kudos.
The fact that the climate “science” community is that insular and averse to the facts if they disagree with the consensus is the real problem. If this was some backwater theoretical physics realm it wouldn’t matter as much as it does when sloppy science is being Lysenkoized to rob people of wealth, health, and life.

James Fosser
March 30, 2013 3:18 pm

Just how many persons do not publish work that would advance science? I never publish because of an incident several years ago. I was working with five other students on a simple project to examine mutations in a gene associated with Marfan Syndrome (I had never heard of it, and the course was an elective for my degree). The short course ran over one semester of three months, and I did all the work on my own, without any liaison with the rest of the ”team”, in my usual fashion (no modesty intended). I stumbled across a simple diagnostic way to detect mutations in any gene. (For the course, the university gave the supervisor $300 per student for materials, despite the cost of the course per student being around $10,000.) After the course (I got a lousy mark) I threw my work into a bottom drawer, went on to other matters, and forgot it. A year or two later a friend who knew that I had found an easy way to detect gene mutations said that she had seen something very similar in a Peoples Republic of China scientific journal. I looked up the paper and lo and behold! The main author was one of those fellow students on that previous course, and almost every single word in the paper, plus methods and materials, was lifted from my Marfan Syndrome assignment hand-in! But not a mention of me! I was not angry, because I realised that perhaps my low mark was because I had also questioned the supervisor (whose life work was the Marfan Syndrome) over other matters relating to her research, suggesting she might be on the wrong track (I believe that sensibilities and science are spelt differently). Anyhow, as that Chinese paper was peer reviewed and the work it contained considered worthy of being published, I was happy. I also realised that that plagiarising ex-student (who was then a Doctor and whose name appeared in other papers) inhabited a world to which I did not wish to belong.
Consequently, I have now constructed a home laboratory, work completely on my own, and place all my discoveries into that same bottom drawer (and several of them would revolutionise medical science), plus I have learned never to trust my fellow humans.

DR
March 30, 2013 3:20 pm
March 30, 2013 3:21 pm

Willis:
All of us who have been involved in academic circles of researchers who appear to be governed by a culture of “publish or perish” have had ideas ripped off by supposed colleagues in the pursuit of science. The professional societies use the publication of research to demonstrate their involvement in promoting science within their constituency of subscribers to their journals, and to maintain their control of the science through their editors, committees, boards of trustees, and crony reviewers. They sponsor technical meetings for the researchers to gather as a scientific community to discuss the agenda of ideas that they want to promote. In this cultural environment the free discussion of ideas is not free but very guarded, because researchers with new ideas might find that their ideas appear in another researcher’s next proposal for support. When have you ever seen a reference in a paper to an idea informally suggested by someone else? Sharing ideas is not very open; rather, these forums are used to criticize the research work of others against the backdrop of their own research endeavors. Why would anyone want to do this? It is perceived to be the only game in town to gain recognition as a scientist.
You are a noble exception. You have openly shared your ideas on this and other web sites about climate science. I applaud your philosophy of putting a concept out there for people to debate and to learn from. Instead of limiting your audience to readers of a particular professional society, everyone can get access without joining any society and paying the very high prices for subscriptions or costs to climb a pay wall.
The notion that this amounts to publishing without peer review is also fallacious. An idea is published in electronic print and is open to everyone to ask questions, to offer valid criticisms or comments, to transmit quickly to others who may be interested, and it is a barometer that can be used to measure the writer’s credibility and diligence. In the current scientific publishing environment surrounding the issue of climate, waiting for a journal to publish the results of research work can take months or years before the research appears in print. There is no delay on this BLOG.
As you are also painfully aware, publishing ideas on a BLOG leads to some comments that are useless, misleading, and name-calling. I appreciate that you try to answer, clarify, amend, or apologize in response to critical comments. On this BLOG, many of the authors of comments are responsible people with sufficient science knowledge to offer comments which are worth reading and contribute to a better understanding even when the comment is critical of your idea. Thank you for adopting a proactive posture about scientific dialog in sharing ideas rather than maintaining ownership through publication. What could be nobler? “Keep on truckin’”.

Chad Wozniak
March 30, 2013 3:24 pm

H –
I’d say the common man to whom you refer is quite capable of seeing the fallacy in AGW if he is just given two simple pieces of information: (1) temps have declined overall since the 1930s (i.e., for the past 80 years, not even just the last 16); and (2) the infinitesimal-ness of man’s contribution to an infinitesimal (that is, if even identifiable) factor in climate change. Q.E.D.
The CRL (criminal reactionary left) news media should be COERCED (to use the NYT term for what should be done to climate skeptics – tit for tat! a dose of their own medicine!) to reveal these facts to the public

rogerknights
March 30, 2013 3:31 pm

PPS: The third step would be to file a formal complaint with the AGU about the behavior of its editors and peer reviewers. I don’t see how a Guilty verdict could be avoided. That would make Mann the scholarly equivalent of a “convicted felon.” More important, it would undeniably expose the “Teamwork” that goes on behind the scenes in climatology, casting all its procedures into doubt.

george e. smith
March 30, 2013 3:33 pm

“””””…..Bart says:
March 30, 2013 at 2:31 pm
DirkH says:
March 30, 2013 at 12:52 pm
“No free lunch by going to the frequency domain.”
Definitely not. The advocated method is a generally inferior means of low pass filtering, as doing it in the digital domain means you are not actually eliminating the entire frequency band you are trying to take out, just the components at those discrete frequencies. And, if you haven’t properly zero-padded the data, you are going to get aliasing from the circular convolution with the effective response…………”””””
I get the point you and DirkH raise. Frequency domain band limiting would be a good filtering method, especially for eliminating aliasing due to improper sampling in the first place. The problem being that the inadequacy of the raw data means you can’t first get a correct Fourier transform from it.
I’ve never been a fan of the FFT, although it is efficient for those who have to use it, but I have always been suspicious of how you trust a spectrum derived from an often very short list of samples.
One advantage of Fourier transform filtering in the optical imaging realm is that the optical Fourier transform is analog, not digital, so you do get a more accurate spectrum (if your optics are good enough).
I guess Fourier transform processing, is useful if simply looking for the presence of certain components, but has the traps you both raise.
Well that reinforces my belief that the raw data, is the most accurate information.

ferdberple
March 30, 2013 3:44 pm

Pamela Gray says:
March 30, 2013 at 11:16 am
Ferdberple, how does copyright mix with plagiarism? I think we are talking two different things here.
=======
my post was in response to those saying there was recourse thru copyright. the copyright faq says otherwise.
as to plagiarism, I do think that is an avenue that could be pursued. all that is required is a letter of complaint. however, unless one can get hold of corroborative evidence… which could be why there is such a battle to withhold emails.
