Dr. Michael Mann, Smooth Operator

Guest Post by Willis Eschenbach

People sometimes ask why I don’t publish in the so-called scientific journals. Here’s a little story about that. Back in 2004, Michael Mann wrote a mathematically naive piece about how to smooth the ends of time series. It was called “On smoothing potentially non-stationary climate time series”, and it was published in Geophysical Research Letters in April of 2004. When I read it, I couldn’t believe how bad it was. Here is his figure illustrating the problem:

[Figure: smoothing fig 1]

Figure 1a. [ORIGINAL CAPTION] Figure 1. Annual mean NH series (blue) shown along with (a) 40 year smooths of series based on alternative boundary constraints (1)–(3). Associated MSE scores favor use of the ‘minimum roughness’ constraint.

Note the different colored lines showing different estimates of what the final averaged value will be, based on different methods of calculating the ends of the averages. The problem is how to pick the best method.

I was pretty naive back then. I was living in Fiji for one thing, and hadn’t had much contact with scientific journals and their curious ways. So I innocently thought I should write a piece pointing out Mann’s errors, and suggesting a better method. I append the piece I wrote back nearly a decade ago. It was called “A closer look at smoothing potentially non-stationary time series.”

My main insight in my paper was that I could actually test the different averaging methods against the dataset by truncating the data at various points. By doing that you can calculate what you would have predicted using a certain method, and compare it to what the true average actually turned out to be.

And that means that you can calculate the error for any given method experimentally. You don’t have to guess at which one is best. You can measure which one is best. And not just in general. You can measure which one is best for that particular dataset. That was the insight that I thought made my work worth publishing.

Now, here comes the story.

I wrote this, and I submitted it to Geophysical Research Letters at the end of 2005. After the usual long delays, they said I was being too hard on poor Michael Mann, so they wouldn’t even consider it … and perhaps they were right, although it seemed pretty vanilla to me. In any case, I could see which way the wind was blowing. I was pointing out the feet of clay, and that was not allowed.

I commented about my lack of success on the web. I described my findings over at Climate Audit, saying:

Posted Oct 24, 2006 at 2:09 PM

[Mann] recommends using the “minimum roughness” constraint … apparently without noticing that it pins the endpoints.

I wrote a reply to GRL pointing this out, and advocating another method than one of those three, but they declined to publish it. I’m resubmitting it.

w.

So, I pulled out everything but the direct citations to Mann’s paper and resubmitted it basically in the form appended below. But in the event, I got no joy on my second pass at publishing it either. They said no thanks, not interested, so I gave up. I posted it on my server at the time (long dead), put a link up on Climate Audit, and let it go. I was just a guy living in Fiji and working a day job, what did I know?

Then a year later, in 2007, Steve McIntyre posted a piece called “Mannomatic Smoothing and Pinned End-points”. In that post, he also discussed the end point problem.

And now, with all of that as prologue, here’s the best part.

In 2008, after I’d foolishly sent my manuscript entitled “A closer look at smoothing potentially non-stationary time series” to people who turned out to be friends of Michael Mann, Dr. Mann published a brand new paper in GRL. And here’s the title of his study …

“Smoothing of climate time series revisited”

I cracked up when I saw the title. Yeah, he better revisit it, I thought at the time, because the result of his first visit was Swiss cheese.

And what was Michael Mann’s main insight in his new 2008 paper? What method did he propose?

“In such cases, the true smoothed behavior of the time series at the termination date is known, because that date is far enough into the interior of the full series that its smooth at that point is largely insensitive to the constraint on the upper boundary. The relative skill of the different methods can then be measured by the misfit between the estimated and true smooths of the truncated series.”

In other words, his insight is that if you truncate the data, you can calculate the error for each method experimentally … curious how that happens to be exactly the insight I wasted my time trying to publish.

Ooooh, dear friends, I’d laughed at his title, but when I first read that analysis of “his” back in 2008, I must admit that I waxed nuclear and unleashed the awesome power that comes from splitting the infinitive. The house smelled for days from the sulfur fumes emitted by my unabashed expletives … not a pretty picture at all, I’m ashamed to say.

But before long, sanity prevailed, and I came to realize that I’d have been a fool to expect anything else. I had revealed a huge, gaping hole in Mann’s math to people who were obviously his friends … and while for me it was an interesting scientific exercise, for him it represented much, much more. He could not afford to leave the hole unplugged or have me plug it.

And since I had kindly told him how to plug the hole, he’d have been crazy to try something else. Why? Because my method worked … hard to argue with success.

The outcome also proved to me once again that I could accomplish most anything if I didn’t care who got the credit.

Because in this case, the sting in the tale is that at the end of the day, my insights on how to deal with the problem did get published in GRL. Not only that, they got published by the guy who would have most opposed their publication under my name. I gotta say, whoever is directing this crazy goat-roping contest we call life has the most outré, wildest sense of humor imaginable …

Anyhow, that’s why I’ve never pushed too hard to try to publish my work in what used to be scientific journals, but now are perhaps better described as popular science magazines. Last time I tried, I got bit … so now, I mostly just skip getting gnawed on by the middleman and put my ideas up on the web directly.

And if someone wants to borrow or steal or plagiarise my scientific ideas and words and images, I say more power to them, take all you want. I cast my scientific ideas on the electronic winds in the hope that they will take root, and I can only wish that, just like Michael Mann did, people will adopt my ideas as their own. There’s much more chance they’ll survive that way.

Sure, I’d prefer to get credit; I’m as human as anyone, or at least I keep telling myself that. So an acknowledgement is always appreciated.

But if you just want to take some idea of mine and run, sell it under another brand name, I say go for it, take all you want, because I’ve learned my lesson. The very best way to keep people from stealing my ideas is to give them away … and that’s the end of my story.

As always, my best wishes for each of you … and at this moment my best wish is that you follow your dream, you know the one I mean, the dream you keep putting off again and again. I wish you follow that dream because the night is coming and no one knows what time it really is …

w.

[UPDATE] In my above-mentioned comment on Steve McIntyre’s blog, I mentioned the analysis of Mannian smoothing by Willie Soon, David Legates, and Sallie Baliunas, entitled “Estimation and representation of long-term (>40 year) trends of Northern-Hemisphere-gridded surface temperature: A note of caution.”

Dr. Soon has been kind enough to send me a copy of that study, which I have posted up here. My thanks to him, it’s an interesting paper.

=====================================================

APPENDIX: Paper submitted to GRL, slightly formatted for the web.

—————

A closer look at smoothing potentially non-stationary time series

Willis W. Eschenbach

No Affiliation

[1] An experimental method is presented to determine the optimal choice among several alternative smoothing methods and boundary constraints based on their behavior at the end of the data series. This method is applied to the smoothing of the instrumental Northern Hemisphere (NH) annual mean, yielding the best choice of these methods and constraints.

1. Introduction

[2] Michael Mann has given us an analysis of various ways of smoothing the data at the beginning and the end of a time series of data (Mann 2004, Geophysical Research Letters, hereinafter M2004).

These involve imposing different constraints at those boundaries, and are called the “minimum norm”, “minimum slope”, and “minimum roughness” methods. These methods minimize, in order, the zeroth, first, and second derivatives of the smoothed average at the boundary. M2004 describes the methods as follows:

“To approximate the ‘minimum norm’ constraint, one pads the series with the long-term mean beyond the boundaries (up to at least one filter width) prior to smoothing.

To approximate the ‘minimum slope’ constraint, one pads the series with the values within one filter width of the boundary reflected about the time boundary. This leads the smooth towards zero slope as it approaches the boundary.

Finally, to approximate the ‘minimum roughness’ constraint, one pads the series with the values within one filter width of the boundary reflected about the time boundary, and reflected vertically (i.e., about the “y” axis) relative to the final value. This tends to impose a point of inflection at the boundary, and leads the smooth towards the boundary with constant slope.” (M2004)
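For concreteness, the three padding rules can be sketched in Python as follows (a minimal sketch; the function and argument names are mine, not M2004’s, and h is the filter half-width):

import numpy as np

def pad_series(x, h, method):
    # Pad the series x beyond its upper boundary by h points, following
    # the three boundary constraints described in M2004.
    if method == "minimum_norm":
        pad = np.full(h, x.mean())                 # pad with the long-term mean
    elif method == "minimum_slope":
        pad = x[-2:-h - 2:-1]                      # reflect about the time boundary
    elif method == "minimum_roughness":
        pad = 2.0 * x[-1] - x[-2:-h - 2:-1]        # also reflect about the final value
    else:
        raise ValueError("unknown method: " + method)
    return np.concatenate([x, pad])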

[3] He then goes on to say that the best choice among these methods is the one that minimizes the mean square error (MSE) between the smoothed data and the data itself:

“That constraint providing the minimum MSE is arguably the optimal constraint among the three tested.” (M2004)

2. Method

[4] However, there is a better and more reliable way to choose among these three constraints. This is to minimize the error of the final smoothed data point in relation, not to the data itself, but to the actual final smoothed average (which will only be obtainable in the future). The minimum MSE used in M2004 minimizes the squared error between the estimate and the data points. But this is not what we want. We are interested in the minimum mean squared error between the estimate and the final smoothed curve obtained from the chosen smoothing method. In other words, we want the minimum error between the smoothed average at the end of the data and the smoothed average that will actually be obtained in the future, when we have enough additional data to determine the smoothed average exactly.

[5] This choice can be determined experimentally, by realizing that the potential error increases as we approach the final data point. This is because as we approach the final data point, we have less and less data to work with, and so the potential for error grows. Accordingly, we can look to see what the error is with each method in the final piece of data. This will be the maximum expected error for each method. While we cannot determine this for any data nearer to the boundary than half the width of the smoothing filter, we can do so for all of the rest of the data. It is done by truncating the data at each data point along the way, calculating the estimated value of the final point in this truncated dataset using the minimum norm, slope, and roughness methods, and seeing how far they are from the actual value obtained from the full data set.
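In Python, that procedure might look like the following minimal sketch (the Gaussian kernel and the sigma choice here are illustrative assumptions, not values from the paper; pad_series is from the sketch above):

import numpy as np

def gaussian_smooth(x, h, sigma=None):
    # Centered Gaussian filter; points at least h steps from either end
    # of the series are unaffected by any boundary treatment.
    sigma = sigma if sigma is not None else h / 3.0
    k = np.exp(-0.5 * (np.arange(-h, h + 1) / sigma) ** 2)
    k /= k.sum()
    return np.convolve(x, k, mode="same")

def end_errors(x, h, method):
    # Truncate the series at each interior year, smooth with the given
    # boundary constraint, and record how far the estimated final value
    # lands from the true smooth computed from the full series.
    full = gaussian_smooth(x, h)
    errs = []
    for t in range(2 * h, len(x) - h):             # full smooth is exact here
        padded = pad_series(x[:t + 1], h, method)  # pad_series from the sketch above
        errs.append(gaussian_smooth(padded, h)[t] - full[t])
    return np.array(errs)

The standard deviation of these errors is then the experimental figure of merit for each method.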

[6] In doing this, a curious fact emerges — if we calculate the average using the “minimum roughness” method outlined above, the “minimum roughness” average at the final data point is just the final data point itself. This is true regardless of the averaging method used. If we reflect data around both the time axis and the y-axis at the final value, the data will be symmetrical around the final value in both the “x” and “y” directions. Thus the average will be just the final data point, no matter what smoothing method is used. This can be seen in Fig. 1a of M2004:

[Figure: smoothing fig 1]

ORIGINAL CAPTION: Figure 1. Annual mean NH series (blue) shown along with (a) 40 year smooths of series based on alternative boundary constraints (1)–(3). Associated MSE scores favor use of the ‘minimum roughness’ constraint. (Mann 2004)

[7] Note that the minimum roughness method (red line) goes through the final data point. But this is clearly not what we want to do. Looking at Fig. 1, imagine a “smoothed average” which, for a data set truncated at any given year, must end up at the final data point. In many cases, this will yield wildly inaccurate results. If this method were applied to the data truncated at the high temperature peak just before 1880, for example, or the low temperature point just before that, the “average” would be heading out of the page. This is not at all what we are looking for, so the choice that minimizes the MSE between the data and the average (the “minimum roughness” choice) should not be used.
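This pinning is easy to verify numerically; here is a quick sketch (the series and filter are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)                        # any series at all
h = 10
w = np.exp(-0.5 * (np.arange(-h, h + 1) / 3.0) ** 2)
w /= w.sum()                                   # any symmetric weights summing to 1

# minimum roughness padding: reflect about the time boundary and the final value
padded = np.concatenate([x, 2.0 * x[-1] - x[-2:-h - 2:-1]])
end = np.dot(w, padded[len(x) - 1 - h:len(x) + h])

print(np.isclose(end, x[-1]))                  # True: the smooth is pinned to x[-1]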

[8] Since the minimum roughness method leads to obvious errors, this leaves us a choice between the minimum norm and minimum slope methods. Fig. 2 shows the same data set with the point-by-point errors from the three methods (minimum norm, minimum slope, and minimum roughness) calculated for all possible points. (The error for the minimum roughness method, as mentioned, is identical to the data set itself.)

[9] To determine these errors, I truncated the data set at each year, starting with the year that is half the filter width after the start of the dataset. Then I calculated the value for the final year of the truncated data set using each of the different methods, and compared it to the actual average for that year obtained from the full data set. I am using a 41-year Gaussian average as my averaging method, but the underlying procedure and its results are applicable to any other smoothing method. I have used the same dataset as Mann, the Northern Hemisphere mean annual surface temperature time series of the Climatic Research Unit (CRU) of the University of East Anglia [Jones et al., 1999], available at http://www.cru.uea.ac.uk/ftpdata/tavenh2v.dat.
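With those choices, the whole comparison reduces to a few lines (a usage sketch; nh_temps is assumed to hold the annual NH series as a one-dimensional NumPy array, and h = 20 gives the 41-year window):

for method in ("minimum_norm", "minimum_slope", "minimum_roughness"):
    errs = end_errors(nh_temps, h=20, method=method)
    print(method, round(errs.std(), 3))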

[Figure: smoothing fig 02]

Figure 2. Errors in the final data point resulting from different methods of treating the end conditions. The “minimum roughness” method error for the dataset truncated at any given year is the same as the data point for that year.

3. Applications

[10] The size of the errors of the three methods relative to the smoothed line can be seen in the graph, and the minimum slope method is clearly superior for this data set. This is verified by taking the standard deviation of each method’s point-by-point distance from the actual average. Minimum roughness has the greatest deviation from the average, a standard deviation of 0.110 degrees. The minimum norm method has a standard deviation of 0.065 degrees from the actual average, while the minimum slope’s standard deviation is the smallest at 0.048.

[11] Knowing how far the last point in the average of the truncated data wanders from the actual average allows us to put an error bar on the final point of our average. Here are the three methods, each with their associated error bar (all error bars in this paper show 3 standard deviations, and are slightly offset horizontally from the final data point for clarity).

[Figure: smoothing fig 03]

Figure 3. Potential errors at the end of the dataset resulting from different methods of treating the end conditions. Error bars represent 3 standard deviations. The minimum slope constraint yields the smallest error for this dataset.

[12] Note that these error bars are not centered vertically on the final data point of each of the series. This is because, in addition to knowing the standard deviation of the error of each end condition, we also know the average of each error. Looking at Fig. 2, for example, we can see that the minimum norm end condition on average runs lower than the true Gaussian average. Knowing this, we can improve our estimate of the error of the final point. In this dataset, the center of the confidence limits for the minimum norm will be higher than the final point by the amount of the average error.

3.1 Loess and Lowess Smoothing

[13] This dataset is regular, with a data point for each year in the series. When data is not regular but has gaps, loess or lowess smoothing is often used. These are similar to Gaussian smoothing, but use a window that encompasses a certain number of data points, rather than a certain number of years.

[14] When the data is evenly spaced, both lowess and loess smoothing yield very similar results to Gaussian smoothing. However, the treatment of the final data points differs. With loess and lowess, rather than using less and less data near the end as in Gaussian smoothing, the filter window stays the same width (in this case 41 years); instead, the shape of the curve of the weights changes as the data nears the end.
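Loess and lowess are available off the shelf; for example, statsmodels provides a lowess that can be dropped into the same truncation test (a sketch, assuming an annual, gap-free series; frac is adjusted so the fitting window stays near 41 points):

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def lowess_end_errors(x, window=41):
    years = np.arange(len(x), dtype=float)
    full = lowess(x, years, frac=window / len(x), return_sorted=False)
    errs = []
    for t in range(window, len(x) - window // 2):
        # re-smooth the truncated series, keeping roughly `window` points in the fit
        est = lowess(x[:t + 1], years[:t + 1],
                     frac=window / (t + 1), return_sorted=False)[-1]
        errs.append(est - full[t])
    return np.array(errs)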

[15] The errors of the loess and lowess averaging can be calculated in the same way as before, by truncating the dataset at each year of the data and plotting the value of the final data point. Fig. 4 shows the errors of the two methods.

[Figure: smoothing fig 04]

Figure 4. Lowess and loess smoothing along with their associated end condition errors.

[16] The end condition errors for lowess and loess are quite different, but the average size of the errors is quite similar. Lowess has a standard deviation of 0.062 from the lowess smoothed data, and loess has a standard deviation of 0.061 from the loess smoothed data. Fig. 5 shows the Gaussian minimum slope (the least error of the three M2004 end conditions), and the lowess and loess smoothings, with their associated error bars.

[Figure: smoothing fig 05]

Figure 5. Gaussian, lowess and loess smoothing along with their associated error bars. Both lowess and loess have larger errors than the Gaussian minimum slope error.

  [17] Of the methods tested so far, the error results are as follows:

METHOD                         Standard Deviation of Error

Gaussian Minimum Roughness     0.111
Gaussian Minimum Norm          0.065
Lowess                         0.062
Loess                          0.061
Gaussian Minimum Slope         0.048

[18] Experimentally, therefore, we have determined that of these methods, for this data set, the Gaussian minimum slope method gives us the best estimate of the smoothed curve which we will find once we have enough additional years of data to determine the actual shape of the curve for the final years of data.

3.2 Improved and Alternate Methods

[19] At least one better method of dealing with the end conditions exists. I call it the “minimum assumptions” method, as it makes no assumptions about the future state of the data. It simply increases the result of the Gaussian smoothing by an amount equal to the weight of the missing data. Gaussian smoothing works by multiplying each data point within the filter width by a Gaussian weight. This weight is greatest for the central point of the filter. From there it decreases in a Gaussian “bell-shaped” curve for points further and further away from the central point. The weights are chosen so that the total of the weights summed across the width of the filter adds up to 1.

[20] Let us suppose that as the center of the filter approaches the end of the dataset, the final two weights do not have data associated with them because they are beyond the end of the dataset. The Gaussian average is calculated in the usual manner, by multiplying each data point with its associated weight and summing the weighted data. The final two points, of course, do not contribute to the total, as they have no data associated with them.

[21] However, we know the total of the weights for the other data points. Normally, all of the weights would add up to 1, but as we approach the end of the data there are missing data points within the filter width. The total weight of the existing data points might then be only, say, 0.95 instead of 1. Knowing that we only have 95% of the correct weight, we can approximate the correct total by dividing the sum of the existing weighted data points by 0.95. The net effect of this is a shifted weighting which, as the final data point is approached, shifts the center of the weighting function further and further forward toward the final data point.
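A sketch of this renormalization in Python (the kernel details are illustrative; only the division by the available weight is the point):

import numpy as np

def min_assumptions_smooth(x, h, sigma=None):
    # Gaussian smooth that renormalizes near the ends: the weighted sum
    # over the available points is divided by the total weight of those
    # points, so weights falling beyond the data simply drop out.
    sigma = sigma if sigma is not None else h / 3.0
    w = np.exp(-0.5 * (np.arange(-h, h + 1) / sigma) ** 2)
    w /= w.sum()
    n = len(x)
    out = np.empty(n)
    for t in range(n):
        lo, hi = max(0, t - h), min(n, t + h + 1)
        ww = w[lo - t + h:hi - t + h]          # the weights that have data
        out[t] = np.dot(ww, x[lo:hi]) / ww.sum()
    return out

In the interior, ww.sum() is 1, so this reduces to the ordinary Gaussian smooth there.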

[22] The standard deviation of the error of the minimum slope method, calculated earlier, was 0.048. The standard deviation of the error of the minimum assumptions method is 0.046. This makes it, for this data set, the most accurate of the methods tested. Fig. 6 shows the difference between these two methods at the end of the data set.

[Figure: smoothing fig 08]

Figure 6. Gaussian minimum slope and minimum assumptions error bars. The minimum assumptions method provides the better estimate of the future smoothed curve.

[23] We can also improve upon an existing method. The obvious candidate for improvement is the minimum norm method. It has been calculated by padding the data with the average of the full dataset, from the start to the end of the data. However, we can choose an alternate interval on which to take our average. We can calculate (over most of the dataset) the error resulting from any given choice of interval. This allows us to choose the particular interval that will minimize the error. For the dataset in question, this turns out to be padding the end of the dataset with the average of the previous 5 years of data. Fig. 7 shows the individual errors from this method, compared with the minimum assumptions method. Since the results from the two very different methods are quite similar, this increases confidence in the conclusion that these are the best of the alternatives.

[Figure: smoothing fig 09]

Figure 7. Smoothed data (red), minimum assumptions errors (green), tuned minimum norm (previous 5-year average) errors (blue).

[24] The standard deviation of the error from the minimum norm with a 5-year average is slightly smaller than from the minimum assumptions method, 0.045 versus 0.046.
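A sketch of the tuned variant, reusing the earlier helpers (nh_temps is again the assumed data array; the scan over k is how the 5-year interval would be found):

def tuned_min_norm_errors(x, h, k):
    # Pad each truncated series with the mean of its last k values instead
    # of the full-series mean, then run the same truncation test.
    full = gaussian_smooth(x, h)
    errs = []
    for t in range(2 * h, len(x) - h):
        xt = x[:t + 1]
        padded = np.concatenate([xt, np.full(h, xt[-k:].mean())])
        errs.append(gaussian_smooth(padded, h)[t] - full[t])
    return np.array(errs)

# scanning k picks the padding interval with the smallest error spread
best_k = min(range(1, 41), key=lambda k: tuned_min_norm_errors(nh_temps, 20, k).std())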

4. Discussion

[25] I have presented a method for experimentally determining which of a number of methods yields the closest approximation to a given smoothing of a dataset at the ends of the dataset. The method can be used with most smoothing filters (Gaussian, loess, low-pass, Butterworth, or other filter). The method also experimentally determines the average error and the standard deviation of the error of the last point of the dataset. Although the Tuned Minimum Norm method yields the best results for this dataset, this does not mean that it will give the best results for other datasets. It also does not mean that the Tuned Minimum Norm method is the best smoothing method possible; there may be other smoothing methods out there, known or unknown, which will give a better result on a given dataset.

[26] The method for experimentally determining the smoothing method with the smallest end-point error is as follows (a generic code sketch follows the list):

1)  For each data point for which all of the data is available to determine the exact smoothed average, determine the smoothed result that would be obtained by each candidate method if that data point were the final point of the data. (While this can be done by truncating the data at each point, padding the data if required, and calculating the result, it is much quicker to use a modified smoothing function which simply treats each data point as if it were the last point of the dataset and applies the required padding.)

2)  For each of these data points, subtract the actual smoothed result of the given filter at that point from the smoothed result of treating that point as if it were the final point. This gives the error of the smoothing method for the series if it were truncated at that data point.

3)  Take the average and the standard deviation of all of the errors obtained by this analysis.

4)  Use the standard deviation of these errors to determine the best smoothing method.

5)  Use the average and the standard deviation of these errors to establish confidence limits at the final point of the smoothed data.
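In code, the procedure might look like this (a generic sketch; both callables are assumptions of the illustration, so any smoother and any end treatment can be plugged in):

import numpy as np

def endpoint_stats(x, h, smooth_full, smooth_truncated):
    # Steps 1-5 in generic form: smooth_full(x) returns the exact interior
    # smooth of the whole series, and smooth_truncated(x[:t+1]) returns the
    # candidate method's value at the final point of the truncated series.
    full = smooth_full(x)
    errs = np.array([smooth_truncated(x[:t + 1]) - full[t]
                     for t in range(2 * h, len(x) - h)])
    return errs.mean(), errs.std()             # bias and spread of the endpoint error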

5. Conclusions

1)  The Minimum Roughness method will always yield the largest standard deviation of the endpoint error in relation to the smoothed data and is thus the worst method to choose.

2)  For any given data set, the best method can be chosen by selecting the method with the smallest standard deviation of error as measured on the dataset itself.

3)  The use of an error bar at the end of the smoothed average allows us to gauge the reliability of the smoothed average as it reaches the end of the data set.

References

Jones, P. D., M. New, D. E. Parker, S. Martin, and I. G. Rigor, 1999: Surface air temperature and its changes over the past 150 years. Reviews of Geophysics, 37, 173–199.

Mann, M., 2004: On smoothing potentially non-stationary climate time series. Geophysical Research Letters, 31, 15 April 2004.

207 Comments
Fred from Canuckistan
March 30, 2013 7:33 am

Speaks volumes about Mannian ethics.
How deep is a puddle?

stan stendera
March 30, 2013 7:39 am

I am not surprised at Mikey Mann’s plagiarism!

fredb
March 30, 2013 7:40 am

It’d be a nice addition if you posted the GRL reviewers’ comments (anonymized) that you received, instead of just paraphrasing.

godzi11a
March 30, 2013 7:43 am

“I could accomplish most anything if I didn’t care who got the credit.”
_________________________________________________________
I understand and try to be that way also…. but it would sure be nice if you could specify who DOESN’T get the credit.

March 30, 2013 7:45 am

hmmm…any obvious plagiarism in the second paper?

Craig Moore
March 30, 2013 7:48 am

I waxed nuclear and unleashed the awesome power that comes from splitting the infinitive. The house smelled for days from the sulfur fumes emitted by my unabashed expletives …

I have this picture of you singing Deep Purple’s ‘Smoke on the Water.’
You being a citizen scientist is a good thing. Much appreciated. I would like to see the rent seekers sequestered back to the real world.

Devavrata
March 30, 2013 7:49 am

Quote:
“My main insight in my paper was that I could actually test the different averaging methods against the dataset by truncating the data at various points.”
This is actually a valid statistical procedure. The term often used in statistics is “jackknife” or “resampling”. By redoing the analysis leaving one or more data points out, one can test the robustness of the analysis methods and results.

DirkH
March 30, 2013 8:05 am

They will steal much more, this time from the blogs, when they turn around, look the public in the face all blue-eyed and tell it “Listen dear, we just found this minor error in our models, and we really hate to say it but it looks like a glaciation is just around the corner. More research is needed.”

Steve McIntyre
March 30, 2013 8:05 am

Willis, a couple of days after the release of Mann’s smoothing article, UC observed that there was an error in Mann’s published algorithm, which was reported at CA here.
Within hours, Mann plagiarized UC. He changed the code (supposedly for the article) to implement UC’s changes without any credit or acknowledgement. Only one of a number of similar plagiarism incidents involving Mann.
Mann’s recent Facebook diatribe responding to criticism of his misleading use of obsolete data included a curious paragraph that appears to be a preemptive defence against his apparent plagiarism of material from Hansen et al. I was thinking of writing a post on the curious outburst but have been otherwise occupied.

Rob Dawg
March 30, 2013 8:12 am

Time for a FOIA request. “Mann, GRL, smoothing.”.

Pamela Gray
March 30, 2013 8:18 am

By not at least citing your unpublished paper back then I believe a charge of plagiarism is appropriate.

March 30, 2013 8:20 am

MatheMANNics is what makes ALGOREithms possible.

Hoser
March 30, 2013 8:21 am

Data truncation and testing for later fit is complete BS. Just because you get a decent fit for that particular data set doesn’t mean you have found a generally superior method, and it doesn’t even mean you’ll get a better result when you use the method on your full data set. The point being the truncated data could be replaced with any other (continuous) data set, and if you did that, would the “better” method still be the best?
Instead, your best bet is to use Fourier smoothing. Transform to frequencies, remove the higher frequencies, and then transform back to spatial.

Dave
March 30, 2013 8:22 am

I wonder, do you think the Mann consulted Gleick about the ethics of stealing other people’s work and ideas?

Tom H
March 30, 2013 8:23 am

That’s what academics live for — publishing papers (whether the papers be right or wrong). Publishing is a large component of their promotions (including the all-important “tenure”) and salary increases.
Someone once asked why academic disputes can be so vicious — a wag replied, “because the stakes are so low”.

Skiphil
March 30, 2013 8:27 am

an aside, but highly relevant to Mann’s misbehaviors (cross posted in part with CA)….
Mann has claimed that the Wahl/Ammann (“Jesus Paper”) was some kind of “independent” vindication of his proxy work, yet it is clear that in 2006-07 Mann was including Wahl and Ammann in co-authorship on this work which Jean S links (see comment linked below).
So not only are Wahl & Ammann not “independent” of Mann in any plausible sense but they are getting the professional credit from Mann including them on another paper’s co-authorship in the same time period. For non-tenured scientists (as I believe both W. and A. were then) that is a big conflict of interest relative to a supposed “independent” assessment. They were crucially dependent upon Mann for the progress of their careers at that point.
Talk about ethically conflicted! It might be time for someone able to write up a critical review of the Mann-Wahl-Ammann and Schneider saga with new info. Not trying to add to Steve’s load, really; maybe someone at WUWT or BH could take this on… it might get attention at Stanford in time for Gore’s talk in a few weeks… putting a spotlight on Mann & company’s misbehavior in time for the Stanford event might finally give these issues some needed traction. (I know, hope springs eternal…)
re Mann, Schneider, Wahl, and Ammann

Yancey Ward
March 30, 2013 8:36 am

Pretty ballsy behavior by Mann, but as McIntyre mentions above, Mann has been caught doing this sort of intellectual theft before.

John Tillman
March 30, 2013 8:37 am

Another reason for pal reviewers’ names to be public.

kanga
March 30, 2013 8:47 am

Is there an open source license so others don’t fear taking your material and using it?
Or simply provide at the end of each document “you may use blah blah blah and all I request is that you include my full name as a reference.” or something similar and, by all chance, better.

March 30, 2013 8:48 am

Part of the reason for rejection by GRL is that back then they had a strict limit of four Figures per paper. One way to get around that is to combine several [similar] Figures into one [and call them Figure 2a, 2b, 2c, etc.]

Greg Goodman
March 30, 2013 8:50 am

Perhaps someone with the password for Climategate 3.0 should search for references to Eschenbach?? 😉
This was a journal they considered they had “control of”. Many of the machinations revealed in CG1 had to do with maintaining their control of that journal.

Rud Istvan
March 30, 2013 8:51 am

Steve M, writing up your facts is probably an important thing to do. There are reputational lawsuits in progress, plus Penn State needs to be safeguarding what is left of its tattered reputation. Plagiarism is straightforward academic misconduct.
Highest regards

March 30, 2013 9:02 am

Willis, when I see this kind of blatant plagiarism and abuse of science, the air at my place is also filled with expletives and sulfur. Your magnanimous offer to share with the world the results of your analyses is the way science should be, but it also should carry with it due credit and not reward the plagiarists. If it’s any solace, your work, which is widely recognized and respected, probably reaches more people online than it would by publication in scientific journals.
A few years ago, the Geological Society of America (GSA) asked me to put together a volume of papers related to climatic changes. I spent huge amounts of time over a two-year period, had all the papers peer reviewed by world experts, got preliminary approval by GSA to proceed, and submitted the final draft ready for publication. At that point, the GSA editor informed me that because the papers did not support the ‘consensus’ they would not publish it even though there was not a single reference to anything being wrong with the data. We continue to see this kind of degradation of science over and over and then are told that because there are more papers published praising CO2 than contrary to it, the consensus must be right!
Keep up the good work. The highlight of my day is often seeing a new post by you.
Don

SayNoToFearmongers
March 30, 2013 9:05 am

Ballsy behaviour? No, this is the work of a thieving, lying coward. As usual.
At what point does the warmist religion stop lauding this utter parasite?

polistra
March 30, 2013 9:10 am

Thanks for this reminder. I’ve been suffering from Salieri Syndrome in a context of programming, and this was a needed attitude adjustment. When someone builds on my code, it’s basically a compliment, not an insult. It means my code is readable and useful.

Athelstan.
March 30, 2013 9:12 am

Albert Einstein had a lot of good ideas and didn’t mind sharing them at all; he believed all of his very considerable insight and ideas belonged to mankind.
I do sometimes wonder if Michael E. Mann will ever come and join us, magnanimity is also a gift and nobleness sometimes has to be earned.

John Tillman
March 30, 2013 9:15 am

Slimy is not the same as smooth.

Doug Proctor
March 30, 2013 9:20 am

In politics, the choice is whether you want to be the king or the king-maker.
There is a human desire to get acknowledgement for your effort, reward for service, profit from cost, however you wish to put it. It takes great maturity and self-confidence and lack of ego to provide benefit for zero return. Good for you. One day I’d like to be at that stage, but I’m afraid that after 58 years I’m unlikely to have enough years left to get there.
Gnashing of teeth is not much fun, but there is some satisfaction in it despite the self-destructive feelings.

Beta Blocker
March 30, 2013 9:21 am

Steve McIntyre says: March 30, 2013 at 8:05 am
……. Within hours, Mann plagiarized UC. He changed the code (supposedly for the article) to implement UC’s changes without any credit or acknowledgement. Only one of a number of similar plagiarism incidents involving Mann. …..

Repeating what I’ve said previously here on WUWT, he is Michael Mann, LLC, pursuing his business interests as a Limited Liability Copyist.
He is a one-man cottage industry inside the climate change industrial complex, and his every pronouncement is calculated to further his brand name recognition.
There is no bad publicity as far as he is concerned, because his business model as a one-man cottage industry depends upon his ability to keep the Michael Mann brand name prominently displayed and distributed within the climate change pronouncement marketplace.
Michael Mann is simply a well-paid huckster for the paleoclimate studies market segment within the climate change industrial complex.
His behavior is perfectly rational given that there is no such thing as a truth-in-advertising law within the climate change marketplace.
He is merely doing whatever he needs to do to maintain his brand position inside his market segment; and so far, his promotional strategy is working quite successfully.
Michael Mann is a canny businessman and product marketeer serving a set of well-defined, well-funded customers inside a growing industry.
That’s all there is to him, nothing more, nothing less.

March 30, 2013 9:22 am

Perhaps someone with the password for Climategate 3.0 should search for references to Eschenbach?? 😉
This was a journal they considered they had “control of”. Many of the machinations revealed in CG1 had to do with maintaining their control of that journal.
###################
I’ve constructed a concordance. Consider this done. Will report anon.

bob sykes
March 30, 2013 9:29 am

Plagiarism is more than unattributed quotes. It includes theft of ideas. Also, the plagiarized work does not have to be published; mere submission to a journal or granting agency suffices.
Mann might have plagiarized your submitted paper, and if he got it from one of the editors they are equally guilty. Universities and journals generally take charges of plagiarism very seriously, and punish faculty found guilty of it. If you think you might have a case, I suggest you formally charge Mann with plagiarism at his current university, which I believe is U Mass.

FerdinandAkin
March 30, 2013 9:32 am

Yes Mr. Eschenbach, but please give credit to where it is most deserving. You must realize that you would not be able to do what you do except for the fact you are standing on top of the shell of an enormous turtle. And that turtle is standing on the shell of another …

pax
March 30, 2013 9:34 am

Your paper was an easy read, contained no unnecessary BS lingo, didn’t pretend that temperature time series is rocket science, was very easy to understand, and made a valid point. Of course it wasn’t accepted.

Les Johnson
March 30, 2013 9:44 am

Not just Mann who likes to “borrow” without attribution. Steig, through Schmidt, did the same thing.
http://climateaudit.org/2009/02/02/when-harry-met-gill/
My comments that led to Gavin’s admissions start at 221.
http://www.realclimate.org/index.php/archives/2009/01/warm-reception-to-antarctic-warming-story/comment-page-5/

Bill Yarber
March 30, 2013 9:44 am

Willis
You are a unique and interesting person. Enjoy your posts and insights into how your mind functions. Sorry our paths have never crossed, you are someone I’d enjoy calling friend.
Bill

Bill H
March 30, 2013 9:47 am

I think the word “Plagiarism” comes to mind…. You can bet money Mann was given access to the piece right from the beginning.
Showing Mann’s inner core (thief) as the gutless piece of unethical crap he is… And he is rewarded with lots of federal grants because he is willing to go so low to forward their political agenda.

eco-geek
March 30, 2013 9:48 am

The scientific journals, yes.
I tried to publish a paper showing that the theory behind an earlier paper was extremely incorrect. In fact my proof in effect showed that the paper was entirely fraudulent. It was also the basis for sixteen further papers and presumably about the same number of research grants (and many of these papers would be suspect too).
The establishment declined on the grounds (but not the words) “Scientific fraud in the literature is entirely acceptable provided that it is perpetrated by the Establishment”.
It is as bare faced as that.

March 30, 2013 9:49 am

Lying, cheating, stealing . . . another example of the moral cesspool in which AGW ferments.
The paper Mann uses to do his amoral shtick is biomass, right? The-Judge-Jury-and-Executioner-in-Chief should give it to the poor people in Ghana to burn along with the shit they burn to cook their food. After all, Mann’s stuff would seem to fall into the same general category as that other kind of biomass . . .

OldWeirdHarold
March 30, 2013 9:51 am

What Kanga said. Putting a copyleft notice on all of these kinds of “papers” makes it abundantly clear that:
1. This is free for all to use and redistribute,
2. Credit MUST be attributed,
3. There is legal recourse against any and all who use without credit.

jc
March 30, 2013 9:51 am

Quite apart from any issues about ego, or a proper acknowledgement of effort and achievement, there are practical ramifications.
Where someone is given credit for something not of their doing, apart from any reward that might accrue directly from that one instance, it creates a false impression of what they are capable of.
This means people will look to them to achieve or influence things at a certain level, which they are not capable of. It therefore entrenches mediocrity, and, in order to support this position, further predatory and dishonest behavior. It compels the world to trash.
Also, the person or organization not properly recognized will inevitably not be in a position to contribute to their maximum potential.
It is more than just personal.

Bill H
March 30, 2013 9:53 am

pax says:
March 30, 2013 at 9:34 am
Your paper was an easy read, contained no unnecessary BS lingo, didn’t pretend that temperature time series is rocket science, was very easy to understand, and made a valid point. Of course it wasn’t accepted.
===================================================
Pointing out that even the COMMON MAN can understand the complex if it is presented in clear terms is what they cannot allow. Who would then need scientists to figure it out and deal with it for them… tell the masses what they must do… how they must live… how they will die… It just wouldn’t fit in with the liberal/socialist/marxist/democrat control agenda. Just sayin… :)

kakatoa
March 30, 2013 9:53 am

W,
Thanks for sharing your experience! My gut tells me Tesla would fully support your approach to sharing.

Mark T
March 30, 2013 9:59 am

Hoser says:
March 30, 2013 at 8:21 am

Instead, your best bet is to use Fourier smoothing. Transform to frequencies, remove the higher frequencies, and then transform back to spatial.

Uh, technically, temporal, not spatial, but that’s just a nit. This notion assumes a priori that the “signal” consists of sinusoids, which is all you will get back. Sorry, but this is not superior in any real sense of the word.
Mark

March 30, 2013 9:59 am

Hi Willis – There are now open review venues. I suggest you consider
Atmospheric Chemistry and Physics (ACP) & Discussions (ACPD) – http://www.atmospheric-chemistry-and-physics.net/
or other EGU open review journal – http://www.egu.eu/publications/open-access-journals/
Roger

markx
March 30, 2013 9:59 am

pax says: March 30, 2013 at 9:34 am
Your paper was an easy read, contained no unnecessary BS lingo, didn’t pretend that temperature time series is rocket science, was very easy to understand, and made a valid point. Of course it wasn’t accepted.
Bingo!
Mann on the other hand is a master of obfuscation and will never use one word when ten will do the job (he is, in truth and with some admiration from me, a master of this art – I have read several of his papers and they are all the same … takes ages to wade through them, but in the end, every phrase does in fact make sense, and is perhaps worded that way to impress the impressionable).
His paragraph above quoted by Willis is testament to this, and begs for a summary:
“In such cases, the true smoothed behavior of the time series at the termination date is known, because that date is far enough into the interior of the full series that its smooth at that point is largely insensitive to the constraint on the upper boundary. The relative skill of the different methods can then be measured by the misfit between the estimated and true smooths of the truncated series.”
Or as Willis would perhaps say:
“It is possible to test the different averaging methods against the dataset by truncating the data at various points, which will allow calculation of the predicted result for each method, and allow comparison to the true actual average.”

Mark T
March 30, 2013 10:07 am

OldWeirdHarold says:
March 30, 2013 at 9:51 am

What Kanga said. Putting a copyleft notice on all of these kinds of “papers” makes it abundantly clear that:

Unnecessary in any country that has adopted the Berne Convention rules, the US in particular (1989). Copyright is automatic, though a claim of accidental violation may reduce, or eliminate, any damages. Given that the original article was actually submitted for publication, such a claim would be hard to make.
There’s a point at which somebody needs to start filing complaints. I believe that point has been reached.
Mark

March 30, 2013 10:08 am

It will all catch up with him, one way or another. Right now, people like Mann need people like you, Willis, to do all the hard work for them. Right now, also, their worst nightmares are coming true because people are finding out just how false they are. I would so hate to be in any of their shoes.

Mark T
March 30, 2013 10:11 am

jc said:

This means people will look to them to achieve or influence things at a certain level, which they are not capable of. It therefore entrenches mediocrity, and, in order to support this position, further predatory and dishonest behavior.

Which may explain Mann’s behavior in particular.
Mark

Pamela Gray
March 30, 2013 10:12 am

I am more convinced than ever. Plagiarism is an appropriate charge here.

TomE
March 30, 2013 10:13 am

Perhaps Mark Steyn and National Review will find some use of your experience in their legal discussion with Mann. The more time that passes and the more incidents that come to light, the more of a slimeball Mann is revealed to be.

GlynnMhor
March 30, 2013 10:15 am

A lot of explanation for simply doing the obvious and, in the time domain, running the smoothing operator over the entire dataset without truncating it.
Or for far better control over the frequencies, do as Hoser suggests and use an Ormsby zero phase filter operator (128 or 256 samples) in the Fourier domain.

F. Ross
March 30, 2013 10:16 am

If there is justice in this universe then much good karma on you Willis… as for the other guy, not so much.

March 30, 2013 10:23 am

Very interesting article, Willis.
Steve McIntyre comments:
“… a couple of days after the release of Mann’s smoothing article, UC observed that there was an error in Mann’s published algorithm, which was reported at CA here.
“Within hours, Mann plagiarized UC. He changed the code (supposedly for the article) to implement UC’s changes without any credit or acknowledgement. Only one of a number of similar plagiarism incidents involving Mann.”
And that, folks, is another reason why Mann will never make good on his litigation threats.

Lars P.
March 30, 2013 10:25 am

Josh has it all in his rotten core science sample:
http://www.bishop-hill.net/blog/2013/3/28/well-sampled-science-josh-226.html
As we see the bottom of the core has some additional characteristics when one makes a closer analysis….

Luther Wu
March 30, 2013 10:28 am

That was so not fair, Willis – challenging/detailing Mann’s math skills.

Paul Linsay
March 30, 2013 10:31 am

I always grind my teeth when I look at a climate paper because of the smoothing. It’s not the algorithm used, it’s the very fact that it’s done at all that is upsetting. The data should be allowed to “speak for itself”. Smoothing imposes a model on the reader that may be completely invalid. Where is it written that all the short period fluctuations don’t have any information in them? Smoothing creates false impressions of trends where none may exist. The correct rule for smoothing is DON’T.
Error bars are my other pet peeve with climate data series. Where are they? Your previous post on WUWT is as good an example as any, no error bars on quantities that I’d bet aren’t known to 10%.
/rant

Mark T
March 30, 2013 10:33 am

Willis,

Although I must admit, the amount of damage being done by these folks does argue that stopping them might be more important than the method …

That’s the point. Somewhere along the line, these people came to the conclusion that rules we all live by were not meant to apply to them. They are wrong. GRL, as well, should be sent a message.
Steve Mc is the same in that he does encourage legal pursuits, though it is becoming clear that simply highlighting incompetence and forcing the occasional corrigendum is not working.
Mark

Jerry
March 30, 2013 10:35 am

Willis, you really ought to post what the reviewers had to say (both times). Redact the names, if you are obliged to, but if not, publish them, too. “Prestigious” journals ought to be exposed for the insider-trading schemes they really are.
Oh, and you deserve credit for your work. To allow a politician like Hanson to get credit is to allow him to steal from you on our dime. I applaud you for posting this here. We need to know when stuff like this happens.

NileQueen
March 30, 2013 10:36 am

Thanks for your stories and your wishes,Willis. Very interesting.
And “sting in the tale”… I thought it was a typo at first 😀

Mark T
March 30, 2013 10:37 am

Oh, and I don’t disagree, Willis. My initial position was always that the damage done by public display of such things was enough. Unfortunately, it continues unabated. I’m now reassessing my original position.
Mark

ferdberple
March 30, 2013 10:38 am

Do I understand this correctly? In 2005 and 2006, GRL twice declined to publish a paper showing an improved method to deal with the averaging end-point problem. Then in 2008, GRL published a paper showing the same technique, by a different author, without attribution to the 2005 and 2006 submissions?
This is a very serious matter. There is no way GRL should have published a paper duplicating the methods of an earlier unpublished submission. Otherwise scientific journals would be free to steal ideas. What would prevent journal editors or reviewers from rejecting a paper, then using the ideas in the paper to create their own papers under their own names?
Shine the light of day on this issue.

Mark T
March 30, 2013 10:43 am

The only time “better control over the frequencies” is actually necessary is when the signal primarily consists of sinusoids. These methods are with respect to potentially non-stationary data. If the data are non-stationary, the concept of a sinusoidal smooth is unlikely to be “better” in any general sense. As Willis notes, test them and see.
Mark

ferdberple
March 30, 2013 10:48 am

http://www.copyright.gov/help/faq/faq-general.html
Copyright does not protect facts, ideas, systems, or methods of operation, although it may protect the way these things are expressed. See Circular 1, Copyright Basics, section “What Works Are Protected.”

March 30, 2013 10:50 am

AMO reconstruction is where Dr. Mann made his name. Here is his reconstruction (green) compared to the AMO data (blue).
http://www.vukcevic.talktalk.net/AMO-2R.htm
My reconstruction is made from a combination of the sunspot and Earth’s magnetic oscillations (red).
I consider that my reconstruction is a bit closer to the real thing (confirmation bias? maybe), so I look forward to Dr. Mann in future altering his data and claiming the credit.

bones
March 30, 2013 11:11 am

I think that it is a common practice among the editors of many journals to pass criticisms along to anyone whose previously published work is criticized. I have even had the author of an erroneous result selected as the reviewer for my correction! In my view, peer review in some journals is hopelessly broken. I would offer good odds on a bet that Mann got a look at your paper.
Stan

March 30, 2013 11:11 am

Don Easterbrook
Thanks for taking the time to present your excellent summary to the Washington State Senate.
Senator Doug Ericksen links to Easterbrook’s full presentation
Easterbrook’s summary and presentation are posted at the Washington Senate Energy, Environment & Telecommunications, March 26, 2013 1:30 PM - Climate Change

zootcadillac
March 30, 2013 11:12 am

I think this goes far beyond a simple claim of plagiarism, especially as the work was unpublished. If it could be proven that Mann had access to the work via the people who had seen it then it’s outright theft of intellectual property. Copyright existed the moment Willis expressed this idea and even though the assets are intangible the law allows for recourse in these matters.
However that would have to be a civil case and I doubt that Willis, even were he inclined at all to be litigious, would want to go down this exhausting and often fruitless route.
I’ve been through the civil courts twice. It took over 4 years of my life and ended up with me being awarded over £23,000 by the UK civil courts. Over ten years on, and I’ve still to see a penny.

Pamela Gray
March 30, 2013 11:16 am

Ferdberple, how does copyright mix with plagiarism? I think we are talking two different things here.

JFD
March 30, 2013 11:19 am

Willis, I use some of your ideas sometimes. I am always proud to state where the information comes from. In my many years involved with science and engineering, you are unique. Not only are you a world class thinker, you have amazing technical talents and insights plus being a clear and fast writer. You don’t need peer reviewed papers to score at the top of my list of exceptionally talented people.

David L. Hagen
March 30, 2013 11:23 am

vukcevic
It appears your AMO model vs Mann’s AMO model vs AMO data shows another example of Mann “hiding the decline”.

Joe
March 30, 2013 11:24 am

Isn’t plagiarism meant to be an academic career stopper?

March 30, 2013 11:28 am

Willis, while it is nice to get the credit for something, the bottom line is that the end result is satisfactory to everyone.
200 years down the line, mostly, the historians find out who really deserves the credit.
You, Anthony et al will get the credit eventually while Mann, Gore et al will just be footnotes.
Doesn’t buy much beer now, though.

Silver Ralph
March 30, 2013 11:39 am

Willis. I know all about the sulphurous fumes, as I was plagiarised once. It’s not fun. Not so bad, perhaps, if it is general plagiarism, but it hurts more when the plagiarist says things like: “I was so brilliant and had that bold insight that if you….. ” etc., etc.
But I did learn that plagiarism is not a copying of ideas (not in UK law). Once you put an idea out there, anyone can use it, and even claim it to be theirs. That is not plagiarism.
Plagiarism is a copying of text (in UK law). If you can show two or three ‘critical’ sentences that have been copied, verbatim, and no citation given, then that is plagiarism. All you need to do, is demonstrate prior publication – in a journal is best, but to a wide circle of colleagues who can attest to this is also fine.
As an aside, this ‘lifting of material’ is widespread in scholarly circles. Bright young things write in with all their clever ideas, and the old master then pens a new article with ‘very similar ideas’. I had two articles lifted – one of 15 pages and the other of 70 pages. The first plagiarist was more clever, and dressed it in his own style. The second was stupid, and did a hasty cut and paste job. That was easy to prosecute, and I got a 5-figure compensation package (from the publisher, not the author, because the publisher had been prior-warned to be careful and was not).
.

RockyRoad
March 30, 2013 11:41 am

I agree with Willis–science needs to be self-correcting and the field of climate science needs to learn to police its own members.
But since everybody cowers before “the Mann”, I’d like to correct a glaring misconception:
Where we say Michael E. Mann, PhD, the “PhD” really stands for “Plagiarism–his Downfall” or “Plagiarism–his Defense”. Both are correct and in Mann’s case supersede the traditional meaning.

Crispin in Waterloo but actually in Yogyakarta
March 30, 2013 11:51 am

I am much taken by the nature of dealing with reality by not trying to tie everything down just because it was discovered or worked out by oneself. Last week I was in Beijing to attend a technology show and saw the best knock off of one of my products (which is patented) I have seen to date. It was a pretty good example of implementing the principles involved. Not quite everything but a pretty good rendition for $10.
I was delighted to see that the insights are finally catching on. Heh heh. What can I say? I am not going to take anyone to court for making the world of low income people better, am I? What the heck. I have better things to do and bigger fish to fry.
Willis, I have dealt with this for years by taking the attitude that I am not a one-hit wonder with a single trick to my pony stable. If you were ripped off once by someone who could not do it himself, it is just a matter of time before you do something else original (or correct) and the other guy falls flat again, and again.
I was informed by a colleague that there are more than 4000 copies of one of my machines in the Eastern Cape of South Africa – all used by poor people – mostly by women’s groups. Garsh! Who knew? Is that ripping off? Maybe, but it is innocent which is a totally different situation from a submission being refused publication and then circulated within the Team, or parts of it. I suspect it is in the mails. Dynamite when it is found of course.
You are 100+ hit wonder and if you manage to contribute to the development of 100,000 students, that is a life worth living even without all the other accomplishments. Scholarship is a virtue in its own right. Speaking of right, it seems you were.

March 30, 2013 11:53 am

Wonderful post, Willis, and I certainly will quote “I must admit that I waxed nuclear and unleashed the awesome power that comes from splitting the infinitive” and other gems. But I think what did it was “No Affiliation” below your name. Even James Lovelock, who became a Fellow of the Royal Society after inventing the electron capture detector, found that he could not publish from “13 Acacia Avenue” and got an adjunct job at some local college just so as to not end up in the trash. It was ever so, and universities are medieval institutions at heart: guilds forever trying to corner the knowledge market.

Matthew R Marler
March 30, 2013 11:54 am

1. if at first you don’t succeed, try, try again.
2. notarize before submission, keep paper copy.
3. ask a few dear friends, whom you acknowledge in the paper, to review and make suggestions. That way you have witnesses — and co-authors in any letters you may submit on your behalf.
Lots of examples of this sort of thing have been reported in all fields throughout the decades that I have been reading Science.

Silver Ralph
March 30, 2013 11:59 am

Athelstan. says: March 30, 2013 at 9:12 am
Albert Einstein, he has a lot of good ideas and didn't mind sharing at all, he believed all of his very considerable insight and ideas belonged to mankind.
____________________________
Sorry, but I don’t buy this Free Source Code B.S.
I need a roof over my head, some food in my belly, and some clothes for the kids to wear. I don't get any of that if some plagiarist thief comes along, steals my material and sells it for a semi-fortune under his name. And even if I (or the good-hearted McIntyres and Eschenbachs of this world) had a good day job for support, would it not be better if such people were paid for their work, and could devote all of their time to these problems?
Plagiarism is no different to patent fraud. A company expects to have 25 years of competition-free patent security, in order to justify the expenditure of research and development. And violators (especially in China and the Far East) should be tried and punished for stealing their ideas and products. Same goes for science or any other authorship (or political speeches, come to that).
http://www.telegraph.co.uk/news/worldnews/barackobama/2607505/Joe-Biden-plagiarised-Neil-Kinnock-speech.html
http://www.cbsnews.com/2100-502323_162-3850012.html
Scientific plagiarism takes food from the mouths of a good scientist’s children, and threatens to put them out of house and home, to live in the gutter. It is theft, pure and simple – it is exactly the same as if they came to your house, loaded up all your belongings, and drove away. Such people should be locked up, and the key thrown away, before all research and development grinds to a halt.
.

Puppet_Master_Blaster_Master
March 30, 2013 12:03 pm

Excellent example of why the money-train plug needs to be pulled on the AGU, among many others. Basically the AGU has become an environmental-religious for-profit lobbing organization.

Skiphil
March 30, 2013 12:05 pm

There may be a good chance that Mann himself was a reviewer on Eschenbach’s GRL submissions?? If not Mann, one of his pals….. This matter needs a bright spotlight on the journal….

Solomon Green
March 30, 2013 12:06 pm

As I see it Mr. Eschenbach has very publicly accused Dr. Mann of plagiarism. He has also accused Geophysical Research Letters of aiding and abetting that plagiarism. Because his accusations are so serious, if they are unfounded he should expect defamation suits from either or both Dr. Mann and Geophysical Research Letters. There is no need for Mr. Eschenbach to take any further action. The suits will not come.

March 30, 2013 12:16 pm

David L. Hagen says: March 30, 2013 at 11:23 am
It appears your AMO model vs Mann's AMO model vs AMO data shows another example of Mann "hiding the decline".
Mann's AMO reconstruction was either a precursor to the Yamal hockey stick, or meant to give it a 'fillip' 🙂
Either way, man (Mann) lost touch with reality. Mann's graph was off the scale; now it is shown in its full glory.
Data from http://climexp.knmi.nl/data/iamo_manna.txt

UC
March 30, 2013 12:17 pm

Record-breaking properties of time series after the smoothing process are also interesting: http://www.youtube.com/watch?v=pCgziFNo5Gk

Berényi Péter
March 30, 2013 12:19 pm

Willis, you may consider publishing stuff under the GNU Free Documentation License
“this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others”

UC
March 30, 2013 12:21 pm

” Steven Mosher says:
Perhaps someone with the password for Climategate 3.0 should search for references to Eschenbach ?? ;)”
And Butterworth. Something happened between '99 and '04.

March 30, 2013 12:22 pm

Speaking of catching and punishing errors, Thomas Jefferson had a pertinent quote which I have always adopted in dealing with certain environmentalists and others who engage in utterly unforgivably bad science: "I think it is in our interest to punish the first insult; because an insult unpunished is the parent of many others." Call 'em out over the first and every error and hit them hard. If you publicly make them look like utter idiots, they are loath to go down that path twice.

Steve Garcia
March 30, 2013 12:27 pm

Two points:

After the usual long delays, they said I was being too hard on poor Michael Mann, so they wouldn't even consider it … and perhaps they were right, although it seemed pretty vanilla to me. . .
…So, I pulled out everything but the direct citations to Mann's paper and resubmitted it basically in the form appended below.

WTF? “Being too hard on… Michael Mann?” Where? Even with direct citations, what in this was addressing anything more than procedures? The fact that Mann’s choice came out sucking hind teat?
Second point:
I truly find it hard to credit the world of statisticians if Willis had to show them the best way to treat end conditions in a data set. In all their history no one had shown how to handle this? EVERY dataset has end conditions (even flat line curves); you would think this problem would have been dealt with decades ago.
Willis and Steve M (if you know) – No one had come up with a solution before?
Steve Garcia

Luther Wu
March 30, 2013 12:27 pm

JFD says:
March 30, 2013 at 11:19 am
Willis, I use some of your ideas sometimes.
_______________
Me too… especially, his “retire early and often” idea. Actually, we may have to share credit on that one.

McComber Boy
March 30, 2013 12:28 pm

Puppet M B M said, "for-profit lobbing organization". It's an obvious typo, but I can't figure out if you meant lobbying or robbing. Either one would do.

Steve Garcia
March 30, 2013 12:35 pm

From Steve M’s post http://climateaudit.org/2007/06/09/mannomatic-smoothing-and-pinned-end-points/, Steve says:
“Accordingly with a symmetric filter (as these things tend to be), everything cancels out except the final value. The Mannomatic pins the series on the end-point exactly the same as Emanuel's 'incorrect' smoothing.
Just for fun, I applied the Mannomatic smooth with the short (1,4,6,4,1) filter to a hurricane series – category 3 days – and obtained the following result, where I've emphasized the closing value to show that the Mannomatic smooth pins at exactly the closing value. It's crazy, but it's so.”
Steve, since you said “everything cancels out except the final value” – Willis’ take on it, too – I wonder why you then went on to say, “It’s crazy, but it’s so.”
It seems obvious that mirroring on BOTH the vertical and horizontal should thus have each value cancel out except the one NOT mirrored – the last one. When I read in Willis’ submission about the minimum roughness method, I thought, “WTF? Doing both cancels all of them out – what is the point?”
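A quick numerical check bears this out. Here is a minimal sketch in Python/numpy (my own illustration, using the short (1,4,6,4,1) filter quoted above and my reading of the reflect-both-ways padding; not anyone's published code):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=50).cumsum()          # any old series will do
    w = np.array([1, 4, 6, 4, 1]) / 16.0      # short symmetric smoother
    m = 2                                     # half-width of the filter

    # 'minimum roughness' padding: reflect the series beyond the last
    # point in both time and value, pad[k] = 2*x[-1] - x[-1-k]
    pad = 2 * x[-1] - x[-2:-2 - m:-1]
    xp = np.concatenate([x, pad])

    # apply the filter centered on the last real data point
    end_smooth = np.dot(w, xp[-(2 * m + 1):])
    print(end_smooth - x[-1])                 # ~0: pinned to the raw end value

The reflected terms cancel pairwise against the trailing data, so any symmetric, normalized filter returns the end point untouched.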
Why others would ever consider this a legitimate smoothing method, I cannot imagine. Are they deef? This is so obviously wrong a 4th grader should be able to spot it.
Who in the hell let Mann get away with this? Was one of his reviewers Phil Jones, the man who can't do Microsoft Excel?
Steve Garcia

GlynnMhor
March 30, 2013 12:48 pm

I understood what you were trying to demonstrate, Willis, but the operators you use are rather limited and imprecise in terms of frequency content.
The seismic processing industry uses low pass, high pass and band pass filters all the time and uses them to control the frequency content of the data being examined. We use other operators as well, for rephasing, deconvolution, phase matching, and other more complex purposes, and due to the computational loads involved, operate almost always in the Fourier domain.
And we always apply the operator to the entire dataset without truncation, your ‘minimum assumptions’ approach. Otherwise the data length would be shortened by half the operator length every time another operation was performed, though that only makes any sense in the time domain, and not in Fourier space.
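That shortening is easy to see in a toy numpy sketch (my illustration, nothing to do with any particular seismic package):

    import numpy as np

    x = np.arange(100.0)
    h = np.ones(21) / 21.0
    y = np.convolve(x, h, mode='valid')   # no padding assumptions at all
    print(len(x), len(y))                 # 100 -> 80: half the operator
                                          # length is lost at each end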

rogerknights
March 30, 2013 12:51 pm

I strongly second the notion that Willis should pursue this formally, at first by asking the journal if it forwarded his paper to Mann, and by asking Mann if he had seen it – or at least been told of its gist. (Perhaps send a registered letter in addition to an e-mail.) That first turn of the screw wouldn't demand a lot of Willis's time – and he could get help on the procedural steps and language from others who are more expert at that sort of request.
The next step would be a FOIA request to the school where Mann taught at the time (UVA, I think) asking specifically for any relevant e-mails. UVA couldn't characterize that as a fishing expedition. Again, this wouldn't necessitate getting involved in the legal system – and it's something that others could help Willis prepare.
I wonder if these e-mails are what Mann doesn’t want revealed from UVA, and why he is fighting so hard to prevent their release.

DirkH
March 30, 2013 12:52 pm

Hoser says:
March 30, 2013 at 8:21 am
“Data truncation and testing for later fit is complete BS. Just because you get a decent fit for that particular data set doesn't mean you have found a generally superior method, and it doesn't even mean you'll get a better result when you use the method on your full data set. The point being the truncated data could be replaced with any other (continuous) data set, and if you did that, would the 'better' method still be the best?
Instead, your best bet is to use Fourier smoothing. Transform to frequencies, remove the higher frequencies, and then transform back to spatial.”
You can't escape this way from the border conditions that afflict averages and time-domain smoothing operators, because when you do the Fourier transform it will suffer from the Gibbs phenomenon. You will have to apply a Hamming or some other window function before transforming if you want to avoid artefacts.
So you can't use the information right at the end of your time series, same as with time-domain filtering. (Smoothing operators are time-domain filters.)
https://en.wikipedia.org/wiki/Gibbs_phenomenon
https://en.wikipedia.org/wiki/Hamming_window#Hamming_window
No free lunch by going to the frequency domain.
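To make that concrete, a minimal numpy sketch (illustrative only; the tone period of 19.7 samples is an arbitrary choice that guarantees the block ends mid-cycle):

    import numpy as np

    n = 256
    t = np.arange(n)
    x = np.sin(2 * np.pi * t / 19.7)              # ends mid-cycle
    raw = np.abs(np.fft.rfft(x))                  # leaks across many bins
    win = np.abs(np.fft.rfft(x * np.hamming(n)))  # Hamming window first

    # leaked energy far from the tone is much lower after windowing
    print(raw[60:].max(), win[60:].max())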

March 30, 2013 1:01 pm

I once had a submitted paper turned down by a journal that criticized my conclusions (teen addicts recover better in centers with adults than in all-teen centers). But the interesting thing is that one of the reviewers' names was "accidentally" left in the returned comments – it happened to be a man whose work I had argued against in my paper.
No matter; my paper ended up being published in a better journal. However, I thought it highly unethical that the journal allowed a man whose work was criticized in my paper to review my work. Science seems to continue on the downhill ethical trend.
Thanks, Willis. I'm glad you are getting exposure here.
Dave says:

I wonder, do you think the Mann consulted Gleick about the ethics of stealing other people's work and ideas?

No Dave, it was Gleick who consulted Mann.

Steve from Rockwood
March 30, 2013 1:03 pm

I wonder what the ratio of working stiffs to the publish-or-perish crowd (university, government organizations) is at JGR. I bet it's pretty low. I can imagine a "we have to publish – they don't" attitude at journals that are typically run by the publish-or-perish crowd.

March 30, 2013 1:13 pm

Another reason I won't call him a recipient of the PhD degree. He has forfeited that in oh so many ways. Mr. Mann needs a job. I recommend janitorial services at PSU.

Roy
March 30, 2013 1:19 pm

If Michael Mann just happened to use the method proposed by Willis Eschenbach without attributing it to Willis, then that would be a case of bad manners. However, if the method itself was the main subject of Mann's article, then this case of apparent borrowing was either simply one of roughly simultaneous discovery or one of outrageous plagiarism.
Roughly simultaneous discovery is not uncommon in science and can sometimes lead to disputes about priority, two well-known cases being the invention of calculus by Newton and Leibniz, and the discovery of evolution by means of natural selection by Darwin and Wallace. The former dispute was quite a bitter one, but one involving associates of the two great men more than the discoverers themselves. In the case of Darwin and Wallace the situation was amicably resolved before any dispute could develop, though perhaps the way in which credit was apportioned was slightly unfair to Wallace.
From what Willis wrote it does not seem that Mann independently hit on the same idea but, to be scrupulously fair, shouldn't Mann be offered the opportunity to explain in this blog (or anywhere else if he would prefer it) where exactly he got the idea from?

Mpaul
March 30, 2013 1:24 pm

Willis, you should consider it an honor to be plagiarized by a Nobel Prize winner.

scf
March 30, 2013 1:34 pm

I've had a similar experience with scientific journals, being outside the academic establishment and having submitted papers. Journals are scientific cliques, and submissions that come from outside sources are treated negatively. Just like in many fields, it's not what you know, it's who you know. If you don't know all the tricks, the mannerisms, the types of language, the precise structure that academics have constructed for themselves and expect in a submission, your paper will go nowhere, regardless of the content.

The Iceman Cometh
March 30, 2013 1:36 pm

Don Easterbrook says: March 30, 2013 at 9:02 am “I spent huge amounts of time over a two-year period, had all the papers peer reviewed by world experts, got preliminary approval by GSA to proceed, and submitted the final draft ready for publication. At that point, the GSA editor informed me that because the papers did not support the 'consensus' they would not publish it”
Some years ago I came across an American Geophysical Union paper that had some appalling flaws in it. I drew the editor's attention to the problem, and received a very curt brushoff. I then found the editor had been a co-worker of the author of the paper, and, worse still, "The editor has complete responsibility and authority to accept a submitted article for publication or to reject it" – there was no requirement for him to have it reviewed, and indeed it had been accepted for publication within days of submission.
If we do not speak up about the corruption of the process of scientific publication, science will lose and we will all be the worse for our silence.

Sean
March 30, 2013 1:44 pm

Willis, it is clear that you have evidence that Mann committed a worse offense with his second paper – he plagiarized and stole from another paper, and he failed to give credit.
Among other things this is grounds for the journal to withdraw Mann’s second paper for plagiarism. As for Mann’s university – there are codes of conduct for academic fraud like this, I am sure that he should be reprimanded at minimum, terminated for cause at max.
You should file complaints with both his university and with the journal.

Paul Vaughan
March 30, 2013 1:47 pm

Something I noticed a few weeks ago and found time to summarize yesterday:
multidecadal heliosphere structure, solar cycle deceleration, & terrestrial climate
Superposed is figure 5 (p.198) from section 8 (pp.196-198) of:
Obridko, V.N.; & Shelting, B.D. (1999). Structure of the heliospheric current sheet derived for the interval 1915-1996. Solar Physics 184, 187-200.
http://helios.izmiran.troitsk.ru/hellab/Obridko/189.pdf
“[…] quasi-periodic oscillations […] The convergence region of the field lines moves up and down with the same period. […] results in secular variations of the entire structure of the heliosphere.”
Compare with Figure 4:
Wyatt, M.G.; Kravtsov, S.; & Tsonis, A.A. (2011). Atlantic Multidecadal Oscillation and Northern Hemisphere's climate variability. Climate Dynamics.

John Tillman
March 30, 2013 2:05 pm

Emailed link to this story to National Review for use as ammo in Mann-Steyn case.

Snotrocket
March 30, 2013 2:19 pm

Great post, Willis! Up until now I had always thought that popcorn futures were over-priced and, most certainly, over-subscribed. But now, I think they are a good punt.
It is obvious that CG1 and CG2 (plus the whitewash inquiries) have had no effect in denting the hubris that is the Green Reich. Well, now, if Mann doesn't come after you with a suit, he will surely demonstrate that which he is: a WUSS (as we say in the UK) of the first order. Not to mention, as a quote I found from back in the '30s (which could have been about Mann): "…a willful, obstinate, unsavory, obnoxious, pusillanimous, pestilential, pernicious, and perversable liar". Yep. I think that covers it…

BarryW
March 30, 2013 2:27 pm

I have a serious problem with centered smoothing for end conditions. You are trying to predict the result based on information that you don't have for the end points. Creating smoothed points where you know both the a priori and a posteriori data doesn't tell you how to predict smoothed values where you only know the previous time series values. I've wondered about this for a while, but have been too lazy to look at it. The question is: given the previous time series values, can I predict what the next average value would be?
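To illustrate the distinction, a toy sketch in Python/numpy (my own example, not anyone's published method): the centered average needs future values, while a trailing average of the same length is computable in real time but lags.

    import numpy as np

    x = np.random.default_rng(1).normal(size=300).cumsum()
    w = np.ones(21) / 21.0

    centered = np.convolve(x, w, mode='same')           # needs 10 future points
    trailing = np.convolve(x, w, mode='full')[:len(x)]  # past values only
    # (the first 20 'trailing' points are only partial sums)
    # 'trailing' is computable in real time but runs about 10 samples
    # behind 'centered' -- which is the whole end-point problem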

Bart
March 30, 2013 2:31 pm

DirkH says:
March 30, 2013 at 12:52 pm
“No free lunch by going to the frequency domain.”
Definitely not. The advocated method is a generally inferior means of low pass filtering, as doing it in the digital domain means you are not actually eliminating the entire frequency band you are trying to take out, just the components at those discrete frequencies. And, if you haven’t properly zero-padded the data, you are going to get aliasing from the circular convolution with the effective response.
A far superior method is to use the power spectral density as a means of identifying a model, then applying an optimal filter algorithm to determine the behavior of the major components of that model.
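The circular-convolution trap is easy to demonstrate. A minimal numpy sketch (the 3-point kernel is just for illustration):

    import numpy as np

    x = np.arange(8.0)                     # a short 'time series'
    h = np.array([0.25, 0.5, 0.25])        # toy smoothing kernel

    lin = np.convolve(x, h)                # true linear convolution
    # same-length FFTs give CIRCULAR convolution: the end wraps onto the start
    circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))).real
    # zero-padding to N = len(x) + len(h) - 1 recovers the linear result
    N = len(x) + len(h) - 1
    padded = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

    print(lin[0], circ[0], padded[0])      # 0.0 vs 5.0 vs 0.0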

rogerknights
March 30, 2013 2:43 pm

This is an opportunity to finally “nail” that slippery charlatan, who has slithered out of other tight spots. Don’t let this opportunity go to waste. I urge those with experience in filing complaints to communicate with Willis by sending him drafts of letters he could send and FOIAs he could file, and by offering him assistance in pursuing the matter.
BTW, if Mann gets nailed, this would be a help to NRO, as it would reduce his legal presumption of credibility.

Jack
March 30, 2013 2:46 pm

With the advantage of hindsight there is no scientist that I am aware of who has been remembered for being wrong. But there are times when a man's name becomes an historical artifact: think Benedict Arnold, think Quisling. I am sure there are others. Names that come to symbolize a pejorative noun rather than an honored individual.
What will Mann’s be, I wonder? It seems to me that his strategy is to delay the inevitable disgrace until he retires.

clipe
March 30, 2013 3:00 pm

Smoothing is a perfectly good word as a noun, adjective, verb (v.tr, v.intr) and, reaching here, an adverb.
http://www.ecowho.com/foia.php?file=4578.txt&search=smoothing+revisited

March 30, 2013 3:01 pm

Paul Linsay says:
March 30, 2013 at 10:31 am
I always grind my teeth when I look at a climate paper because of the smoothing. It's not the algorithm used, it's the very fact that it's done at all that is upsetting. The data should be allowed to "speak for itself". Smoothing imposes a model on the reader that may be completely invalid. Where is it written that all the short-period fluctuations don't have any information in them? Smoothing creates false impressions of trends where none may exist. The correct rule for smoothing is DON'T.

Agree, although I don't know why you say 'the reader'. The model is imposed on the data. Specifically, the model is that there is a forcing signal (from GHGs/CO2) plus natural-variability noise, and smoothing removes the natural-variability noise to expose the forcing signal. Complete equine manure. Putting the proverbial cart before the horse. It assumes the data supports the (forcing) model, when the first question should be: does the data support the model/theory?

Joseph Bastardi
March 30, 2013 3:01 pm

Can I say one thing as a PSU grad in meteo 1978
HELLLLLLLLLLLLLLLLLLLLLLLPPPPPPPPPPPPPPP
thank you for letting me get that out

george e. smith
March 30, 2013 3:05 pm

Well Willis, that is quite a story. I can’t say that I fully understand all the ramifications of your method, but your presentation is very readable.
I am personally of the opinion, that the very best representation of experimental data, is the actual raw data itself. Statistication, can only remove information, not add it.
I also would make the comment, that ….Hoser….'s proposition of using Fourier transform filtering, is one that I find highly meritorious. If you are going to remove information, what better way is there to know exactly what it is you are removing?
Fourier transform filtering is widely used in image processing, and there you do like to know what you are throwing away.
As for your experience as regards plagiarism, your example is quite shocking to say the least, and one wonders where on the Richter scale, your eruption ended up.
I was once invited to add my name, as a co-inventor on a patent application in the process of being filed by a person who shall remain nameless, and who happened to be leader of the project; in effect my supervisor.
I declined, saying the last thing I would want is to have my name on a patent, that was somebody else’s invention. So he filed for the patent (which issued) with him as sole inventor.
My company official lab notebook, maintained strictly for IP documentation, contained a complete and full description of the invention, dated at least five years before we decided to finally design a product based on the idea, at which point I was assigned to his group to work on the project. I still have one of the two full production-ready prototypes of the product, which was then once again killed, and never did see the light of day. The company eventually sold off the associated business.
Virtually all of my fellow employees were fully cognisant of the fact that I had documented it years earlier. The patent didn't bother me. The loss of a useful and advanced product did.
So I fully understand your ire at being so Mannhandled, Willis. Some people just have no shame at all.

Severian
March 30, 2013 3:13 pm

WE, you've got a good approach in that you want the good science to be out there but lack the overweening egotism of many – IMHO a sign of a mature person, but hard to do. I faced similar issues in engineering as a young, just-out-of-school kid. I slaved over an analysis of the ballistic models of a system, made some huge improvements in accuracy with minimal mods to the code, and watched as it got called the Joe Blow algorithm, Joe Blow being my boss. I was similarly outraged, and in private I'm sure my vocabulary matched yours. After a while I realized that I could accomplish a lot if I injected ideas into other people and supported them when they pushed them, and if I didn't care whether I got credit. It's also a way to avoid blame if it craters! I managed to get a lot done that way, and eventually people figured out I was a sharp guy and a "team player" (I really was not, but if it made them happy to think so, fine), and I was pretty successful in my career. After I grew up a little I realized the important thing to me was: does the system work well? I got my ego strokes out of that instead of kudos.
The fact that the climate “science” community is that insular and averse to the facts if they disagree with the consensus is the real problem. If this was some backwater theoretical physics realm it wouldn’t matter as much as it does when sloppy science is being Lysenkoized to rob people of wealth, health, and life.

James Fosser
March 30, 2013 3:18 pm

Just how many persons do not publish work that would advance science? I never publish, because of an incident several years ago. I was working with five other students on a simple project to examine mutations in a gene associated with Marfan Syndrome (I had never heard of it, and the course was an elective for my degree). The short course ran over one semester of three months, and I did all the work on my own, without any liaison with the rest of the "team", in my usual fashion. With no modesty intended, I stumbled across a simple diagnostic way to detect mutations in any gene. (For the course, the university gave the supervisor $300 per student for materials, despite the cost of the course per student being around $10,000.)

After the course (I got a lousy mark) I threw my work into a bottom drawer, went on to other matters, and forgot it. A year or two later a friend who knew that I had found an easy way to detect gene mutations said that she had seen something very similar in a People's Republic of China scientific journal. I looked up the paper and lo and behold! The main author was one of those fellow students on that previous course, and almost every single word in the paper, plus the methods and materials, was lifted from my Marfan Syndrome assignment hand-in! But not a mention of me!

I was not angry, because I realised that perhaps my low mark was because I had also questioned the supervisor (whose life's work was the Marfan Syndrome), suggesting that she might be on the wrong track over other matters relating to her research (I believe that sensibilities and science are spelt differently). Anyhow, as that Chinese paper was peer reviewed and the work it contained was considered worthy of being published, I was happy. I also realised that that plagiarising ex-student (who was by then a Doctor, and whose name appeared in other papers) inhabited a world to which I did not wish to belong.

Consequently, I have now constructed a home laboratory, work completely on my own, and place all my discoveries into that same bottom drawer (several of them would revolutionise medical science). I have also learned never to trust my fellow humans.

DR
March 30, 2013 3:20 pm

March 30, 2013 3:21 pm

Willis:
All of us who have been involved within academic circles of researchers who appear to be governed by a culture of "publish or perish" have had ideas ripped off by supposed colleagues in the pursuit of science. The professional societies use the publication of research to demonstrate their involvement in promoting science within their constituency of subscribers for their journals, and to maintain their control of the science through their editors, committees, boards of trustees, and crony reviewers. They sponsor technical meetings for the researchers to gather as a scientific community to discuss the agenda of ideas that they want to promote. In this cultural environment the free discussion of ideas is not free but very guarded, because researchers with new ideas might find that their ideas appear in another researcher's next proposal for support. When have you ever seen a reference in a paper to an idea informally suggested by someone else? Sharing ideas is not very open; rather, these forums are used to criticize the research work of others against the backdrop of one's own research endeavors. Why would anyone want to do this? It is perceived to be the only game in town to gain recognition as a scientist.
You are a noble exception. You have openly shared your ideas on this and other web sites about climate science. I applaud your philosophy of putting a concept out there for people to debate and to learn from. Instead of limiting your audience to readers of a particular professional society's journal, everyone can get access without joining any society and paying the very high prices for subscriptions or the costs to climb a pay wall.
The claimed absence of a peer review process here is also fallacious. An idea published in electronic print is open to everyone to question, to offer valid criticisms or comments on, and to transmit quickly to others who may be interested, and the result is a barometer that can be used to measure the writer's credibility and diligence. In the current scientific publishing environment surrounding the issue of climate, waiting for a journal to publish the results of research work can take months or years before the research appears in print. There is no delay on this BLOG.
As you are also painfully aware, publishing ideas on a BLOG leads to some comments that are useless, misleading, and name-calling. I appreciate that you try to answer, clarify, amend, or apologize in response to critical comments. On this BLOG, many of the authors of comments are responsible people with sufficient science knowledge to offer comments which are worth reading and contribute to a better understanding, even when the comment is critical of your idea. Thank you for adopting a proactive posture about scientific dialog, sharing ideas rather than maintaining ownership through publication. What could be nobler? "Keep on truckin'".

March 30, 2013 3:24 pm

H –
I'd say the common man to whom you refer is quite capable of seeing the fallacy in AGW if he is just given two simple pieces of information: (1) temps have declined overall since the 1930s (i.e., for the past 80 years, not even just the last 16); and (2) the infinitesimal-ness of man's contribution to an infinitesimal (that is, if even identifiable) factor in climate change. Q.E.D.
The CRL (criminal reactionary left) news media should be COERCED (to use the NYT term for what should be done to climate skeptics – tit for tat! a dose of their own medicine!) to reveal these facts to the public.

rogerknights
March 30, 2013 3:31 pm

PPS: The third step would be to file a formal complaint with the AGU about the behavior of its editors and peer reviewers. I don’t see how a Guilty verdict could be avoided. That would make Mann the scholarly equivalent of a “convicted felon.” More important, it would undeniably expose the “Teamwork” that goes on behind the scenes in climatology, casting all its procedures into doubt.

george e. smith
March 30, 2013 3:33 pm

“””””…..Bart says:
March 30, 2013 at 2:31 pm
DirkH says:
March 30, 2013 at 12:52 pm
ā€œNo free lunch by going to the frequency domain.ā€
Definitely not. The advocated method is a generally inferior means of low pass filtering, as doing it in the digital domain means you are not actually eliminating the entire frequency band you are trying to take out, just the components at those discrete frequencies. And, if you havenā€™t properly zero-padded the data, you are going to get aliasing from the circular convolution with the effective response…………”””””
I get the point you and DirkH raise. Frequency-domain band limiting would be a good filtering method, especially for eliminating aliasing due to improper sampling in the first place. The problem being that the inadequacy of the raw data means you can't get a correct Fourier transform from it to begin with.
I've never been a fan of the FFT, although it is efficient for those who have to use it, but I have always been suspicious of how you can trust a spectrum derived from an often very short list of samples.
One advantage of Fourier transform filtering in the optical imaging realm is that the optical Fourier transform is analog, not digital, so you do get a more accurate spectrum (if your optics are good enough).
I guess Fourier transform processing is useful if you are simply looking for the presence of certain components, but it has the traps you both raise.
Well, that reinforces my belief that the raw data is the most accurate information.

ferdberple
March 30, 2013 3:44 pm

Pamela Gray says:
March 30, 2013 at 11:16 am
Ferdberple, how does copyright mix with plagiarism? I think we are talking two different things here.
=======
My post was in response to those saying there was recourse through copyright; the copyright FAQ says otherwise.
As to plagiarism, I do think that is an avenue that could be pursued. All that is required is a letter of complaint. However, unless one can get hold of corroborative evidence… which could be why there is such a battle to withhold emails.

March 30, 2013 3:45 pm

Willis, your experience is par for the course for skeptics in this debate trying to publish in the scientific literature. While posting to the web is useful, it will simply be used as an excuse to dismiss your work as "not scientific" or "irrelevant" (do not underestimate this).
I recommend the following;
1. Always publish your scientific work on an open access site like arxiv.org, even while you are trying to get it into a journal, since certain scientific indexes will still link it to any work you are citing and researchers are more likely to take it seriously (every skeptic having trouble getting published needs to do this). I highly encourage you to do so with this paper regardless of its age.
2. Always seek out alternative journals if rejected by your first choice, such as the open review ones Pielke Sr. suggested. By controlling key editorial boards alarmists have been able to keep most skeptics from publishing in the most prominent journals.
They want you to be discouraged and not try to publish again. If they have achieved this, they have won. While I am sure everyone appreciates your time devoted to posting here, a reduction in these posts in exchange for a published paper every 4-6 months would not only be welcome but would likely have a greater scientific impact.
Some inspiration;
http://www.americanthinker.com/2009/12/a_climatology_conspiracy.html
http://www.cato.org/sites/cato.org/files/serials/files/regulation/2007/7/v30n2-1.pdf
http://scienceandpublicpolicy.org/images/stories/papers/reprint/Circling_the_Bandwagons_Correcting_the_IPCC.pdf

ferdberple
March 30, 2013 3:51 pm

Severian says:
March 30, 2013 at 3:13 pm
I slaved over an analysis of the ballistic models of a system, made some huge improvements in accuracy with minimal mods to the code, and watched as it got called the Joe Blow algorithm.
============
The solution is to feed Joe Blow a couple of good ones, let him take the credit, then slip him a real zinger of a bonehead idea. Done right, he gets the door and you get his job. From then on no one will mess with you.

DirkH
March 30, 2013 3:53 pm

george e. smith says:
March 30, 2013 at 3:05 pm
“I also would make the comment, that ….Hoser….'s proposition of using Fourier transform filtering, is one that I find highly meritorious. If you are going to remove information, what better way is there to know exactly what it is you are removing?
Fourier transform filtering is widely used in image processing, and there you do like to know what you are throwing away.”
That can be done in the time domain just as easily, by subtracting the averaged signal from the original signal.
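In sketch form (numpy, illustrative only):

    import numpy as np

    x = np.random.default_rng(3).normal(size=200).cumsum()
    w = np.ones(11) / 11.0
    low = np.convolve(x, w, mode='same')   # the smoothed (kept) part
    removed = x - low                      # exactly what the smoothing discarded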

John Archer
March 30, 2013 4:00 pm

Willis,
You’re suffering from a severe case of spite deficiency. The cure is to eat some greens. A Mann salad would be ideal.
But you'll need a proper smoothing technique to reduce the gristly lumps. I suggest you put the lot through a wood chipper. Then when you've done that, feed the mush to a sty full of pigs – let them do the eating so you don't have to chomp all that shit yourself.
See? Everything’s simple when you look at it logically.
I hope you get that slimy bastard.

JP
March 30, 2013 4:03 pm

Hey Willis, nice article, smart method. Congrats. I sympathize with how you feel, and I am really sorry about it. It happened to me in my field (unrelated to climate science) a number of times that reviewers rejected my work and then published it themselves. It sucks. If you can prove you wrote that stuff up and submitted it, and can show that it is plausible that Mann saw it, you can do all of us a great favor and sue the bejesus out of him.
Good luck!
JP

Joe
March 30, 2013 4:17 pm

Roy says:
March 30, 2013 at 1:19 pm
[…] to be scrupulously fair, shouldn't Mann be offered the opportunity to explain in this blog (or anywhere else if he would prefer it) where exactly he got the idea from?
——————————————————————————————————————-
I would have thought the offer of that would be implicit here. Unlike on Warmist blogs, where anything sceptical is censored instantly, I’m sure Anthony would be more than happy to allow a post by Mr Mann on the subject if he wished to submit one.

Phil.
March 30, 2013 4:36 pm

Jack says:
March 30, 2013 at 2:46 pm
With the advantage of hindsight there is no scientist that I am aware of who has been remembered for being wrong. But there are times when a man's name becomes an historical artifact: think Benedict Arnold, think Quisling. I am sure there are others. Names that come to symbolize a pejorative noun rather than an honored individual.

Lamarck comes to mind, and more notoriously Lysenko; then there's 'cold fusion', Pons and Fleischmann. Ussher, though not a scientist, is only known for his erroneous calculation of the age of the Earth.

Admin
March 30, 2013 4:37 pm

I've noticed that alarmist climate heroes have to be "infallible". Alarmists will not admit a single mistake, because doubting their message is impermissible.

Nick Stokes
March 30, 2013 4:38 pm

Willis,
I think Mann’s method is useful in its place, but is probably not original. Statisticians have been looking at these issues for a long time. The idea of padding with some sort of reflection is a device for being able to then use a symmetric smoother, which avoids getting a lagged estimate. And I think it goes back at least to DeForest in 1884.
Choice of method does depend on what you are looking for. You used as your aim to get the best estimate of the final point. Mann doesn’t help you there, because the final point is unchanged.
But often you want an estimate of final trend. Minimum slope is useless for that, because it tries to set the final trend to zero. Minimum norm is worse.
The merit of Mann’s method is that, unlike the other two, it leaves a straight line unchanged. That’s the minimum requirement for getting the trend right.
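That straight-line property is easy to verify. A minimal sketch in Python/numpy (the (1,4,6,4,1) filter and the reflect-in-time-and-value padding are my reading of the method, not code from any paper):

    import numpy as np

    x = 2.0 + 0.3 * np.arange(30)                  # a straight line
    w = np.array([1, 4, 6, 4, 1]) / 16.0
    m = 2

    # 'minimum roughness' padding at both ends: reflect about the end
    # points in both time and value
    left = 2 * x[0] - x[m:0:-1]
    right = 2 * x[-1] - x[-2:-2 - m:-1]
    xp = np.concatenate([left, x, right])

    sm = np.convolve(xp, w, mode='valid')          # same length as x
    print(np.allclose(sm, x))                      # True: the line is untouched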

Gerald Machnee
March 30, 2013 4:45 pm

Re:
Roy says:
March 30, 2013 at 1:19 pm
**From what Willis wrote it does not seem that Mann independently hit on the same idea but, to be scrupulously fair, shouldn't Mann be offered the opportunity to explain in this blog (or anywhere else if he would prefer it) where exactly he got the idea from?**
We know where he got it from – he thought of it himself while watching the Super Bowl.

Jimbo
March 30, 2013 4:54 pm

Look, I know most of us here are not climate scientists, but what I want to know is this: Is Michael Mann a Climate Scientist?
[*NOTE Warmists insist on telling me that geologists are not climate scientists.]
I have looked long and hard, and all I get is somebody who studies wood, a physics & math guy, and a geologist. Heck, I get similar results for James Hansen the astronomer.

Alex Heyworth
March 30, 2013 5:00 pm

Willis, kudos to you for doing the honorable thing and staying clear of arguments with journals. As the saying goes, if you wrestle with a pig, you get covered in mud and the pig loves it.
Second, thanks for your wishes to all your readers. That was a lovely thought and a beautiful and succinct way to put it. Truly appreciated.

Jimbo
March 30, 2013 5:11 pm

If you really want to know about Michael Mann see below. Michael Mann has been a useful idiot who is about to be thrown under the bus. The Mann has been a tool and does not even know it.

"You see, this struggling student's career was transformed the moment Saltzman became his Ph.D adviser. Only after Saltzman applied his influence were Mann's lofty credentials 'rushed through.' Mann then turned himself into a makeshift tree ring counter, and overnight became the iconic figure in the IPCC Third Report (2001). The rest is history, as they say."
http://climaterealists.com/index.php?id=5700&linkbox=true&position=1

http://bishophill.squarespace.com/blog/2010/5/14/the-ascent-of-mann.html

Matt in Houston
March 30, 2013 5:23 pm

Mr. Mann is a charlatan and as far as I am concerned he has been caught once again with his filthy little fat hands in the proverbial cookie jar.
Nice work Willis, even if you feel like letting the rotten scum slide into the shadows. I know it is never very rewarding to try to prosecute slime for stealing IP.
Clearly Mr. Mann must have failed his numerical methods for scientists & engineers class…or perhaps he never took one.

BruceC
March 30, 2013 5:26 pm

Willis E
No, no, no. I have absolutely no interest in that. Pig-wrestling is a sport with absolutely no appeal to me. Mann's position in history is quite assured. There's no need to "nail" him; it's much better to laugh at him.
Old Confucius saying:
He who raughs rast, raughs the roudest!
🙂

david moon
March 30, 2013 5:28 pm

Re: various comments about Fourier/frequency domain analysis above:
Fourier analysis does not "assume" sinusoidal components. It will detect them if they are there. White noise will be a "flat" spectrum with no prominent components. As an EE I do this all the time when looking at noisy signals – set my oscilloscope to FFT and see what's happening in the frequency domain.
Yes- some kind of windowing is needed when transforming time-to-frequency or frequency-to-time to avoid artifacts (Gibbs phenomenon).
Gaussian averaging in the time domain – how is the Gaussian function truncated? It has a parameter alpha, or equivalently a standard deviation. Is this specified or standard in the climate change field? (Excuse my ignorance.)
Gaussian averaging in the time domain – the frequency response as a low-pass filter is not that great. If one were trying to reject signals at or above certain frequencies, there are many other filters that could be used. Methinks these guys could use some help from signal processing experts as well as statisticians.
These "moving average" type filters are Finite Impulse Response (FIR) filters and necessarily have a delay of half their length. Infinite Impulse Response (IIR) filters can be designed for a desired frequency response with much less delay.
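To make the delay point concrete, a sketch using scipy.signal (the filter orders and cut-offs here are arbitrary choices of mine):

    import numpy as np
    from scipy import signal

    fir = np.ones(11) / 11.0            # 11-point moving average (FIR)
    b, a = signal.butter(4, 0.067)      # a 4th-order IIR with a low cut-off

    w1, gd_fir = signal.group_delay((fir, [1.0]))
    w2, gd_iir = signal.group_delay((b, a))

    # in its passband the symmetric FIR delays everything by exactly
    # 5 samples; the IIR's delay varies with frequency, so measure it
    # before assuming it buys you less lag
    print(gd_fir[:3], gd_iir[:3])

Whether an IIR design really buys you less usable delay is exactly what gets debated further down the thread.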

Felflames
March 30, 2013 5:48 pm

Jack says:
March 30, 2013 at 2:46 pm
With the advantage of hindsight there is no scientist that I am aware of who has been remembered for being wrong. But there are times when a man's name becomes an historical artifact: think Benedict Arnold, think Quisling. I am sure there are others. Names that come to symbolize a pejorative noun rather than an honored individual.
What will Mann's be, I wonder? It seems to me that his strategy is to delay the inevitable disgrace until he retires.
The description you are looking for is already in use.
It is called Mannian Mathematics.

March 30, 2013 6:07 pm

Mails containing eschenbach
[1] “1022240460” “1071867706” “1077815565” “1080318686” “1089897080” “1093965453” “1173455973”
[8] “1175256080” “1175514499” “1175516878” “1175518086” “1175625460” “1176402767” “1176914054”
[15] “1176914058” “1176914115” “1176914126” “1176921225” “1177014768” “1177084894” “1177768674”
[22] “1177768680” “1177768685” “1177768691” “1177939146” “1179143741” “1179320358” “1179438052”
[29] “1179489629” “1179843250” “1188479890” “1191006119” “1228920976” “1228921629” “1228922050”
[36] “1232025011” “1242129707” “1242157611” “1242161854” “1250081982” “1250084021” “1250084210”
[43] “1252438429” “1252442034” “1252442659” “1252443088” “1252498172” “1254398790”
many repeats, nothing relevant to this discussion

BarryW
March 30, 2013 6:30 pm

Willis – Sorry, I didn't mean to offend. I have no problem with what you did (not that my opinion should matter to you). I didn't mean you the person; I meant a generic "you", such as someone who attempts to pad the end of a series and guesstimates the endpoint based on data that doesn't exist. Your minimum assumptions technique, for example, was similar to what I was trying to come up with in my own crude thinking on the matter, though my only thought was using trailing averages. I keep thinking along the lines of dealing with real-time data and estimating the data that will arrive in the future. Not really what you're attempting. My apologies for annoying you or any insult you may have felt.

March 30, 2013 6:39 pm

Butterworth has one or two mails from Mann.

Mark T
March 30, 2013 7:24 pm

david moon said:

Fourier analysis does not "assume" sinusoidal components.

To hell it doesn't. What exactly do you think a Fourier Transform (more particularly, an FFT) does? It correlates data against an orthonormal basis set of sinusoids.

It will detect them if they are there.

Indeed, the only thing an FFT will "detect" (it doesn't really "detect" anything) is a sinusoid, and if there are none there, it is a relatively useless tool.

White noise will be a "flat" spectrum with no prominent components.

White noise is stationary, just so you know, and it consists of a sum of sinusoids at all frequencies.

As an EE I do this all the time when looking at noisy signals – set my oscilloscope to FFT and see what's happening in the frequency domain.

As an EE, you should have learned the above points in your first systems analysis class. You should have also learned about stationarity if you moved on to deeper signal processing concepts.

Yes- some kind of windowing is needed when transforming time-to-frequency or frequency-to-time to avoid artifacts (Gibbs phenomenon).

Do you understand why this happens, i.e., why the Gibbs phenomenon results, and hence how a window reduces the effect? An FFT – which is what your scope does (spectrum analyzers are better at this, btw) – applied to a block of data "looks" like a repeated copy of that data in the time (and frequency) domain, i.e., the waveform is your data and it has a period equal to the length of the data. At the end of each block, into the next block, there is a discontinuity (end of data to start of data) which appears as a wideband signal and results in ringing in the time domain and spectral leakage (scalloping) in the frequency domain. Windowing reduces the size of the discontinuity and hence the magnitude of the effect.

Gaussian averaging in the time domain- the frequency response as a low pass filter is not that great. If one was trying to reject signals at or above certain frequencies, there are many other filters that could be used.

You and others keep harping on “frequencies.” I’m guessing none of you have ever dealt with non-stationary data. Hint: “frequency” does not mean much with non-stationary data, and thus the concept of “high-pass/low-pass” is equally meaningless.

Methinks these guys could use some help from signal processing experts as well as statisticians.

Indeed.

These "moving average" type filters are Finite Impulse Response (FIR) filters and necessarily have a delay of half their length.

This is only a certainty if they are linear phase, which is only guaranteed with a symmetric impulse response. Gaussian filters do meet this criterion, btw. Delay is an immaterial point w.r.t. this discussion, however.
Bart’s got it right, above, not that I’m surprised. But again, even an optimal model (w.r.t. any given criteria) will fail as soon as the statistics of the data change, which is what happens with non-stationary data.
Mark

Mark T
March 30, 2013 7:31 pm

I said:

At the end of each block, into the next block, there is a discontinuity (end of data to start of data) which appears as a wideband signal and results in ringing in the time domain and spectral leakage (scalloping) in the frequency domain.

I should note that this does not occur if the data consists only of sinusoids with an integer number of cycles. EEs know this, too, or should. This is why ADC manufacturers always pick “tones” that have frequencies that are integer multiples (er, divisors?) of the length of the FFT they are plotting – they get a nice clean line and it is easy to see the non-linearities of the sample-hold mechanism (or whatever the input circuitry looks like).
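A quick numpy check of that (sketch only):

    import numpy as np

    n = 256
    k = np.arange(n)
    S_int = np.abs(np.fft.rfft(np.sin(2 * np.pi * 8.0 * k / n)))   # 8 whole cycles
    S_frac = np.abs(np.fft.rfft(np.sin(2 * np.pi * 8.3 * k / n)))  # ends mid-cycle

    # S_int is one clean spike at bin 8; S_frac leaks across many bins
    print(S_int.argmax(), S_int[12:].max(), S_frac[12:].max())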
Mark

ursus augustus
March 30, 2013 7:44 pm

In response to Fred from Canuckistan re. puddles etc.: if a day-old puppy piddled a puddle, then Michael Mann's ethics would drown in it, IMHO.
I use binomial smoothing and taper it towards the ends of the data set, with the last average point being the mean of the last two data points. It gives good results, I think, but I have not gone back and calculated the error as you truncate the data back in time, so I guess it just looks like it gives good results. Let's face it, these are not precision numbers; they are about trends and the bigger picture. All this obsessing about whether a trend is established after 10, 12, 13, 15 or 20 years is all a bit cretinous really.
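For concreteness, here is one reading of that taper as a Python/numpy sketch (the exact shrink schedule at the ends is my assumption, not a quote of anyone's code):

    import numpy as np

    def tapered_binomial(x):
        # 5-point binomial (1,4,6,4,1)/16 in the interior, a 3-point
        # (1,2,1)/4 one step from each end, and a plain 2-point mean
        # at the very ends
        y = np.convolve(x, np.array([1, 4, 6, 4, 1]) / 16.0, mode='same')
        y[1] = (x[0] + 2 * x[1] + x[2]) / 4.0
        y[-2] = (x[-3] + 2 * x[-2] + x[-1]) / 4.0
        y[0] = (x[0] + x[1]) / 2.0
        y[-1] = (x[-2] + x[-1]) / 2.0
        return y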

Seth
March 30, 2013 7:45 pm

Solomon Green says:
March 30, 2013 at 12:06 pm
As I see it Mr. Eschenbach has very publicly accused Dr. Mann of plagiarism. He has also accused Geophysical Research Letters of aiding and abetting that plagiarism. Because his accusations are so serious, if they are unfounded he should expect defamation suits from either or both Dr. Mann and Geophysical Research Letters. There is no need for Mr. Eschenbach to take any further action. The suits will not come.

————————————————————————————-
I had exactly the same thought. Even though you are not making a formal claim to GRL, it reads like a public "accusation".
Plagiarism is one of the high crimes in academia, and I would gather that Mann would launch a legal attack, or at the very least a tweet, if this were without merit.
This seems like it may be a risky post. I would love to see if GRL passed it on to Mann; this would be a game changer and worth pursuing if you are confident this is the case. The way I read it, both you and Steve McIntyre were, at the least, finding fault with his math, so his revisit may have been inevitable.
This could escalate.

John Pepple
March 30, 2013 7:47 pm

Welcome to life at the bottom.

Greg Goodman
March 30, 2013 8:08 pm

David Moon says: “Gaussian averaging in the time domain- how is the Gaussian function truncated? It has a parameter alpha or equivalently standard deviation. Is this specified or standard in the climate change field? (excuse my ignorance)”
I don't think most of them would know a Gaussian bell if they sat on one!
"These 'moving average' type filters are Finite Impulse Response (FIR) filters and necessarily have a delay of half their length. Infinite Impulse Response (IIR) filters can be designed for a desired frequency response with much less delay."
Less delay? How much? Please expand. No filter can see into the future. IIR is all history, so there has to be phase delay. They also take forever to spin up. If you want any kind of accuracy you have to throw away a huge amount of the result, which will not have converged.
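The spin-up is easy to show. A sketch with scipy.signal (the filter choice is arbitrary):

    import numpy as np
    from scipy import signal

    b, a = signal.butter(4, 0.05)            # a 4th-order low-pass IIR
    y = signal.lfilter(b, a, np.ones(400))   # feed it a plain constant

    # the output crawls up toward 1.0; everything before it converges
    # is spin-up that has to be discarded
    print(int(np.argmax(y > 0.99)))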

March 30, 2013 8:55 pm

Thanks, Willis. Very instructive, in more than one way.

thelastdemocrat
March 30, 2013 9:11 pm

A prediction can never claim to be 'science.' It can be based on science, but the predicted future remains to be seen, and there is no certainty that predictions will bear out – since we have never been there, we have no surety of what will happen.
A bit before 65 million years ago, I am sure everyone was quite sure a meteorite would not appear to disrupt everything.
We seem to have been caught quite off-guard by the housing market collapse, even though many of us everyday people saw it coming.
I did when my mortgage guy told me he could get me 50% more house than I was asking for.
Just because a scientist uses scientific methods and evidence to make a prediction does not mean it will happen.
The essence of science is skepticism, and second to that the essence is making a prediction, declaring what would be observed if the prediction is correct, a priori, then making the observation to see if the prediction is true.
With no observed data to compare to the hypothesis, there is no hypothesis testing. Hence a prediction can never be science.

March 30, 2013 10:04 pm

Hey, it was just a trick, a clever trick.

March 30, 2013 10:07 pm

I understand Willis's reluctance to get his hands dirty – it's a pretty filthy pig he would be wrestling – but I would ask, at what point do we confront the liars, cheats and thieves and force them to expose themselves publicly for what they are? It's a dilemma, for sure, but one that will have to be faced sooner or later.
One might think that Willis would have an excellent defense – truth – with the opportunity to demonstrate all of the "pig's" and the others' lies, cheating and stealing publicly, plus a countersuit of his own against the "pig" and the journal that wouldn't publish Willis's rebuttal, for conspiracy to deny him his free speech rights. Are there any lawyers posting here on WUWT who can weigh in on this? I'm certainly no authority here.
All of this said – Willis is doing a tremendous service, and I will support whatever he decides to do here.

March 30, 2013 10:33 pm

I agree with the sentiment that it's better for your ideas to be thrown into the public arena to increase the public's scientific knowledge. But if scientists in any way betray the trust the people give them, they should be fired and banned for life from academia and government employ forever more. This is especially important in this case, which, if true, probably required multiple people to throw the idea to Dr. Mann. Yes, I see him as the "sacrificial goat": several guys who read this paper probably tossed the idea over to him simply to get him to do the work.
Perhaps FOIA requests should be made to determine if the emails about this can be located? If he is guilty of violating the trust that the people put in him as a scientist, he deserves to do the time. Like they say, don't do the crime if you can't do the time. FOIA these emails and see where they take you. I am willing to bet that where there is one instance of plagiarism there are many more. Just look out for that "plausible deniability" which these guys are so good at acquiring. I wouldn't be surprised if he told a graduate student to do the evil deed, and then took that work under his own name. Dr. Mann seems slimy enough to do something like that.

Keitho
Editor
March 30, 2013 11:12 pm

“As always, my best wishes for each of you ā€¦ and at this moment my best wish is that you follow your dream, you know the one I mean, the dream you keep putting off again and again. I wish you follow that dream because the night is coming and no one knows what time it really is ā€¦”
Thanks Willis. I am on it.

March 31, 2013 12:34 am

Technical question:
When smoothing within a time series, one assumes that the system behaves homogeneously (e.g. with a normal, or Gaussian, distribution). But at the edge of the dataset the smoothing turns into a prediction that the system will continue to behave as it did during the core period.
And there is no way to prove or disprove that hypothesis, because there is less data available with which to make the statistical test.
Can statistics be good enough to detect a step change as a system begins to pass from one stationary state to another? No; it is an impossible task without making some hypothesis that will most probably be proven false once hindsight is available.
If, as one example, one assumes that climate data is erratic, then any extrapolation is valid and probably as wrong as any other.
If, as another example, one assumes that climate passes from one stationary state to another, then any extrapolation will miss the step change between two quasi-stable periods.
If, as a third example, one assumes that the value of a financial instrument will continue to grow as in the past, then you get some surprises (as in 2007 with the subprime crisis).
If, as a last example, a temperature series remains more or less stable for 15 years after showing a steady increase for decades, then that tells you the system is not behaving as it was assumed to. The scientist must then begin to look at other models and other underlying physical phenomena. More research is needed…
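A minimal sketch of the step-change point above, assuming Python with numpy (series, window length, and step location are all illustrative): a centered moving average blurs a step over a full window width, and offers no centered estimate at all for the final half-window of points, which is the end-point problem in miniature.

import numpy as np

# Illustrative only: a noisy series with a step change at t = 150.
rng = np.random.default_rng(0)
n = 200
x = np.where(np.arange(n) < 150, 0.0, 1.0) + rng.normal(0.0, 0.1, n)

window = 41                                    # odd length, so the average is centered
kernel = np.ones(window) / window
smooth = np.convolve(x, kernel, mode="valid")  # only fully supported points

# The smooth spreads the step across ~window points (neither 0 nor 1 nearby),
# and the last (window - 1) / 2 raw points get no centered estimate at all.
print(smooth[125:].round(2))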

thingodonta
March 31, 2013 1:48 am

I wouldn’t say the following is true in all fields of science, at all times, but having trained as a scientist in Australia (a country still a British colonial outpost after more than 200 years), it is certainly true in some fields, and in some cases.
There are certain scientists who are in the game for recognition. They don’t care what others think, or what the truth is; a person is only of use to them as a commodity. And to them, you only have the right to make a contribution if you rise through the establishment, pandering all the way; then you can publish and make a difference. To them it is very much a class-based system, the way society should be, and those at the top get special privileges.
I’m quite sure, having personally experienced it, that the old British class system and its values still permeate various scientific fields and strands; there are indeed still those who become scientists mostly for that reason: to relive the glory days when gentlemen were gentlemen and everyone else toed the line.

david moon
March 31, 2013 5:08 am

Mark T: my comments were somewhat simplified and broad-brush. I just think it’s odd that all these discussions of smoothing and averaging rarely describe what effect the process has when viewed in the frequency domain. Conversely, why not design the filtering in the frequency domain, e.g., an Nth-order Butterworth low-pass with a cut-off frequency of 1/(30 yr)?
I once worked with some engineers who inserted a “moving average” to clean up some noisy data. I asked if they knew they had made a digital filter, and did they know its frequency response? The answer was no and no. And then it was inserted in the feedback of a closed-loop control system. The overall system operation was totally screwed.
Agreed that a symmetrical FIR filter has its peak at the center, which is where this article started: what do you do in the final 20 years, when the data runs out?
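To make the “a moving average is a digital filter” point concrete, here is a hedged sketch assuming scipy (filter lengths and cutoffs are illustrative, not anyone’s published settings): freqz exposes the frequency response of a 30-year moving average, next to a Butterworth low-pass designed directly in the frequency domain.

import numpy as np
from scipy import signal

fs = 1.0                                   # one sample per year
N = 30
b_ma = np.ones(N) / N                      # the 30-year moving average as FIR taps
w, h_ma = signal.freqz(b_ma, worN=512, fs=fs)

b_bw, a_bw = signal.butter(4, 1.0 / 30.0, fs=fs)  # 4th-order low-pass, cutoff 1/30 yr^-1
_, h_bw = signal.freqz(b_bw, a_bw, worN=512, fs=fs)

# The moving average rolls off slowly and has sidelobes that leak high
# frequencies; the Butterworth response is monotonic.
print(np.abs(h_ma[:5]).round(3))
print(np.abs(h_bw[:5]).round(3))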

Hoser
March 31, 2013 5:19 am

Mark T says:
March 30, 2013 at 9:59 am
This notion assumes a priori that the “signal” consists of sinusoids, which is all you will get back. Sorry, but this is not superior in any real sense of the word.

Your ignorance of mathematics is on display for all to see. Any function without discontinuities and singularities can be represented by the sum of sines. Sorry, that’s not a complete definition, but it will do here. You don’t have to use sines. Sines are merely convenient in some treatments of Fourier analysis.
Willis Eschenbach says:
March 30, 2013 at 10:12 am

Sorry, Willis, I guess I didn’t make it clear. I tried to include a verbal explanation of why I thought the method was BS. But truncation is still BS. That opinion depends on the assumption that I understand your process. Let’s say you truncate data from the right side, where those points have a fitted slope of 1. Now you test various smoothing methods A, B, and C, and then restore the missing data. Then you check how well you did using methods A, B, and C. You find method A worked best. Now replace the right-side data with another set of points with a fitted slope of -1 (or anything else you can imagine that makes sense). Does method A still do the best job of smoothing/fitting the new data set? Not necessarily. How does truncation test the intrinsic superiority of various smoothing methods? Seems too much like climate science to me.
Willis Eschenbach says:
March 30, 2013 at 10:34 am
GRAB THE DAMN DATASET AND SHOW US!!

Screaming doesn’t help, dude. Where are your precious data?

Elizabeth
March 31, 2013 5:41 am

I hereby acknowledge Willis as the instigator and discoverer of this work. Poor ol’ Mann doesn’t know what’s going to hit him when the MSM starts turning on AGW (as is now happening) and they realize how they have been had by this bunch of fraudsters. I hope Mann is fired by Penn State University, as he is a disgrace to science and to higher education in general.

McComber Boy
March 31, 2013 7:25 am

Hoser! NO WONDER WILLIS SCREAMS!
The data set is in paragraph [9] of the original paper, and no, I’m not going to put the link here and make it easier. It just proves Willis’ original contention that you are bloviating and have not read his paper. Assumed knowledge is just assumption, not knowledge at all.
Read the paper. Do the math. Complain about specific issues. It’s easy when you think about it.
pbh

David
March 31, 2013 7:31 am

I do not think GRL will want any involvement in this. The timeline does not support them.

March 31, 2013 8:00 am

Like I said about greening the deserts, the heck with bridges, lay a pipeline to the nearest big water and put Michael Mann at the end of it. If Mann can suck in just one tenth as much as he blows out, you’ll have it in thirty seconds. Guaranteed!
Megalomaniac!

RockyRoad
March 31, 2013 8:05 am

Hoser says:
March 31, 2013 at 5:19 am

Willis Eschenbach says:
March 30, 2013 at 10:34 am
GRAB THE DAMN DATASET AND SHOW US!!
Screaming doesn’t help, dude. Where are your precious data?

You can write but you can’t read?
Oh, I get it–you have a ghost writer.

Reed Coray
March 31, 2013 9:23 am

Willis, take a look at the DILBERT cartoon of 31 March 2013 (https://blu177.mail.live.com/default.aspx?id=64855#n=2080954&fid=1&mid=eb93fe7d-99c5-11e2-95fc-00215ad6a644). Who comes to mind?

David A. Evans
March 31, 2013 9:55 am

Solomon Green says:
March 30, 2013 at 12:06 pm
The coming absence of lawsuits screams guilt!
DaveE.

Eliza
March 31, 2013 10:14 am

Marcott et al. have replied through RealClimate. From a quick read, it all seems to be a major attempt to get rid of the uptick and pretend it didn’t happen, or, if it did, that none of the stuff is “robust” anyway… Cannot wait to see SM demolish this new HS.

Eliza
March 31, 2013 10:24 am

Re Willis v. Mann: there is no need to do anything. The tide is turning; even The Economist is now doubting… once this starts, those guys are essentially finito.

TimC
March 31, 2013 11:55 am

Willis – I have essentially the same problem as BarryW. I accept (a) that one can postulate, and test, several alternative smoothing methods to determine their behaviour leading up to the end of the data series, and (b) that one can chart each method with sensible end error bars (perhaps three sigma, as in your paper). But, other than making the obvious check that the actual end point lies within the error bars, while this might allow a view to be taken on what is likely to occur, I don’t see that it can be truly predictive of the future. For example, a paradigm shift (whether in absolute values, or perhaps in the first or second differential) might occur soon after the end of the series that it is just not possible to predict, other than saying, with the benefit of hindsight, that its occurrence was improbable in the light of the data previously known.
So one must either truncate the series so as to end at the latest smoothed average point and be content with hindsight or, ultimately, acknowledge the possibility of a paradigm shift – possibly one of previously unknown or unanticipated origin, which was not tested because it had never occurred before. In plain terms, we humans can’t predict what the future might hold; the best we can do is assume that everything will muddle along more or less in the same old way it always has.
Am I missing something here?

Frank
March 31, 2013 3:26 pm

Willis: Your post left out some important details. Did the editor of GRL send your paper out for the usual anonymous peer review, or did he decide on his own that the material wasn’t suitable for publication? Perhaps I am too trusting, but even the Climategate emails don’t suggest that a busy editor, who is constantly refereeing disputes between competing scientists, would share your paper with Mann; there is, however, ample precedent that a peer reviewer could have. The easiest way would have been to forward an electronic copy of your paper.
It is difficult to prove that an idea has been plagiarized, but far easier to demonstrate that text or examples have been. Have you used software to try to detect common passages between your rejected paper(s) and Mann’s published work?
You should consider sending the editor your draft papers and Mann’s published work, and asking whether they used Mann to peer review your work. It would be difficult for them to ignore serious misconduct of this type.

Bart
March 31, 2013 4:22 pm

david moon says:
March 30, 2013 at 5:28 pm
“Fourier analysis does not ā€œassumeā€ sinusoidal components. “
I agree. See comment to Mark below.
“Gaussian averaging in the time domain- the frequency response as a low pass filter is not that great.”
It depends on how it is truncated, and what you are trying to accomplish. It can have an excellent rate of roll-off, but not a very good bandwidth to length relationship. So, it is good for suppressing a high frequency disturbance, but passing other stuff through. However, a bandstop filter designed for that purpose is generally better, if you have the tools to construct one.
“Infinite Impulse Response (IIR) filters can be designed for a desired frequency response with much less delay”
But, with nonlinear phase. Generally speaking, the delay of an IIR filter within the passband will be comparable on average to the delay of a similar bandwidth FIR filter.
The great advantage of linear phase symmetric FIR filters is that all signals experience the same delay, and thus we get the marvelous clarity of sound reproduction of modern digital systems without phase distortion.
Mark T says:
March 30, 2013 at 7:24 pm
“Indeed, the only thing an FFT will ā€œdetectā€ (it doesnā€™t really ā€œdetectā€ anything) is a sinusoid, and if there are none there, it is a relatively useless tool.”
Have to disagree there. The FFT is a fast method of computing the Discrete Fourier Transform (DFT), which is a sampled frequency version of the Discrete Time Fourier Transform (DTFT), which is a continuous function of frequency. Every L2 bounded signal has a unique DTFT which, as you say, is a measure of the correlation of the signal with a sinusoidal functional basis. The DFT can be made to approach the DTFT, i.e., the grid of sampled frequencies can be made more dense, by zero-padding.
“But again, even an optimal model (w.r.t. any given criteria) will fail as soon as the statistics of the data change, which is what happens with non-stationary data.”
The signals we are looking at give every indication of having increments which are effectively wide sense stationary. The global average temperature anomaly is composed, in and beyond the past century, mostly of
an at-most lightly damped sinusoidal system, with energy concentrated near the 60 year cycle, plus a trend. The CO2 data is dominated by the integration of a function of temperature, which can be approximated to high fidelity as a constant coefficient affine function over the past 55 years.
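A quick sketch of the zero-padding remark above, assuming numpy (signal and lengths illustrative): padding before the FFT samples the same DTFT on a denser frequency grid, adding interpolation but no new information.

import numpy as np

rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 0.123 * np.arange(64)) + 0.1 * rng.normal(size=64)

coarse = np.fft.rfft(x)             # 33 frequency samples of the DTFT
dense = np.fft.rfft(x, n=1024)      # same DTFT, 513 samples, via zero-padding

f_coarse = np.fft.rfftfreq(64)
f_dense = np.fft.rfftfreq(1024)
print(f_coarse[np.argmax(np.abs(coarse))])   # nearest coarse bin, 0.125
print(f_dense[np.argmax(np.abs(dense))])     # much closer to the true 0.123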

george e smith
March 31, 2013 4:54 pm

“””””…..
david moon says:
March 30, 2013 at 5:28 pm
Re: various comments about Fourier/frequency domain analysis above:
Fourier analysis does not “assume” sinusoidal components. It will detect them if they are there. White noise will be a “flat” spectrum with no prominent components. As an EE I do this all the time when looking at noisy signals- set my oscilloscope to FFT and see what’s happening in the frequency domain……."""""
I’m not sure whose assertion that was, but I am wracking my brains trying to think of any other well-known continuous mathematical function for which the word “frequency” has any meaning whatsoever. I’m not saying that none exists; just that I can’t think of any.
So far as I know, any continuous function for which f(t – p) = f(t) for any (t) and some fixed parameter (p), and which is not a sinusoid (or cosinusoid if you like), can itself be replaced by a set of sinusoids (cosinusoids) that are harmonically related in frequency.
Fourier analysis is just one of a vast number of representations whereby a continuous function is synthesized as a sum of other functions, so long as those functions form an orthonormal set.
Bessel functions, Legendre polynomials, and Chebyshev polynomials are just a few examples of orthogonal functions that can be used to synthesize any continuous function. The word “frequency” has no meaning for any of those functions. [A short Legendre sketch follows this comment.]
Fourier synthesis IS limited to sinusoidal representations. I believe the continuous function must be strictly periodic (and therefore of infinite duration) in order to get a harmonic series expansion; finite (in time) or aperiodic functions require the integral form (the Fourier transform).
And as DirkH and others point out, the Fourier transform itself has an end-point truncation problem too.
Engineers tend to be blasé about the robustness of a solution. We do tend to think that if we can get a solution or answer, it must be the correct answer.
But the pure mathematicians spend a lot of time on existence theorems, wondering whether a solution exists instead of simply finding it.
Who the hell else but a pure mathematician would bother to prove rigorously that an “absolutely convergent” series converges; I mean, it has to, doesn’t it?
Unfortunately, I have to live in both worlds.
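As an aside on the orthogonal-basis point in the comment above, a throwaway sketch assuming numpy (the test function is arbitrary): a truncated Legendre series represents a smooth function on [-1, 1], and no notion of “frequency” is attached to the basis.

import numpy as np
from numpy.polynomial import legendre

x = np.linspace(-1.0, 1.0, 401)
f = np.exp(x) * np.cos(3.0 * x)           # arbitrary smooth test function

coef = legendre.legfit(x, f, deg=12)      # least-squares Legendre coefficients
approx = legendre.legval(x, coef)

# The truncated series reproduces the function to high accuracy, with no
# frequency attached to any basis polynomial.
print(np.max(np.abs(f - approx)))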

markx
March 31, 2013 6:23 pm

Hoser says: March 31, 2013 at 5:19 am
…… truncation is still BS. …… Let’s say you truncate data from the right side where those points have a fit slope of 1. Now you test various smoothing methods A, B, and C, and then restore the missing data. Then you check how well you did using methods A, B, and C. You find method A worked the best. Now replace the right side data with another set of points with a fit slope of -1 (or anything else you can imagine that makes sense). Does method A still do the best job of smoothing/fitting the new data set? Not necessarily. How does truncation test the intrinsic superiority of various smoothing methods? Seems too much like climate science to me.
Hoser, that may be the weirdest bit of logic I have ever read.
Basically this ‘best fit’ is trying to predict the future (in that we would like the end point of the best-fit curve to be as accurate as possible), and as the baseball-playing philosopher Yogi Berra said, “It’s tough to make predictions, especially about the future.”
You can only work with the data you do have, and Willis has demonstrated probably the most logical way to come up with a best-fit smoothing curve using the data available at the time.
And sure, if future data points then depart from the trend, that best-fit curve will be proven wrong (and will be moved accordingly as data points accumulate) … but it was the best available at the time.
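For readers who want the gist of the truncation test in runnable form, here is a minimal sketch assuming numpy. It is not Willis’s actual code, and the two end treatments (“hold” and “reflect”) are illustrative stand-ins for the boundary constraints under debate: truncate the series, smooth up to the artificial end under each treatment, then score each against the centered smooth computed from the full record.

import numpy as np

def end_smooth(x, window, pad):
    # Centered moving average at the last point; 'pad' supplies the missing future.
    half = window // 2
    if pad == "hold":                      # repeat the final value
        ext = np.r_[x, np.full(half, x[-1])]
    elif pad == "reflect":                 # mirror the series about its end point
        ext = np.r_[x, x[-2:-2 - half:-1]]
    else:
        raise ValueError(pad)
    kernel = np.ones(window) / window
    return np.convolve(ext, kernel, mode="valid")[-1]

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(0.02, 1.0, 500))  # toy trending series, illustrative
window = 41
full = np.convolve(x, np.ones(window) / window, mode="valid")

errors = {"hold": [], "reflect": []}
for cut in range(100, 480):
    truth = full[cut - window // 2]        # centered smooth that knows the future
    for pad in errors:
        errors[pad].append(end_smooth(x[:cut + 1], window, pad) - truth)

for pad, e in errors.items():
    e = np.asarray(e)
    print(pad, "bias:", round(float(e.mean()), 3),
          "RMSE:", round(float(np.sqrt((e ** 2).mean())), 3))

Because the scoring is done against the same dataset, the verdict is specific to that dataset, which is exactly the point of the test.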

david moon
March 31, 2013 7:00 pm

I went back to the original Mann paper in question. I had thought the issue was using an FIR filter, and how to extrapolate when the “data runs out”. From the paper:
“We first make use of a routine that we have written in the ‘Matlab’ programming language which implements constraints (1)–(3), as described above, making use of a 10 point ‘Butterworth’ low-pass filter for smoothing”
They further show graphs of a “40 year smooth” and a “20 year smooth”.
The Matlab reference for the Butterworth function shows the parameters “n” (order) and “Wn” (normalized cutoff frequency). So what is “10 point”: is that n? And was Wn changed to give the 20- or 40-year smooth?
The function is also IIR, using only past samples (z^-1, z^-2, etc.). So why the need to extend the series past the end? [A sketch of one answer follows this comment.]
And then in the conclusions, to paraphrase: if we extend the data with the same slope as the last few years, our smoothed data continues to rise, which might be non-stationary, i.e. a new trend or change in the statistics. Duh: if we assume our result, then we see it.
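On the question of why an IIR smoother would still need the series extended: run causally it needs no future data, but it delays every feature; run forward-backward (zero-phase), as display smooths typically are, it must be fed values beyond both ends. A hedged sketch assuming scipy, with an illustrative order and cutoff rather than the paper’s settings:

import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(0.0, 1.0, 300))   # toy series

b, a = signal.butter(4, 1.0 / 20.0)        # low-pass; cutoff as a fraction of Nyquist

causal = signal.lfilter(b, a, x)           # needs no future data, but phase-delayed
zero_phase = signal.filtfilt(b, a, x)      # no delay, but implicit padding at both ends

# The two disagree most near the ends, where filtfilt's padding choice
# (i.e. the boundary constraint) does the work.
print(round(float(abs(causal[-1] - zero_phase[-1])), 2))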

Bart
March 31, 2013 7:07 pm

george e smith says:
March 31, 2013 at 4:54 pm
“Fourier synthesis IS limited to sinusoidal representations.”
A Fourier Series is. A Fourier Transform is much more general, and can represent any L2 bounded function.

Joe
March 31, 2013 7:32 pm

Maybe this will help you guys understand the process:
The second thought is that a consequence of Baconian, goal-oriented science is that the goal can eat up the science. When science was reoriented to the service of engineering and industry, it guaranteed that eventually there would be steady pressure on research in the direction of pre-determined goals. Call it The Revenge of the Final Causes. The [extra-scientific] goal becomes so important that the research must be cut and fit to support it, and any science (or scientist) that does not fit gets screened out by “peer review.”
Interestingly, peer review was a medieval method developed in theology to ensure orthodoxy in the writings of theologians. (It was because he did not accept the alterations suggested by the peer reviewers that William of Ockham never received his doctorate.) It is now considered “scientific” but its methods and purposes remain the same.
http://tofspot.blogspot.com/2013/03/science-in-drag.html#more

David
March 31, 2013 9:38 pm

How the heck is GRL not guilty of knowingly giving credit to Mann for work from Willis which they previously rejected but were fully aware of? Are there official fines, etc., for such acts? Mann got paid to produce work which both he and GRL knew Willis had done. Willis may not be able to prove Mann knew, but GRL appears to be caught dead to rights.
From the post…
….”Back in 2004, Michael Mann wrote a mathematically naive piece about how to smooth the ends of time series. It was called “On smoothing potentially non-stationary climate time series”, and it was published in Geophysical Research Letters in April of 2004. When I read it, I couldn’t believe how bad it was. Here is his figure illustrating the problem…
….Now, here comes the story.
I wrote this, and I submitted it to Geophysical Research Letters at the end of 2005. After the usual long delays, they said I was being too hard on poor Michael Mann, so they wouldn’t even consider it … and perhaps they were right, although it seemed pretty vanilla to me. In any case, I could see which way the wind was blowing. I was pointing out the feet of clay, not allowed….
….So, I pulled out everything but the direct citations to Mann’s paper and resubmitted it basically in the form appended below…..
…..In 2008, after I’d foolishly sent my manuscript entitled “A closer look at smoothing potentially non-stationary time series” to people who turned out to be friends of Michael Mann, Dr. Mann published a brand new paper in GRL. And here’s the title of his study …
“Smoothing of climate time series revisited”
….And what was Michael Mann’s main insight in his new 2008 paper? What method did he propose?….
….In other words, his insight is that if you truncate the data, you can calculate the error for each method experimentally … curious how that happens to be exactly the insight I wasted my time trying to publish.
Again I ask, how the hell is GRL not guilty of knowingly giving credit to Mann for work from Willis which they rejected, but were fully aware of?

TimC
March 31, 2013 10:42 pm

Willis: you of course picked up my loose terminology (“…truly predictive of the future”) in my last post. I was, perhaps unsuccessfully, seeking to differentiate between actual (true, experimental) data collection on the one hand and the formulation of theory on the other.
My working definition of statistics is “the mathematics of the collection, organization, and interpretation of numerical data”. This of course refers to actual (raw, observed) data. Such data might customarily be averaged, as with climate, for example. However, in that case the actual (true, averaged) data stops at the latest average point, at least half the averaging length behind the present day. Anything past that point is (progressively degrading) guesswork, not data. Perhaps it is “informed” guesswork based on some theory or other, or on better or worse “statistics” (which can truly only be the assumption that everything will muddle on much as it has in the past). IMHO it should always be expressly caveated, and any theory based on it has to be regarded as suspect until the actual (averaged) data is available later.

george e. smith
April 1, 2013 1:35 am

“””””…..Bart says:
March 31, 2013 at 7:07 pm
george e smith says:
March 31, 2013 at 4:54 pm
“Fourier synthesis IS limited to sinusoidal representations.”
A Fourier Series is. A Fourier Transform is much more general, and can represent any L2 bounded function…….””””””
I said nothing at all about what functions can (or cannot) be represented by the Fourier transform.
I did say that periodic functions (unbounded in the time domain) can be synthesized as a series sum of harmonically related sinusoids.
But a time-bounded function (one that starts and stops), or a non-periodic function, cannot be represented as a harmonically related sum of sinusoids.
So the Fourier transform represents “any L2 bounded function” as a spectrum of what, if not sinusoids? What non-sinusoidal function, not itself a sum of sinusoids, is unbounded in time and periodic, with a defined frequency?

Paul Vaughan
April 1, 2013 4:20 am

Minimizing SD isn’t the only important consideration. Note that the 2 methods preferred based on this narrow criterion are systematically biased (e.g. always too low on the long rise). What of minimizing bias? A consideration of both accuracy & precision is due. Precisely inaccurate estimates have been prioritized. Why? The analogy: Under systematically predictable conditions (the long rise), all the bullets are hitting very close to the exact same spot (low SD), but the spot is not the bull’s eye. It’s not only the size of the errors that matters; the distribution of the errors should show random scatter — i.e. no systematic patterns. Why not trade a bit of that precision for some more accuracy?
An interesting, worthwhile topic.
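The accuracy-versus-precision point above is the textbook bias-variance split of mean squared error. In symbols (a standard identity, not anything taken from the papers under discussion), for an estimator \hat{s} of a true smoothed value s:

\mathrm{MSE}(\hat{s}) \;=\; \mathbb{E}\big[(\hat{s}-s)^2\big] \;=\; \big(\mathbb{E}[\hat{s}]-s\big)^2 + \mathrm{Var}(\hat{s}) \;=\; \mathrm{bias}^2 + \mathrm{variance}

Ranking methods by SD alone ranks them on the second term only; a method can win that contest while carrying a systematic bias, which is the commenter’s complaint.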

Bart
April 1, 2013 10:21 am

george e. smith says:
April 1, 2013 at 1:35 am
“But a time bounded function, (starts and stops) or a non-periodic function, cannot be represented as a harmonically related sum of sinusoids.”
The Fourier Transform is not a sum of harmonically related sinusoids. It is an integral (in essence, an infinte sum) of sinusoids over a densely packed continuum of frequencies. The Fourier Transform represents a square integrable function relative to an infinte dimensional functional basis which spans L2. When you take the limit to infinity, the representation is no longer limited to periodic functions.
“What non-sinusoidal function that is not itself a sum of sinusoids, is unbounded in time, and is periodic, with a defined frequency ?”
For example, the Fourier Transform (under the usual EE convention) of the very non-periodic exp(-t) for t >= 0 is 1/(j*omega + 1), where omega is radial frequency and j is the square root of -1. This function has a magnitude 1/sqrt(omega^2 + 1) which represents how the components of the inifine sum are scaled, and a phase atan(omega) which represents how they are shifted in time relative to one another. This is an exceedingly elementary example.
Any L2 function, whether periodic or not, can be represented by its Fourier Transform, from which the original signal can be fully recovered, so both the time series and the frequency domain representation hold the same information, and are thereby considered equivalent.
Now, we are limited in evaluating Fourier Transforms for functions which are necessarily limited in time. Over a finite time interval, any function can be represented as the sum of periodic signals. The function simply repeats itself beyond the time interval, so it has no innate predictive value.
However, because we have lots of experience with Fourier Transforms and exceedingly common functional forms which occur in nature, we can generally extend the result in continuous fashion beyond the final time with high confidence in the extrapolation. All the more so if we have a theoretical basis for the true functional form the series should take, and can parameterize the theoretical model based on spectral analysis. But, the ubiquity of complex exponential and low order polynomial functions in nature generally allows us to do this even if we do not yet have a firm theoretical basis.
Noise is a hindrance to this endeavor, but we have found ways to get around it. The FFT itself is lousy at dealing with stochastic signals. That is why we estimate power spectral densities instead, and there are many methods for producing a PSD from noisy data for the purpose of identifying the underlying system model. The easiest and least constrained generally rely on the FFT, but it requires special processing by a qualified analyst.
In the field of identifying underlying system models from noisy data, other fields of specialty are well advanced beyond the apparently meager skills of the climate science establishment. The relationships in the climate data can, in fact, be discerned by the naked eye and are quite elementary. It is very apparent that their theoretical constructs are entirely wrong as regards the dynamics of this system. To me, they look like witch doctors or voodoo practitioners, vainly trying to force their hypotheses onto the data, and it is very clear that they will ultimately fail in that endeavor, the only question being how much damage they will do to science and the public weal before they realize it.
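A numerical spot-check of the exp(-t) transform pair quoted above, assuming numpy (grid and test frequencies are illustrative):

import numpy as np

t = np.linspace(0.0, 50.0, 500_001)   # exp(-50) is negligible, so truncation is safe
dt = t[1] - t[0]

for omega in (0.0, 1.0, 5.0):
    y = np.exp(-t) * np.exp(-1j * omega * t)
    numeric = np.sum((y[:-1] + y[1:]) / 2.0) * dt   # trapezoid rule for the FT integral
    exact = 1.0 / (1.0 + 1j * omega)
    print(omega, np.round(numeric, 4), np.round(exact, 4))

# The phase of the exact value is -arctan(omega), i.e. atan(-omega); Bart
# posts that sign correction a little further down the thread.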

Bart
April 1, 2013 10:45 am

Incidentally, to any who are interested, I use “L2” to describe both square integrable and square summable functions in the continuous and discrete, respectively, time domains. Conventionally, the upper case is used to designate square integrable continuous time functions, and lower case for square summable discrete time series. However, “l2” looks like “eye-two” in the standard fonts, so I am using upper case for both.
As we are dealing with a continuous system for which the data are sampled, and the tools we have at our disposal are applied in the digital domain, we have to flip back and forth between the two paradigms, but I don’t want to muck up the conversation too much explaining every detail. Anyone who follows the discussion should be able to discern which normed space I am talking about relative to the context. Anyone who doesn’t, please disregard this message and carry on.

richardscourtney
April 1, 2013 11:46 am

Joe:
In your post at March 31, 2013 at 7:32 pm you say

Interestingly, peer review was a medieval method developed in theology to ensure orthodoxy in the writings of theologians.

Well, sort of.
But you do remind of important issues which remain important and have relevance to the present day.
The practices and principles of the modern scientific method were all adopted from the methods of classical Christian theology.
This is not surprising because theology was the main subject for study in every university course at the time of the Reformation. Modern science came about when it was decided that the unassailable authority is empirical evidence and not any other authority (e.g. the Church and/or any scripture). This decision was applied, and it was a revolution in thought which was exemplified in the motto adopted by the Royal Society; i.e. nullius in verba (on the authority of nobody).
From that sprang all the benefits of science and technological advance which today we take for granted.
Two issues derive from this.
Firstly, and relatively trivially, some people attempt to pretend there is a dichotomy between science and religion. This is not and never has been true: one of the oldest astronomical observatories is in the Vatican and is operated by the Roman Catholic Church, most great scientists have been religious practitioners, etc..
The methods of theological and scientific thought are the same but acknowledge different “unassailable evidence” because they have different purposes. Hence, arguments about science OR religion are pointless and disrupt serious discussion (including, often, on WUWT).
Secondly, and much more importantly, any claim to any authority other than empirical evidence is a denial of the most fundamental scientific principle. Hence, appeals to “consensus” or any other authority are a denial of the scientific method and attempt to return us to pre-Reformation thought.
Peer review can be, and often has been, abused, but it is a method to determine whether a scientific paper opposes the only unassailable authority of science, viz. empirical evidence. If the paper disagrees with empirical evidence then the only scientific decision is to reject it; otherwise its publication should be allowed.
But, of course, not every paper which should be allowed publication deserves to be published. The contents of a paper determine if a paper deserves publication, and this is not affected by who presents the paper (nullius in verba).
The experience of peer review reported by Willis Eschenbach in his article not only injures him: it also injures the most fundamental of all scientific principles. It is a disgrace.
Richard

george e. smith
April 1, 2013 4:14 pm

“””””….. Bart says:
April 1, 2013 at 10:21 am
george e. smith says:
April 1, 2013 at 1:35 am
“But a time bounded function, (starts and stops) or a non-periodic function, cannot be represented as a harmonically related sum of sinusoids.”
The Fourier Transform is not a sum of harmonically related sinusoids. It is an integral (in essence, an infinite sum) of sinusoids over a densely packed continuum of frequencies……."""""
Bart,
Absolutely NOWHERE have I ever stated, or suggested, that the Fourier Transform IS a sum of harmonically related sinusoids. I quote: “But a time bounded function, (starts and stops) or a non-periodic function, cannot be represented as a harmonically related sum of sinusoids.”
There, in fact, I have specifically said exactly the opposite.
Only a periodic, continual (never starts, never stops) function CAN be represented as a Fourier Series of harmonically related sinusoids.
The original question being discussed was what, exactly, is being used by the Fourier transform to represent a finite-time (starts and stops), NON-periodic time function. I and others said the frequency spectrum consists only of sinusoidal functions; and I added that they are NOT harmonically related but form some sort of continuum spectrum of frequencies, and that they ARE sinusoids at any non-zero frequency in the transformed spectrum.
You and others have asserted they aren’t sinusoids, so I have simply asked: then WHAT are they, those time functions that are used to represent that arbitrary time function?

Bart
April 2, 2013 12:28 am

george e. smith says:
April 1, 2013 at 4:14 pm
“You and others have asserted they arenā€™t sinusoids, so I have simply asked; then WHAT are they; those time functions that are used to represent that arbitrary time function ??”
It isn’t so straightforward because you are dealing with not mere superposition, but integration over an infinite and dense expanse of “frequency” for the representation. At any specific frequency, the Fourier spectrum of an L2 function has measure zero, so you cannot say that it is composed of any specific sinusoid at all. Strictly speaking, a persistent sinusoid isn’t even an L2 function, though a truncated sinusoid confined to a finite interval is.
So, it is difficult to provide a concrete answer to your question – it’s a little like trying to describe how a four dimensional object looks – our minds just aren’t built for it. But, the FT is still a (extremely) useful abstraction, and you can almost always gain a lot of insight into a given system by Fourier-based analysis. Especially if you have a lot of experience with such analysis, and recognize general features which commonly manifest themselves in it when dealing with natural systems.

Bart
April 2, 2013 12:33 am

Bart says:
April 1, 2013 at 10:21 am
“…the Fourier Transform … of … exp(-t) for t >= 0 is 1/(j*omega + 1), … and a phase atan(omega) “
Missed the negative sign – the phase is atan(-omega).

george e. smith
April 2, 2013 12:17 pm

Well, isn’t the whole point that both a Fourier series and a Fourier transform are only mathematical fictions? If you try to pick out a specific frequency from the spectrum, the narrower you confine the frequency, the longer its duration in time must be, so that for a single frequency (a pure sinusoid) the signal would have to exist for all time. It simply reinforces my original assertion that the most accurate representation of a set of experimental data values is that set of data values itself. Truncating a continuous signal must broaden its spectral width in the frequency domain.
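The truncation-broadening claim is easy to demonstrate, assuming numpy (record lengths and the test frequency are illustrative): the same sinusoid observed over a shorter window has a proportionally wider spectral line.

import numpy as np

def halfpower_width(n, f0=0.1):
    # Half-power width of the spectral line, in cycles per sample.
    x = np.sin(2 * np.pi * f0 * np.arange(n))
    mag = np.abs(np.fft.rfft(x, n=8192))   # zero-pad to a common fine grid
    return np.count_nonzero(mag > mag.max() / 2) / 8192.0

print(halfpower_width(64))    # short record: wide line
print(halfpower_width(512))   # long record: narrow line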

Bart
April 2, 2013 12:51 pm

george e. smith says:
April 2, 2013 at 12:17 pm
But, the data values themselves do not give any insight into the underlying process. We know the form of common processes, and the signatures they create in the PSD. That allows us to infer how the process will evolve in the future based on voluminous experience with other natural systems.
We may be arguing (are we arguing?) at cross purposes. I agree that arbitrary filtering of data is … arbitrary, and is likely to lead to erroneous results. However, there are powerful tools available for identifying systems and propagating their observed characteristics into the future, as I advocated here. Nobody that I know of is doing, or has done, this, so if you are disparaging methods which have been used, it is likely I agree with you.

Bart
April 2, 2013 4:05 pm

Willis Eschenbach says:
April 2, 2013 at 1:14 pm
This isn’t exactly the proper venue. However, it is easy to find such applications in a web search. Here are a few that popped up in a quick search, though I make no assurances that they are all on the up-and-up (particularly the financial markets one, though it seems prima facie plausible – I had thought of doing a similar analysis at some point before competing interests pushed it out of contention for my time).
http://www.haikulabs.com/kalman2.htm
http://ir.lib.ncut.edu.tw/bitstream/987654321/2377/1/%E6%A5%8A%E5%96%84%E5%9C%8B+J12.pdf
http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1022140&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1022140
http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6287156&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6287156
http://www0.cs.ucl.ac.uk/research/researchnotes/documents/RN_04_22.pdf

Bart
April 2, 2013 8:56 pm

I just don’t have the time. My purpose is to suggest avenues of further inquiry to those who may be interested, and to point out that these guys like Mann et al. are in way over their heads.
A true professional in the field ought to be using powerful methods such as I am highlighting, methods which have been built up over many decades by some amazingly bright people designing and producing actual working systems of benefit to humanity. Instead, it’s amateur hour. And, these guys have the audacity to pass themselves off as representing “science”. It is pathetic.

george e. smith
April 3, 2013 7:05 pm

“””””…..Bart says:
April 2, 2013 at 12:51 pm
george e. smith says:
April 2, 2013 at 12:17 pm
But, the data values themselves do not give any insight into the underlying process. We know the form of common processes, and the signatures they create in the PSD. That allows us to infer how the process will evolve in the future based on voluminous experience with other natural systems……”””””
Well Bart, I’m certainly not cataloging it as arguing. For me, at least, it has been a constructive dialog. I’m always most happy to learn from those who clearly know much more than I do. And if in the process others also gain some new insight, then it is doubly useful.
My somewhat limited industrial experience with filtering has always been analog, so digital signal processing is not my forte. Mostly, my use of filtering has been to search for real signals in the presence of lots of noise.
In the case of weather/climate data, I’m not sure that there are signals in the “noise”. I think the “noise” is the actual signal, but it doesn’t carry much of a message.

Bart
April 4, 2013 9:26 am

george e. smith says:
April 3, 2013 at 7:05 pm
“In the case of weather/climate data, Iā€™m not sure, that there are signals in the ā€œnoiseā€. I think the ā€œnoise is the actual signal, but it doesnā€™t carry much of a message.”
Always a possibility. But, there is definite structure to the signal, which can be used to predict how it is going to evolve. That is, even if the data actually represent, e.g., some artifact in the way measurements are processed, then that artifact is likely to continue as long as the measurements are collected and processed in the same way. And, even if predictions of how it is going to evolve do not represent any actual natural process, they represent, or are correlated with, how humans are going to react, and so are not entirely un-useful.

Duster
April 8, 2013 2:08 pm

pax says:
March 30, 2013 at 9:34 am
Your paper was an easy read, contained no unnecessary BS lingo, didn’t pretend that temperature time series are rocket science, was very easy to understand, and made a valid point. Of course it wasn’t accepted.

One of the worst effects of working on a degree in many sciences is that the instructors work hard to destroy your ability to compose in the simple and direct styles that are taught in Freshman English. Then, of course, you start to work, and your clients don’t want qualified statements; they want certainty.