Michael Mann's special-purpose Hockey Stick filter has been exposed

Jean S writes at Climate Audit:

[Graph from the Climate Audit post]

As most readers are aware, and as stated in my post a few hours after Climategate (CG) broke out, Mike's Nature trick was first uncovered by UC here.

He was able to replicate (visually perfectly) the smooths of MBH9x, thereby showing that the smooths involved padding with the instrumental data. The filter used by UC was a zero-phase Butterworth filter (an IIR filter), which has been Mann's favourite since at least 2003. However, there was something else that I felt was odd: UC's emulation required very long (100 samples or so) additional zero padding. So about two years ago, I decided to take another look at the topic with UC.
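For readers who want to experiment, here is a minimal Matlab sketch of the zero-phase Butterworth approach described above. This is not UC's actual script: the filter order, cutoff and padding length are illustrative assumptions, and butter/filtfilt assume the Signal Processing Toolbox.

x = randn(600,1); % stand-in for an annual reconstruction series
fc = 2/50; % ~50-year cutoff for annual data, as a fraction of Nyquist
[b,a] = butter(4, fc); % 4th-order low-pass (the order is an assumption)
xp = [x; zeros(100,1)]; % the long trailing zero padding mentioned above
ys = filtfilt(b, a, xp); % forward-backward filtering gives zero phase
ys = ys(1:length(x)); % discard the padded tail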

Indeed, after digitizing Mann's smooths we discovered that UC's emulation was very, very good but not perfect. After lengthy research, and countless hours of experimenting (I won't bore you with the details), we managed to figure out the "filter" used by Mann in the pre-Mann (2004) era. Mann had made his own version of the Hamming filter (the windowing method, an FIR filter)! Instead of using any of the usual estimates for the filter order, which is typically derived from the transition bandwidth (see, e.g., Mitra, Digital Signal Processing) and typically runs to a few dozen coefficients at most, he set the filter length equal to the length of the signal to be filtered! As Mann's PCA was apparently just a "modern" convention, this must be a "modern" filter design. Anyhow, no digital signal processing expert I consulted about the matter had ever seen anything like it.
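To make the oddity concrete, here is a hedged Matlab sketch of a conventional Hamming-windowed sinc low-pass (a few dozen taps), with the full-length variant noted at the end. The lengths and cutoff are illustrative, not Mann's actual values.

N = 600; x = randn(N,1); % stand-in signal
wc = 2*pi/50; % cutoff near a 50-year period (rad/sample), illustrative
L = 41; % conventional choice: a few dozen taps
n = (0:L-1)' - (L-1)/2; % symmetric tap index
h = sin(wc*n)./(pi*n); % ideal low-pass (sinc) impulse response
h((L+1)/2) = wc/pi; % fix the 0/0 at the centre tap (L odd)
w = 0.54 - 0.46*cos(2*pi*(0:L-1)'/(L-1)); % Hamming window
b = (h.*w)/sum(h.*w); % taper and normalise to unit DC gain
y = conv(x, b, 'same'); % only ~L/2 samples at each end see padding
% The variant described above instead sets L = N = length(x); every
% output sample then depends on padding except the exact middle one.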

JeanS adds in a comment:

the main issue here is not which filters/smoothers are "appropriate", but the fact that Mann was using a method unknown to anyone else. This made it practically impossible to replicate his smoothings and, later, to show beyond reasonable doubt that he had indeed used the trick (i.e., padded with the instrumental data).



Bold mine. Read more here: http://climateaudit.org/2014/08/29/mannomatic-smoothing-technical-details/

I suppose a man who designates himself a Nobel Laureate in court proceedings, even after being chastised by the IPCC, only to have to retract the claim later, would have no moral qualms at all about making a special version of an established filter to suit his own special purposes.



Richard P
August 30, 2014 12:35 pm

He did what?
I cannot even imagine how you can justify having the same number of coefficients as the signal to be processed. Is he padding the filter run-on and run-off with that many zeroes? Was this a causal or non-causal filter, with the coefficients centered around the middle value of an odd-numbered sequence? If so, his only valid value was the one in the middle of the data set. This means that the beginning and end of the filtered series are worse than useless, and should be considered fornicated. This guy is more pathetic than I thought. No wonder he wants to keep it secret.

rogerknights
Reply to  Richard P
August 30, 2014 12:48 pm

“No wonder he wants to keep it secret.”
Now that it's been exposed, his attempt at concealment is evidence of "guilty knowledge", and makes his sin (the cover-up) twice as bad as the crime.

Nick Stokes
Reply to  Richard P
August 30, 2014 1:51 pm

Richard P August 30, 2014 at 12:35 pm
“He did what?”

It’s a real question, and the article doesn’t help. It doesn’t tell us the filter width in years (a plot would help). Mann said he used a 50-year filter, and that is what it looks like. A filter width equal to the data would virtually wipe out all variation.

Robert of Ottawa
Reply to  Nick Stokes
August 30, 2014 5:05 pm

The windowing function becomes, I suspect, dominant in this situation. It is incredibly bad logic to do this; I suspect Mann just played around to get the result he wanted, meaningless though it was.

AnonyMoose
Reply to  Nick Stokes
August 30, 2014 5:59 pm

The article also doesn’t say what was being filtered, and just what the effect on the result was. I think that this was an attempt to smooth some part of the data, because the PCA-extracted intermediate results were jumping around somewhat. I don’t know if it was applied to the numerous proxies individually or to the combined result.
Was it a filter applied before later analysis steps, or was it applied late in the process to trim out some noise?

Harold
Reply to  Nick Stokes
September 2, 2014 9:10 am

Not necessarily. An FIR can be made to do just about anything. An FIR that long has about eleventy bazillion degrees of freedom.

August 30, 2014 12:38 pm

One Ring to fool them all,
One Ring to blind them,
One Ring to rule them all,
and into poverty bind them.

markstoval
August 30, 2014 1:00 pm

Oh my God. One of the most amazing tales of quackery that I have read in a long time. The fact that Mann and I both studied math makes it doubly horrific to me personally. May Karma dish out to Mr. Mann what he so richly deserves.

cnxtim
Reply to  markstoval
August 30, 2014 1:21 pm

Karma, yes, that cycle is eternal and transcends any country's written laws.
However, the world in general would probably like to see another, less philosophical path ensue.

Gunga Din
August 30, 2014 1:05 pm

Mr. Layman here. (I seem to be a layman in most areas.)
Is this little “sleight of Mannd” in addition to the “fudge factor” that would produce a hockey stick even if random numbers were entered?

jbutzi
Reply to  Gunga Din
August 30, 2014 1:15 pm

Me too. It sounds bad and I get the general idea, but the understanding of the terms and related math, statistics, filters, etc. eludes me. Anybody want to put this in 'English' for those of us without statistics and data processing backgrounds?

Reply to  Gunga Din
August 31, 2014 10:25 am

I’m replying to myself.
What I was referring to was not Mann’s doing. It was in the CRU code.
http://wattsupwiththat.com/2009/12/04/climategate-the-smoking-code/
But I do get the feeling he wishes he’d thought of it. 😎

William McClenney
August 30, 2014 1:08 pm

Just out of curiosity, who or what is UC?

Gerald Machnee
Reply to  William McClenney
August 30, 2014 1:23 pm

UC is an occasional contributor at Climate Audit who is well versed in stats.

Reply to  Gerald Machnee
August 30, 2014 1:25 pm

Thanks!

Hoser
Reply to  William McClenney
August 30, 2014 8:15 pm

Who is?

Bill_W
Reply to  William McClenney
August 31, 2014 7:31 am

Under Cover = UC? I have no idea, but that did pop into my head when I had the same question a few days ago.

Chris Riley
August 30, 2014 1:11 pm

Do the activities described include all of the elements required under the following (Wikipedia) definition of criminal fraud?
In common law jurisdictions, as a criminal wrong, fraud takes many different forms, some general (e.g., theft by false pretense) and some specific to particular categories of victims or misconduct (e.g., bank fraud, insurance fraud, forgery). The elements of fraud as a crime similarly vary. The requisite elements of perhaps the most general form of criminal fraud, theft by false pretense, are the intentional deception of a victim by false representation or pretense with the intent of persuading the victim to part with property, and with the victim parting with property in reliance on the representation or pretense, and with the perpetrator intending to keep the property from the victim.[6]

Leigh
Reply to  Chris Riley
August 30, 2014 5:44 pm

"The Fraud Act 2006 (c 35) is an Act of the Parliament of the United Kingdom. The Act gives a statutory definition of the criminal offence of fraud, defining it in three classes: fraud by false representation, fraud by failing to disclose information, and fraud by abuse of position."
In baseball terms, you're out, Mr Mann.
I'm assuming the United States has similar laws, in step with those of their British cousins.
There is substantially more to the legislation enacted by the English parliament, but the definitions in a nutshell are enough.
Fraud it is.

Rhoda R
Reply to  Leigh
September 1, 2014 8:09 pm

I'm not sure what the statute of limitations is on fraud, but even if it has not yet run, you'd need a willing prosecutor. With our current Administration going h3ll-bent-for-leather on CAGW so that their special friends can get richer, I doubt that there would be any prosecution.

rogerknights
August 30, 2014 1:20 pm

The previous thread on CA contains much more interesting detective work–of Gavin’s possible ghost-writing of EPA comment-rejection statements:
http://climateaudit.org/2014/08/27/who-wrote-the-epa-documents/

Reply to  rogerknights
August 30, 2014 1:39 pm

Thanks for that link. A most interesting thread!

August 30, 2014 1:31 pm

Thanks, Jean. Impressive!
Mann up Sheet Creek without a paddle?

Eugene WR Gallun
August 30, 2014 1:44 pm

An oldie but a goodie.
THE HOCKEY STICK
There was a crooked Mann
Who played a crooked trick
And had a crooked plan
To make a crooked stick
By using crooked math
That favored crooked lines
Lysenko’s crooked path
Led thru the crooked pines
And all his crooked friends
Applaud what crooked seems
But all that crooked ends
Derives from crooked means
Eugene WR Gallun.

Chris Riley
Reply to  Eugene WR Gallun
August 30, 2014 1:52 pm

I've said it before and I'll say it again: we need a Lysenko prize. We have so many scientists deserving of such recognition.

Robert of Ottawa
Reply to  Chris Riley
August 30, 2014 5:19 pm

There are the Ig Nobel Prizes.

Alberta Slim
Reply to  Eugene WR Gallun
August 30, 2014 2:28 pm

Good one. Sensible and well written.

phlogiston
Reply to  Eugene WR Gallun
August 30, 2014 3:05 pm

Another good one from WUWT’s resident poet – thanks!

Rob Potter
August 30, 2014 1:48 pm

Although non-centered PCA could be considered naive or incompetent, surely this padding of series with zeros must have been a deliberate attempt to produce a desired result?

Wally
Reply to  Rob Potter
August 30, 2014 7:15 pm

Actually, no. When using a window function in signal processing, padding on the end is needed in order to get a result running to the end of the existing time series. This is normal practice in engineering. What's the problem here is the juggling of the filter and then not explaining any of the details.

Wally
Reply to  Wally
August 30, 2014 7:17 pm

I should add: the padding is normally done (for a window or filter of some width) with the last data point, not zero. Using zero introduces a bias in the last N/2 (or so) outputs.
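A small Matlab illustration of Wally's point (toy values, not anyone's actual processing): zero padding drags a warm-ending series down at the end, while last-value padding does not.

b = ones(21,1)/21; % simple 21-point moving average as the smoother
x = linspace(0, 1, 200)'; % a rising, warm-ending series
yz = conv([x; zeros(10,1)], b, 'same'); % zero padding: the last
% ~10 smoothed values of the original span are biased low
ye = conv([x; x(end)*ones(10,1)], b, 'same'); % last-value padding:
% the end bias largely disappears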

Observer
August 30, 2014 1:48 pm

I'm not sure I completely understand your post here. I have considerable experience with DSP, but I don't understand from your description what the data processing steps are and in what order.
Regarding windows, they are often used in the design of finite-length (FIR) filters and to multiply finite-length time records when doing spectrum estimation. See for example Oppenheim and Schafer, sections 5.5 and 11.4.2. Note that in both cases the window is a multiplier of data points; the window values are NOT used as the coefficients of an FIR filter. As a result, the window length is ALWAYS equal to the length of the input data record. So from this perspective, it is not necessarily ridiculous that he used a window length equal to the length of his data.
That said, whether it makes any sense to apply a window in this situation depends on how and where it is applied in the data processing sequence, and I could not decipher that from your posting. Could you post back with more detail on the actual data processing steps and the sequence of those steps?
Out of curiosity, what is the modified Hamming window used by Mann? The "standard" Hamming window is this (I don't know if I can put equations into posts here, so this is just text):
a - (1-a)*cos((2*pi*n)/(N-1)), for n = 0, 1, ..., N-1
where N is the data record length. For Hamming, a is 0.54.

DirkH
Reply to  Observer
August 30, 2014 1:56 pm

You can use the Hamming or any other window function not only as a multiplier (the classic windowing application) but equally well as the coefficients of an FIR filter. In this case the frequency damping achieved (the spectrum of the filter) can be derived by Fourier-transforming the Hamming function. In other words, the Hamming function becomes the impulse response of the FIR filter.
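In Matlab terms, that description amounts to something like the following sketch (the length is illustrative; the window is normalised to unit DC gain so the smoother preserves the mean level):

Lw = 51; % illustrative window/filter length
w = 0.54 - 0.46*cos(2*pi*(0:Lw-1)'/(Lw-1)); % Hamming window
b = w/sum(w); % window used directly as FIR taps, unit DC gain
x = randn(600,1); % stand-in series
y = conv(x, b, 'same'); % output = convolution of signal and window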

DesertYote
Reply to  DirkH
August 30, 2014 3:43 pm

Thanks. It was not clear from the article how the Hamming filter was used. I have only used it for windowing. This left me a bit confused.

DirkH
Reply to  Observer
August 30, 2014 1:57 pm

The FIR filter performs the convolution of the signal and the FIR filter coefficients. So the output is the convolution of the signal and the Hamming function in this case.
And yes, it is absolutely insane to have an FIR filter the same length as the entire time series of the signal. All output values except the exact middle one are compromised.

Observer
Reply to  Observer
August 30, 2014 2:03 pm

Found a copy of the Matlab script for this linked on the ClimateAudit post. I'll post back after looking it over…

Observer
Reply to  Observer
August 30, 2014 2:17 pm

So help me out here, DSP folks. It looks to me like this low-pass filter is being designed properly. Here's a link to the Matlab script:
http://www.climateaudit.info/data/uc/lowmann.m
There's also the same link over at ClimateAudit. On line 76 we have the use of the window:
b(1:hn+1)=sin(wc1*s)./(pi*s).*window(1:hn+1);
He IS using the window to multiply a time sequence, not as the coefficients of an FIR filter; and that time sequence is a truncated sin(x)/x function. The sin(x)/x is one possible non-causal way to band-limit an impulse to get a low-pass filter.
One other note: it looks like there is a minor error in the code for the Hamming window on line 121:
window = 0.54-0.46*cos(2*pi*[0:(lw-1)]'/(lw-1));
I think he should be dividing by "lw", not "(lw-1)", but for a length of 500 that's probably not very significant. Other than that, this looks to be an off-the-shelf Hamming window, not a "modified" Hamming window.
So, bottom line: this looks like an okay way to get a low-pass filter. What you do with that filter, and whether it makes sense or not, is another question I have not looked into just yet.

Observer
Reply to  Observer
August 30, 2014 2:44 pm

Answered my own question. I was thrown off by the focus on the Hamming window. It's the filter length that's the problem.
This IS an okay way to design an FIR filter, but the part that makes no sense is that the filter length is equal to the length of the signal being filtered. You really only get one data point of output from the filter that does not include some zero-padding of the input data. This indeed makes no sense.
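A quick Matlab check of that count (values illustrative): for a length-N input and a length-L FIR filter, only N - L + 1 output samples involve no padding at all, so L = N leaves exactly one.

N = 581; % e.g. a 581-year series (illustrative)
x = randn(N,1);
b = ones(N,1)/N; % a filter as long as the signal
y_valid = conv(x, b, 'valid'); % samples computed without any padding
numel(y_valid) % prints 1, i.e. N - N + 1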

DirkH
Reply to  Observer
August 30, 2014 2:56 pm

Thanks for checking.

David L. Hagen
Reply to  Observer
August 30, 2014 3:06 pm

To help Steyn, please post at ClimateAudit the “minor error” you found.

Nick Stokes
Reply to  Observer
August 30, 2014 3:09 pm

“You really only get one data point of output from the filter that does not include some zero-padding of the input data.”
Indeed. So looking at the graph at the top, does it really look like it is showing just one point?
Mann says it is a 50-year filter. Jean S seems to say it is a 600-year Hamming. Which does it look like to you?

DirkH
Reply to  Observer
August 30, 2014 3:19 pm

Nick Stokes
August 30, 2014 at 3:09 pm
““You really only get one data point of output from the filter that does not include some zero-padding of the input data.”
Indeed. So looking at the graph at the top, does it really look like it is showing just one point?”
You do not understand. You pad the time series so the convolution has enough material to work with to output as many samples as there are input samples of the original time series. Yet due to the length of the filter, every output sample except the very middle one is influenced by the padding.
Careful researchers document this; as the compromised shoulders of the filter output are to be taken with a grain of salt, the farther to the end points the more.
Careful researchers.

DirkH
Reply to  Observer
August 30, 2014 3:21 pm

“Yet due to the length of the filter, every output sample except the very middle one is influenced by the padding.”
For Mann's Crazy Filter, that is. A responsible researcher would choose a filter length far shorter than the length of the input time series.
A responsible researcher.
You don't wanna have to mark ALL of your output except the middle as "take this with a grain of salt".
Normally.

Bill H.
Reply to  Observer
August 30, 2014 3:31 pm

Nick, I share your bafflement, but in this particular case is the point being made that there's only one point that is "uncontaminated" by zero-padding?

Nick Stokes
Reply to  Observer
August 30, 2014 3:46 pm

” in this particular case is the point being made that there’s only one point that is “uncontaminated” with zero-padding?”
It might be, but it can’t be true. A 50 year Hamming window is just that. Max 50 years affected by padding. And this one doesn’t go to the recent end anyway. There is no way that graph has been through a 600-year Hamming filter.

Alcheson
Reply to  Observer
August 30, 2014 4:35 pm

Nick, you wouldn't mind convincing M. Mann to show up here and explain exactly what he did, would you? So we can clear all of this up? Nah… I didn't think so.

Bill H.
Reply to  Observer
August 30, 2014 5:01 pm

Alcheson, it would be nice if "UC" could turn up as well to explain the "details" that Jean doesn't want to "bore" us with. At the moment there seems to be a distinct lack of hard evidence, and Jean's link to Climate Audit yields only the briefest comment by "UC".

Bernie Hutchins
Reply to  Observer
August 30, 2014 5:07 pm

Responding to your August 30, 2014 at 2:17 pm
The purpose of a point-by-point multiply of a sampled sinc function by a Hamming (or other) window is to reduce passband ripple, related to the Gibbs’ phenomenon. That is, you start with a “crude” impulse response like a sinc (an infinite duration time-domain sinc being the Fourier transform of a frequency-domain rectangle – an ideal low-pass) and use the window to make the impulse response finite (FIR) AND to taper it so as to reduce transition-band ripple (Gibbs). Oldest method in the book. There are many better. The length of the FIR filter is necessarily the length of the window. There will always be end effects (“transients”) for the length of the FIR filter, although a tapered window will reduce these (perhaps by half or so). The design MIGHT be standard, although not by any means so ubiquitous that anyone would have assumed that. You still need to say what you did. Perhaps Mann sensed that using something so basic would not imply sufficient sophistication! Or perhaps simple as it is, he was not sure he got even that “right”.

DirkH
Reply to  Observer
August 30, 2014 5:22 pm

Bernie Hutchins
August 30, 2014 at 5:07 pm
“use the window to make the impulse response finite (FIR) AND to taper it so as to reduce transition-band ripple (Gibbs). Oldest method in the book. There are many better. The length of the FIR filter is necessarily the length of the window. ”
Normally the Hamming window is used to taper the time series to reduce the Gibbs phenomenon, as you describe.
But Mann then used a low-pass FIR filter to smooth the time series so treated. Making the length of that FIR filter equal to the window length of the Hamming function is the stupid part. It should have been much shorter.

Bernie Hutchins
Reply to  Observer
August 30, 2014 7:09 pm

Responding to DirkH August 30, 2014 at 5:22 pm
You said: “Normally the Hamming window is used to taper the time series to reduce the Gibbs phenomenon as you describe. But Mann has then used a lowpass FIR filter to smooth the so treated time series. Making the length of that FIR filter equivalent to the window length of the Hamming function is the stupid part. It should have been much shorter.”
Well – I’m not sure. What we have is Jean S’s Matlab reconstruction of what Mann supposedly did. I need to run that.
After all, an infinite-duration impulse response (like a sinc) is a time series. It is a very standard method of FIR digital filter design to use a (usually Hamming) window to make this impulse response finite and to taper the ends. That is, the Hamming window is part of the DESIGN of the FILTER. See Oppenheim-Schafer-Buck (Chapter 7) or my own humble notes (pages 19-23):
http://electronotes.netfirms.com/EN197.pdf
This is just the design of the filter. The LENGTH matters, as does the choice of cutoff of the original sinc, of course. THEN this impulse response is convolved with the DATA time series in the actual filtering. There is nothing unusual about trying to use a digital low-pass to smooth a time series.

Bernie Hutchins
Reply to  Observer
August 30, 2014 8:16 pm

Follow-Up – Responding to DirkH August 30, 2014 at 5:22 pm
Wow –
First, Jean S's code is scary until you recognize that it is mostly comments and needs to be separated into three scripts. No problem; it takes 10 minutes. Jean's code does what I supposed it did: it designs a filter, and then convolves.
BUT: indeed, it does give a filter length (the "b" output) that is the same length as the input x. Nuts, of course!
But, to be fair, this is NOT Mann's code (Devil's advocate here, so to speak!). What IF the long length does not matter, to the degree that Jean's results look like Mann's results (I think we are just matching a digitized curve)? This might be possible because the impulse response b is very, very small except in the middle region. This is because (1) a sinc function already decays rapidly and (2) it is further tapered to 8% at the ends by the Hamming window.
We need to get this right.

Bernie Hutchins
Reply to  Observer
August 30, 2014 9:13 pm

Two more points:
(1) Note that in Matlab, the filter design view we have here is y = conv(x, (h.*w)), where h is the sinc function and w is the Hamming window; h and w must be the same length for the point-by-point multiply, and all of us feel the designed filter length of h.*w should be much smaller than x. Curiously (!), if the length of w and x were the same, then we could use y = conv((x.*w), h), where h is truncated (a rectangular window the length of the first Hamming window), and get the same result. I was guessing they were different; it's the length of the Hamming window that differs.
(2) All this controversy about the choice of smoothing filter is educational but corresponds to a lesser sin (won't make Drudge!!!). As Steve M. pointed out over on CA, this is "housekeeping" and not by any means revelation. He said: "This was a very technical housekeeping post by Jean S."

Observer
Reply to  Observer
August 31, 2014 12:19 am

Very minor point, but in the interest of accuracy: my comment above about dividing by "lw" instead of "(lw-1)" is incorrect. Dividing by (lw-1) is correct… but again, it makes almost zero difference in the results.

Robert of Ottawa
Reply to  Observer
August 30, 2014 5:24 pm

FIR = finite impulse response.
IIR = infinite impulse response.
Hit these filters with an impulse (a spike) and the FIR response dies down to nothing, depending on its number of coefficients; an IIR can continue ringing like a bell forever. In electronics, FIRs require more hardware but can handle any input signal; IIRs can be very simple but require a much more constrained input signal to remain stable.
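A toy Matlab demonstration of the difference (the filters are chosen for illustration only):

imp = [1; zeros(49,1)]; % unit impulse
hFIR = filter(ones(1,5)/5, 1, imp); % 5-tap moving average (FIR):
% exactly zero after 5 samples
hIIR = filter(0.1, [1 -0.9], imp); % one-pole smoother (IIR):
% decays like 0.9^n and never quite reaches zero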

john robertson
August 30, 2014 1:56 pm

Another nail.
I have often wondered: is there any difference between just making stuff up and refusing to publish the methodology, data and assumptions behind a published paper?

DirkH
Reply to  john robertson
August 30, 2014 1:58 pm

He probably hid it so no DSP expert could point out his incompetence.

Bruce Cobb
August 30, 2014 2:13 pm

Like the great 60's classic song goes:
When a Mann
Loves a Warmin' …

Reply to  Bruce Cobb
August 31, 2014 9:55 am

Clever!

August 30, 2014 2:22 pm

Uh, can we use the “F*-word” yet?
* F = Fraud

August 30, 2014 2:36 pm

Methinks we should call this the "Mann Hammering Filter"; it smooths anything into a hockey stick. Honestly, if this is true, then fraud it is, and I want all the tax-funded fraud exposed in courtrooms and all that funding paid back to the US treasury. Thank you, Jean S., for this informative post.

jim hogg
August 30, 2014 2:42 pm

Observer: thanks for that. I look forward to more explication.

rogerknights
August 30, 2014 2:45 pm

Thanks so much for the darker and more readable low-level typeface, Anthony. (Are you using Georgia instead of Times New Roman?)

rogerknights
Reply to  rogerknights
August 30, 2014 5:44 pm

Oh no, I jinxed it! (Briefly the typeface was a readable, dark serif.)

jim
August 30, 2014 2:52 pm

"Uh, can we use the "F*-word" yet? * F = Fraud"
Let's see:
1. He used proxies known to be inappropriate for temperature.
2. He misused PCA.
3. He didn't notice his processing over-weighted hockey sticks.
4. He didn't consult statistics experts, even though his university had a first-class statistics department.
5. Mann hid a key weakness in his primary proxy: tree rings quit representing temperature in the 1960s (the "divergence problem"). Instead of discussing this problem and overcoming it in his paper, he just hid it with a splice of thermometer data. This is a key indicator of fraud because it was intentional and hid a major flaw in his paper.
How many high-school-level mistakes do we allow a PhD before we say it cannot be mere oversight?

Reply to  jim
August 30, 2014 6:09 pm

So, I’ll take your answer as a “maybe”.
/grin

Jimbo
August 30, 2014 2:56 pm

It’s not called the fabricated hockey stick for nothing.

Jean Parisot
August 30, 2014 3:02 pm

Does this address the "flatness" of the hockey stick handle, or is that a feature of the long integration used for the averaging?

Ursus Augustus
August 30, 2014 3:18 pm

Having a 'filter' with the same length as the signal data goes just a tad beyond 'filtering'; rather, it is a complete 're-engineering' of the signal data. The 'hockeyschticked' output is to the original signal what a Dalek, a Cyberman or a Borg is to a human. Even Darth Vader would be disgusted.
I have to go to science fiction for allegories, but in the weird nano-world of Michael Mann's ego, truth is far, far stranger. On 'Laureate' Mann's starship, HAL the computer would be gay and would not let Dave out of the hatch because he wanted to make babies with him, somehow.
Gosh, I am looking forward to the court case. The cross-examination of Mann will be a treat.

Bill H.
Reply to  Ursus Augustus
August 30, 2014 3:38 pm

Ursus, and even more incredible that Mann et al. agrees, within the published uncertainty, with just about every subsequent paleo-reconstruction. An exception is Loehle, but then Craig did make one or two errors, such as not realising that BP meant "before 1950".
But then of course everything can be explained by the appeal to "warmist conspiracy".

cirby
August 30, 2014 3:37 pm

I'm curious: we know, from various tests, that running the "warm filter" on random noise produces a hockey stick.
So what happens when you run it on a flat series? Like, for example, a series that's 72 F across the board, with no variations? If you still get a non-flat result from that, you really have something…

Bill H.
Reply to  cirby
August 30, 2014 3:44 pm

Well, we know that Steve and Ross claim this, though replication of their "result" has yet to be performed.
(Please don't try citing the Wegman fiasco; as blogger Deepclimate has demonstrated, they just ran a program developed by Steve and Ross that… printed out Steve and Ross's results, and then claimed this to be an "independent verification".)

Throgmorton
Reply to  Bill H.
August 30, 2014 9:57 pm

>they just ran a program developed by Steve and Ross that …. printed out Steve and Ross’s results, and then claimed this to be an “independent verification”.
That makes absolutely no sense as a criticism. The mechanism for generating the results was transparent from the paper, and from the code itself. The output from the code would vary with each run due to randomly generated data.
Deepclimate should stick to swingin’ on a porch seat and playin’ banjo.

Joe Crawford
Reply to  Bill H.
August 31, 2014 3:39 pm

Don't knock the banjo! It is the instrument of choice for many mathematicians, as well as many of us Appalachian hill folks.

Larry Cooper
August 30, 2014 3:57 pm

Ladies and Gentlemen, three cheers to Jean S at Climate Audit, who has doggedly pursued this answer for years! No easy task…

whiten
August 30, 2014 4:18 pm

The worst thing that M. Mann is bound to face about his Hockey Stick will be his own AGW cabal. Sooner rather than later he will realise that his AGW cabal has to sacrifice him for the greater good of AGW.
One very serious problem the cabal is facing, and which has the potential to escalate to the point of turning AGW into a laughing stock, is the impossibility of making the models keep producing projections of significant warming in the future.
Once the lost heat and the hiatus have to be factored in, the projected warming has to be downgraded in accordance with the impact of the heat loss. The longer the hiatus and the more heat loss, the less projected warming, and a point could be reached where AGW makes no sense anymore, even from the point of view of a model that is supposed to prove AGW.
There are few ways that can offer a line of extended life support to AGW even in this condition.
The most effective way that I can think of, and probably the only workable one, is to lower the warming of the past, including the much-claimed "observed AGW of the 20th century".
But for better or worse, Mann's Hockey Stick stands stubbornly in the way of all this.
After all, the Hockey Stick seems to be coming back with a vengeance against its own patron…
cheers

Bill H.
Reply to  whiten
August 30, 2014 4:51 pm

Whiten, is there a hiatus in global warming? Atmospheric warming perhaps, but not oceanic warming, if Willis Eschenbach's graph is to be believed (http://wattsupwiththat.com/2013/05/10/the-layers-of-meaning-in-levitus/).
This indicates pretty steady warming of the oceans. Willis has made a brave effort (WUWT, June 10th) to show that this warming is insignificant, though, as commenter XIminyr pointed out, he has only demonstrated that the rate of change of warming is insignificant.
What skeptics need to do is destroy the reputation of this graph in the way that we have destroyed the reputation of the hockey stick. The graph is actually produced by a warmist by the name of Josh Willis. I'm sure with the expertise at WUWT and elsewhere we could debunk his "work". With Mann and Willis (Josh) destroyed, that should be the end of the AGW hoax.

DirkH
Reply to  Bill H.
August 30, 2014 5:26 pm

We don't. If the ocean swallows the Global Warming, we're safe for 100,000 years.
Notice that the dramatic warming of the ocean is given in 10^22 Joules, not in degrees Celsius.
Because it would be so incredibly small in degrees Celsius, everyone would start laughing.
BTW, there's no physical mechanism by which allegedly increased downwelling IR could warm the oceans. It's a travesty.

Bill H.
Reply to  Bill H.
August 31, 2014 12:07 am

Dirk,
A few points. First, where does your 100,000 years figure come from? So far the hiatus is only a few years long; how many depends on which temperature record you select. Lord Monckton chooses RSS, but he does not justify that choice, leaving him open to charges of cherry-picking from the AGW faithful. Prior to that, atmospheric warming was strong. We know that oceans oscillate between storing and releasing heat (e.g. ENSO), so there is no reason to suppose that heat is just going to be trapped indefinitely.
As for changes in ocean temperature being insignificant, as Jo Nova claims, that would be true if the heat were uniformly distributed throughout the oceans, but there's no evidence to support this claim. Temperature changes are much greater near the surface than at depth.
No, as I've said above, climate sceptics need to turn their fire on the warmist Josh Willis (JW). Eschenbach (confusingly also a Willis!) made a brave start on WUWT, but he only managed to show that the increase in the rate of CHANGE of ocean warming indicated by the graph was negligible, not the warming trend itself. In other words, he showed that the second derivative of JW's graph was not significantly different from zero, rather than the first derivative.
We need to show that the graph itself is fraudulent.

richard verney
Reply to  Bill H.
August 31, 2014 12:20 am

There is no quality, reliable evidence that the oceans are warming, nor is there any plausible explanation as to how the ocean can be warming at depth without seeing warming in the upper layers.
But common sense tells us that this cannot possibly be a problem. After all, there have been some 4.5 billion years of solar and DWLWIR (for those who consider that DWLWIR can perform sensible work in the environment in which it finds itself, bearing in mind the absorption characteristics of water and that water is free to evaporate), and after the input of all that energy, over all this time, the ocean has been heated to only about 3 degC. If a little more DWLWIR is now going into the oceans, it is not going to do much over the course of the next few hundred years, given that solar + DWLWIR has not managed to achieve much in 4.5 billion years.
If by chance more of the deep ocean comes back to the surface, or comes up quicker (and in this regard the thermohaline circulation is measured in 1000+ years), it will cool the ocean surface, not warm it. We see that on a small scale in La Niña conditions.
If the deep ocean has eaten the energy/heat, then the energy is well and truly diluted and dissipated and cannot be reconcentrated in a way that would come back to bite. AGW is certainly over if, from now onwards, any energy imbalance will be absorbed in the deep ocean.

David A
Reply to  Bill H.
August 31, 2014 2:47 am

Bill, all of the above; and did you note that the claimed warming of the oceans is also only about 1/3 of what the ever-wrong models project?

whiten
Reply to  Bill H.
August 31, 2014 5:15 am

Hello Bill H.
First, let me tell you that I do not consider AGW a hoax, and I have never held the ground of "let's totally discredit it".
At worst I consider it an error, a scientific error. And as I have been very fond of the idea that we can learn a lot from our own errors, either the accidental ones or otherwise, I have no choice but to accept that ACC-AGW owns some merits and credits. You see, it is a point where the struggle for a better understanding of climate begins.
From my point of view, the term "AGW hoax" belongs with those who favor a never-ending war on the climate change subject, a way of keeping the climate wars fueled.
You say:
"What skeptics need to do is to destroy the reputation of this graph in the way that we have destroyed the reputation of the hockey stick. The graph is actually produced by a warmist by the name of Josh Willis. I'm sure with the expertise at WUWT and elsewhere we could debunk his "work". With Mann and Willis (Josh) destroyed that should be the end of the AGW hoax."
If you read carefully what I said in my previous comment, your suggestion above is, in principle, at odds with the point I was trying to make.
Explaining my point in a very thick line, it would be something like:
"Let the King fall on his own sword, and be done with the war."
You, on the other hand, suggest the very opposite.
While it was OK to debunk Mann and show everyone at the very least his mediocrity, I myself see no need to do the same with Josh Willis's graph; furthermore, if such a need exists, it lies with the AGWers, or the warmistas as you may call them.
You see, the association of the heat loss with the oceanic warming means that the warmer the oceans, the bigger the heat loss: the very meaning of a thorn to ACC-AGW.
Also, according to IPCC AR4, warming detected in one part of a system while heat loss is detected in another part of the system is pretty much considered natural, not anthropogenic.
You see, according to the IPCC, ACC-AGW stands for a warming detected while at the same time there is no detected heat loss, simply because such a kind of warming would not be natural.
So I don't see why anybody should go to a fight or a "war" with the intent of preventing Josh Willis from offering the "King" another sword… :)
Sorry, I do not mean or want to be impolite, but I have got to say this: you do sound to me like Dana Nuccitelli.
cheers

DirkH
Reply to  Bill H.
August 31, 2014 5:54 am

Bill H.
August 31, 2014 at 12:07 am
“Dirk,
A few points. First, where does your 100,000 years figure come from? So far the hiatus is only a few years.”
Assuming for the moment that the Global Warming models are right, and that at the same time Trenberth is right and heat does accumulate, but into the oceans, and that Schellnhuber is right and we must stay under 2 deg C warming to be safe (because it would be just catastrophic if Siberia became habitable, no?); well, let's assume all this, and let us then compute the heat capacity of the ocean. We will find that said capacity is 1000 times larger than the heat capacity of the atmosphere.
So it will take 1000 times as long to warm up by 2 degrees.
The end.

Reply to  Bill H.
August 31, 2014 6:03 pm

Not only that, it doesn't necessarily matter if the oceans are warming now. They almost assuredly warmed during the 1940-1970 cooling episode as well, and likely cooled and released heat into the atmosphere through increased water evaporation rates during the warm phase of the ocean cycle. If the rate of warming now is the same as the rate of warming in 1940-1970, it is simply part of the natural cycle, and thus it SHOULD NOT be used to claim that the earth is still warming, as it is an independent, cyclic, natural variable. What needs to be done to show that this ocean warming is important is to show that it did not happen like this during the 1940-1970 episode. However, the ocean data for 1940-1970 is not accurate enough for this task, so they'd be up the proverbial s*creek for claiming with any real credibility that the earth is still warming.

TC
Reply to  whiten
August 31, 2014 2:44 am

Whiten: "After all, the Hockey Stick seems to be coming back with a vengeance against its own patron…"
Ah, so it's a boomerang then.

Robert of Ottawa
August 30, 2014 5:29 pm

I wonder who a certain Bill H. is.

Uncle Gus
Reply to  Robert of Ottawa
August 30, 2014 6:27 pm

One of Mann's cronies finally getting smart?
He has the trick of picking on something you've thought of for years as thoroughly debunked, and coming up with a whole bunch of studies that you've never heard of, that were never mentioned at the time (although, to be honest, global warmers usually depend more on pictures of polar bears than on reasoned argument when their backs are against the wall, and it is just about possible that the proof was there all along and they never thought of mentioning it!), and that you certainly haven't got time to research.
And yet he claims to be a sceptic.

markx
Reply to  Uncle Gus
August 30, 2014 8:07 pm

It is hard to see exactly where you are coming from, Gus.
This IS science: the pursuit of knowledge, the pursuit of explanations.
I, for one, am very impressed.

rhhardin
August 30, 2014 5:36 pm

Any old thing you do that is smoothish is a low-pass filter. It's a little weird that so much thought went into it, either by Mann or his critics. There's nothing magic in the classical filter formulas except that they may optimize some measure of smoothing vs time extent or something. That is, your resolution may be worse than it could be if you picked a better filter, but smoothing is smoothing.
The data is so awful that it hardly matters what is done here.
As for Mann, he's going to be drummed out of climate science as a scapegoat. Climate science hopes to purify itself through the ritual.
That doesn't make it a science, however.
It's just a sociological move.

Reply to  rhhardin
August 30, 2014 6:24 pm

If you wanted to do an intelligent analysis of the temperature data, with proxies and everything, you wouldn't do this kind of filtering at all. The classical stuff would just show you roughly what the data looked like, but wouldn't prove anything. For one thing, it's not a stationary process, so Fourier-oriented things (low-pass filters among them) are not appropriate and give you mostly artifacts.
And even with a stationary process, who knows how PCA interacts with low-pass filtering. Surely that gives you artifacts too.
The right way, or one right way, is a sort of Kalman filtering, based on models of how things might change.
That would give the most probable model from among the family you offered it, and it automatically adjusts to different kinds of data (proxy, instrumental) very explicitly.
So whatever your result, you'll be able to say what choices were offered, and it could be criticized and improved from there.
As it is, you just reject the entire mess and hope that somebody with some first-principles competence turns up to do it over.
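For the curious, a minimal Matlab sketch of the kind of state-space approach alluded to here: a scalar random-walk Kalman filter. The noise variances are illustrative assumptions, not fitted values.

z = cumsum(0.1*randn(200,1)) + 0.5*randn(200,1); % noisy observations
q = 0.01; r = 0.25; % assumed process / measurement noise variances
xhat = z(1); p = 1; est = zeros(size(z));
for k = 1:numel(z)
  p = p + q; % predict: random-walk state
  K = p/(p + r); % Kalman gain
  xhat = xhat + K*(z(k) - xhat); % update with observation k
  p = (1 - K)*p;
  est(k) = xhat; % smoothed estimate at step k
end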

Uncle Gus
Reply to  rhhardin
August 30, 2014 6:34 pm

They haven't even passed him the black spot yet!
Bear in mind how iconic the guy is in the climate industry, and also the fact that they *don't do scapegoats*. Everything has to be true. Every last little thing. Mount Kilimanjaro? Got to be climate change. Polar bears? Practically extinct. Pause? What pause?
There must not be the slightest little crack, or the whole wall comes tumbling down…

Throgmorton
Reply to  Uncle Gus
August 30, 2014 10:08 pm

It’s true – they can’t let go of anything!
It is just like the famous Indian monkey trap – a coconut full of goodies with a hole big enough for the monkey’s open hand to go in, but not big enough for the monkey to withdraw his closed hand. The monkey won’t drop his prize even at the cost of being caught.

August 30, 2014 6:26 pm

Love the new and improved (apparently) ads. They seem related to viewer and topic.
The previous article about solar energy on the courthouse got me a local BC Hydro ad (touting something to do with their smart energy program etc.).
This article is even better, with some pictures inviting me to click "10 something…". But the best one (I guess referencing M. Mann's friends) is titled "10 Sexiest Hollywood Nerds". Very appropriate!

August 30, 2014 6:30 pm

Mann

August 30, 2014 6:31 pm

Mann took man for a fool.
Now fate has Mann.
History came.
Man now knows Mann.

jones
August 30, 2014 7:03 pm

Dr Mann, Sir,
Can you sense the vultures beginning to circle above your head yet?
I am saddened by it all in deepest truth but I’m afraid it is deserved in your case.
You will be able to repent at leisure Sir.
I await the next couple of years with great interest.
Jones

Pamela Gray
August 30, 2014 7:10 pm

Mann will likely now submit a research paper naming his filter after himself, of course, and will manage to get a herd of co-authors to sign on to the paper as well. We have already witnessed enthusiastic journals prostituting themselves at the feet of climate scientists feeding at the watermelon-targeted gravy train. After all, the journals deserve some of our hard-earned money rapidly leaking out of our wallets too.
Lucky for him, WUWT posters and commenters have done the outlining for him. All he has to do is scrub the well-deserved "what an idiot" color commentary out of the text and he is good to go with his first draft.

u.k.(us)
August 30, 2014 7:11 pm

“As most readers are aware…”
===========
Think of the new readers, and the children, who might not be aware.

Darren Potter
August 30, 2014 7:35 pm

Question: what do the results look like if Mann's "special filter" is replaced with a filter using conventional values ("typically the length of a few dozen coefficients at maximum")?
Guessing the results obliterate Mann's Hockey Stick…

Doug Proctor
August 30, 2014 9:33 pm

All the non-statisticians: whatever happened to simply looking at and interpreting the data? Ya can’t do it for climate science.
The fact that such fabulous statistical manipulations are needed says that the signal, i.e. the reality being investigated, is obscure to the point of insignificance. I say "insignificance" because it has to be teased out of the background noise. We can't detect it otherwise. Which means we aren't being affected by it.
What we are saying by all this is that the changes CAGW uses lie in a hidden change in the max and min values, the regional changes. Nowhere is there ANY definite signal of CAGW; it is all in the mathematical grinding of numbers. Nobody can possibly see it. It is only visible as a construct. This is why the "science" is all about models. The public doesn't understand this; the public thinks it is about data, stuff they can see and feel, like a temperature gauge or an anemometer. It isn't.
Every time someone says he can see climate change around him, he is telling an untruth. The changes are all within normal variability, but on a scale that people don't experience, i.e. we experience only a portion of the curve. Every other portion of the curve is different from what we experience because we don't live long enough. And that involves regional changes, not global. And this is why homogenization and adjustments/corrections are so important: the signal is equal to or less than the modifications, at least invisible without the modifications.
There is far too much math going on for the understanding of people like Al Gore or David Suzuki or the general public. Actually, both Al and David don't give a crap: their ideas of social engineering, personal legacy and making a thousand bucks are the dominant factors in what they say and promote as "the truth". But there is too much math going on for the understanding of regular people, even those who claim a college education and an intelligence that allows individual thinking.
When you need a microscope to see it, it ain't really there, leastways when it comes to weather AND climate.

Reply to  Doug Proctor
August 31, 2014 7:00 am

“All the non-statisticians: whatever happened to simply looking at and interpreting the data? Ya can’t do it for climate science.”
I suspect they do look at it properly first, and then, when it does not give them the expected results, they keep adjusting, manipulating and torturing the data until it does.
Not the real climate scientists, but the ones that champion CAGW.

Jerry
August 30, 2014 9:36 pm

A general rule in signal processing is that the number of taps on an FIR filter shouldn’t exceed about 1/3 of the number of points in the input signal.
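In Matlab terms, Jerry's rule of thumb (an informal guideline, not a textbook requirement) is just a guard like:

N = 581; % length of the input record (illustrative)
L = 41; % proposed number of FIR taps
assert(L <= N/3, 'FIR filter too long for this record (1/3 rule)')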

Alan McIntire
August 31, 2014 5:37 am

I wonder what the results would be if the method used on Diederik Stapel were applied to Michael Mann.
http://www.realclearscience.com/journal_club/2014/08/28/can_a_scientists_writing_reveal_fraud_108809.html
Does Mann have any "honest" papers to compare results with?

bit chilly
Reply to  Alan McIntire
August 31, 2014 2:40 pm

May the climate science version of the scientific method now be known as "climate stapel"; it has a certain ring to it.

Don B.
August 31, 2014 6:25 am

Now do you believe in Mann-made global warming?

MattN
August 31, 2014 6:40 am

This is science?

Paul Coppin
August 31, 2014 6:54 am

Crazy engineering and statistical math aside, the takeaway here is that many people have had to spend significant resources attempting to validate or verify scientific work whose details the author will not voluntarily divulge. That alone should have been sufficient to consign Mann's work to the dustbin of time and cost him his position…

Reply to  Paul Coppin
August 31, 2014 7:35 am

+1

Reply to  Paul Coppin
August 31, 2014 1:29 pm

A competent researcher would leave the data files unchanged and create a makefile to do what needs to be done, in his opinion. It's organized, faster and reproducible.
Modulo changes made by overactive graduate students to the compilers, leading to bit rot; but that plagues everybody.

gary gulrud
August 31, 2014 11:44 am

On my very first ISA bus card, a utility card for an OEM, I used a Hamming filter to encode the feature licenses. OK, I'm no genius, but the filter certainly returns deterministic outcomes.

Doubting Rich
September 1, 2014 2:15 am

"… he used the filter length equal to the length of the signal to be filtered!"
I think this deserves emphasis. If the filter is as long as the signal, then the filter can determine the output entirely, as selected by the choice of filter parameters. So the fact that Mann did not publish the filter details makes the paper utterly meaningless.

Bernie Hutchins
Reply to  Doubting Rich
September 1, 2014 10:04 pm

"Filter length" can be deceptive. Suppose we have a length-1000 FIR filter. Suppose further it is (as is commonly found) symmetric and clustered about the center, so that the first 450 taps are very tiny relative to the center taps, as are the last 450. Effectively it is now more like length 100. And so on. Using Jean S's Matlab code, this is found because the "b" output, the impulse response, is a sinc function (already center-clustered) multiplied by a Hamming window that further tapers the ends to just 8% of the center.
You are correct: if he filters, he should have fully described the processing.
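That "effective length" point can be checked numerically in Matlab (cutoff and length illustrative): count how many of the largest taps carry, say, 99% of the filter's energy.

L = 1001; wc = 2*pi/50; % long windowed-sinc design, illustrative
n = (0:L-1)' - (L-1)/2;
h = sin(wc*n)./(pi*n); h((L+1)/2) = wc/pi; % sinc taps
b = h .* (0.54 - 0.46*cos(2*pi*(0:L-1)'/(L-1))); % Hamming-tapered
e = cumsum(sort(b.^2,'descend'))/sum(b.^2); % energy of largest taps
effLen = find(e >= 0.99, 1) % taps holding 99% of the energy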

notwise
September 1, 2014 2:16 am

Can someone, in layman's terms, summarize the meaning and consequences of this finding? Broadly, what does it say about Mann and his methods? Is there any reasonable justification for doing what was done?
This airplane mechanic wants to know.

Bernie Hutchins
Reply to  notwise
September 1, 2014 10:39 pm

I think it does not tell us much about Mann that is new. Same pattern. Much as Mann should have consulted someone about the faulty normalization of his signals in his PCA, he should have asked more about DSP. Possibly someone can correctly conclude that he/she is using a less-familiar mathematical procedure correctly when it gives "correct" answers to STANDARD exercises. It would of course be tempting to fool yourself that you have "finally gotten the math right" when you spot what you were HOPING to see come out of new data! Nope. And withholding the procedures and data just increases suspicion. So, same pattern. At minimum it looks insouciant, but there is no new "smoking gun" from this, in my opinion.