Why I want Mike Mann’s Emails

By Dr. David Schnare

N.B., Dr. Schnare is the lead attorney in the UVA-Mann email case.

This week Nature magazine published an editorial suggesting that “access to personal correspondence is a freedom too far” and that Michael Mann, whom it favorably compares to Galileo, should have his emails, written and received while he was a young professor at the University of Virginia, protected from public release, chiefly on the basis that to do otherwise would “chill” the work of scientists and academics.  I note that Galileo was forced to keep his work private.  Had he the opportunity, he would have published it far and wide.  Mann is quite the opposite.  He wants to keep secrets and let no one know what he did or how he did it.

Nature, unfamiliar with the facts, the law, and the academic and university policies that apply in this case, conflates too many issues and misunderstands the transparency questions we raise.

The facts of the case include that these emails are more than five years old; that they contain none of the email attachments, no computer code, no data, no draft papers, no draft reports; that the university has already released over 2,000 of them, some academic and some not; that when they were written Mann knew there was no expectation of privacy; that all emails sent or received by a federal addressee are subject to the federal FOIA, and many have already been released; and that nearly 200 of the emails the University refuses to release were released by a whistleblower in England.

That latter group of emails, part of the “Climategate” release, does more than merely suggest Mann engaged in academic improprieties.  They show he was a willing participant in efforts to “discriminate against or harass colleagues” and that he failed to “respect and defend the free inquiry of associates, even when it leads to findings and conclusions that differ from their own.”  Other emails document that Mann’s communications were not “conducted professionally and with civility.”

Thus, emails already available to the public demonstrate that Michael Mann failed to comply with the University of Virginia Code of Ethics and the American Association of University Professors Statement on Professional Ethics.

A question, not mine, but asked by many who are interested in the history of this period, is not whether Mann failed to live up to the professional code expected of him.  It is to what degree he failed to do so and to what lengths the university will go to hide this misbehavior.  If we merely sought to expose Mann’s failure to display full academic professionalism, we would not need these emails.  Those already in the public eye are more than sufficient for any such purposes.

I want those emails for a very different reason.  Our law center seeks to defend good science and proper governmental behavior, and, conversely, to expose bad science and improper behavior.  Without access to those kinds of emails, and, notably, to the research records themselves, it is not possible for anyone to adequately credit good behavior and expose bad behavior.  This is one of two reasons we prosecute this case.  It is the core purpose of a freedom of information act.  Because the public paid for this work and owns this university, it has not merely a right to determine whether the faculty are doing their jobs properly; it has a duty to do so.  This is not about peer review; it is about citizens acting as the sovereign and taking any appropriate step necessary to ensure that those given stewardship over an arm of the Commonwealth are faithfully performing.

The second reason we bring this case is to defend science and the scientific process.  Anyone who has taken a high school science laboratory course knows that the research or experimental process begins with recording what was done and what was observed.  As UVA explains in its Research Policy RES-002, “The retention of accurately recorded and retrievable results is of the utmost importance in the conduct of research.”  Why?  “To enable an investigator to reproduce the steps taken.”

Currently public emails show Mann was unable to provide even his close colleagues with the data he used in some of his papers and could not remember which data sets he used.  A query to UVA shows that the university, which owns “the data and notebooks resulting from sponsored research,” had no copy of Mann’s logbooks and never gave him permission to take them with him when he left UVA.  The university refused to inquire within Mann’s department as to whether anyone there knew whether he even kept a research logbook, so it is impossible for me to know whether he stole the logbook or just never prepared one in the first place.

The emails ATI seeks appear to be all that is left of a record of what he did and how he did it.  Absent access to those emails, anyone seeking to duplicate his work, using the exact same data and methods, has no way to do so.  That is in direct conflict with both good science and the UVA research policy.

Nor should access to these kinds of emails “chill” the academic process.

As a former academic scientist, I understand the need and desire to keep research work close while it is underway.  Both I and the university have a proprietary interest in that work while it is ongoing.  Once it is completed, however, I have a duty to share with the academic community not only the data and methods but also the mistakes, the blind alleys, the bad guesses, and the work and theories abandoned.

Science advances knowledge by demonstrating that a theory is wrong.  All the mistakes, blind alleys and bad guesses are valuable, not just to the scientist himself, but to his colleagues.  By knowing what did not work, one does more than simply save time.  One gains direction.  One mistake revealed often opens a vista of other ideas and opportunities.  The communications between scientists during a period of research are the grist for the next generation of work.  Ask any doctoral candidate or post-doc how important being part of the process is to the direction of their future research.  They will tell you that these unpublished communications are as important a scientific contribution as the final papers themselves.  Anyone who wishes to hide those thoughtful discussions hides knowledge.

If anything is “chilling” it is the thought that a neo-Galileo is hiding knowledge.

213 Comments
November 17, 2011 6:50 pm

John B says:
Here you go…
http://www.skepticalscience.com/broken-hockey-stick.htm
And if you don’t trust SkS, follow the links to the primary sources.
—————————————————
I thought McIntyre and Montford dealt with all the claims made on SkS?! Many of them are specious and weasel-worded anyway.

Venter
November 18, 2011 12:34 am

Exactly.  When somebody uses Skeptical Science as a source, you know they are burnt and cut from the same cloth as the deceivers at SkS.  The same website whose owner was caught cheating, amending comments years later to pretend he was right, and then pretending innocence.  Yeah, quote them as your source, very credible.
Every single claim about Mann’s hockey sticks has been blown apart at CA and by HSI.  If John B and Joel Shore have the guts or capability, let them post here or at CA with facts stating why Mann’s hockey sticks are correct and defend them like capable men do.

RDCII
November 18, 2011 3:16 pm

John B,
This is a perfect example.  After all the critiquing and handwaving, the blog author NEVER SAYS that Mann’s methodology doesn’t turn red noise into hockey sticks.  This is still unrefuted.
In fact, the blog author says several times things like ‘that correction of Mann et al’s “short-centered” PCA’ and ‘In effect, “short-centered” PCA may have promoted “hockey stick” patterns in the proxy data to higher PCs’, making it clear that the issue is well understood.  Further, the author seems to have a good understanding of the issues with Steve’s code, so I feel confident that if the author could have fixed the code, rerun it, and shown that the methodology no longer produces hockey sticks from red noise, he would have done so.
Montford explains the issue, and the issue is still unchallenged, as far as I’ve seen.  Do you have a reference to a paper that proves that Mann’s original methodology doesn’t mine for hockey sticks?
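For readers following the “short-centering” dispute above, the effect is easy to demonstrate on synthetic data.  Below is a minimal toy sketch (my own illustration, not Mann’s or McIntyre’s actual code; the series count, series length, calibration-window size, and AR(1) persistence value are all arbitrary assumptions) of how centering each proxy series on only a late “calibration” window, rather than on its full-record mean, biases the leading principal component toward a hockey-stick shape even when the input is pure red noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def red_noise(n, phi=0.9):
    """AR(1) 'red noise': each value is phi times the previous plus white noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n_series, n_time, calib = 50, 600, 100   # arbitrary toy sizes
X = np.column_stack([red_noise(n_time) for _ in range(n_series)])

# "Short-centered" PCA: subtract the mean of only the final calibration
# window from each series, instead of the full-series mean.
Xs = X - X[-calib:].mean(axis=0)

# Leading principal component (time scores) via SVD.
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
pc1 = U[:, 0] * s[0]

# The calibration window averages to ~zero by construction, while the
# earlier "shaft" is free to sit far from zero -- a hockey-stick bend.
pre_cal = abs(pc1[:-calib].mean())
in_cal = abs(pc1[-calib:].mean())
```

Because each short-centered series averages to zero over the calibration window, PC1 is pinned near zero there while the earlier portion wanders freely; that bend is what the commenters here are arguing over.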

Joel Shore
November 18, 2011 7:52 pm

RDCII: The National Academy of Sciences report on temperature reconstructions addresses this issue pretty directly http://www.nap.edu/openbook.php?record_id=11676&page=113 :

As part of their statistical methods, Mann et al. used a type of principal component analysis that tends to bias the shape of the reconstructions. A description of this effect is given in Chapter 9. In practice, this method, though not recommended, does not appear to unduly influence reconstructions of hemispheric mean temperature; reconstructions performed without using principal component analysis are qualitatively similar to the original curves presented by Mann et al. (Crowley and Lowery 2000, Huybers 2005, D’Arrigo et al. 2006, Hegerl et al. 2006, Wahl and Ammann in press).

So, in other words, this issue is not really important.  It’s a good reason not to use the method in the future, which is why better methods are now used; but, no, the final result was not affected.  They go on to say:

The more important aspect of this criticism is the issue of robustness with respect to the choice of proxies used in the reconstruction. For periods prior to the 16th century, the Mann et al. (1999) reconstruction that uses this particular principal component analysis technique is strongly dependent on data from the Great Basin region in the western United States. Such issues of robustness need to be taken into account in estimates of statistical uncertainties.

Of course, the eagle-eyed amongst you will know that what the NAS report is saying here is nothing that was not already basically said in slightly different words in the Mann et al. (1999) paper ( http://www.deas.harvard.edu/climate/pdf/Mann1999.pdf ):

It is furthermore found that only one of these series — PC #1 of the ITRDB data — exhibits a significant correlation with the time history of the dominant temperature pattern of the 1902-1980 calibration period. Positive calibration/variance scores for the NH series cannot be obtained if this indicator is removed from the network of 12 (in contrast with post-AD 1400 reconstructions for which a variety of indicators are available which correlate against the instrumental record). Though, as discussed earlier, ITRDB PC#1 represents a vital region for resolving hemispheric temperature trends, the assumption that this relationship holds up over time nonetheless demands circumspection. Clearly, a more widespread network of quality millennial proxy climate indicators will be required for more confident inferences.

Steve Keohane
November 19, 2011 5:09 am

Mann’s silly proxy reconstruction compared to Craig Loehle’s ‘anything but trees’ proxy. The latter looks like the reconstructions I have seen over the past 50 years. Mann’s HS is the purple line.
http://i39.tinypic.com/2q3arlw.jpg
Natural variation rules, but we were taught that ‘Mikey’ will eat anything, oh wait, that was ‘surreal’ not real ‘Life’. Sorry for the obscure, off-the-wall humor.

RDCII
November 19, 2011 4:52 pm

Joel Shore…
It is astonishing for something that “doesn’t matter” to be so vociferously defended.
It was like pulling teeth to get data from Mann. The Mann methodology was WRONG. It pulls Hockey Sticks out of Red Noise. Montford’s book explains the history and the statistics well and correctly. Nothing I have said is incorrect.
The fact that Mann’s methodology came to an almost-right answer isn’t a defense of the methodology. That’s like guessing the answer is 42, then working out mathematically that it’s 42, then saying that the methodology is ok or doesn’t matter because it got the right answer. This is why math teachers require that you show your work on tests!
It’s all this dancing around to avoid actually admitting that it was an error that leads me to recommend Montford’s book.  If you guys really, really cannot get yourselves to boldly say the simple truth that Mann’s methodology was WRONG and mines hockey sticks from red noise, you really ought to ask yourselves why.

Joel Shore
November 20, 2011 12:44 pm

My most recent contribution to this thread, written last night, has still not appeared and seems not to have been rescued from the SPAM filter.

Joel Shore
November 20, 2011 2:54 pm

Okay…I guess I have to repeat my post as I remember it. My basic point was, RDCII, that you adopt very much of a two-valued orientation: in your view, the methodology is either right or wrong. A better way to look at it is that the methodology had certain flaws and that is why scientists have now moved beyond this to better methodologies. Such is the natural progress of science. However, so far, the end result of these better methodologies has been to confirm the basic correctness of Mann’s conclusions.
But, do you really think that Montford wrote a whole book that claimed that Mann et al. got basically the right answer but with a problematic method? Why would one waste a whole book on that technical point, especially since scientists have moved beyond this method?
Finally, it is worth noting that it is not unusual at all for the pioneers of a new area of science to use imperfect methods that are later improved upon by others in the field.  Skeptics seem to understand this in the case of Spencer and Christy, whose pioneering work on using satellite data to deduce temperatures in the lower troposphere was plagued by errors — and, in fact, errors that led to a completely erroneous conclusion regarding the temperature trend (that it was cooling when in fact it was warming).  Skeptics seem very forgiving of the errors in this case, even though they had a significant effect on the results.  Perhaps this is because the errors were in a direction that was useful for skeptics… and, in fact, allows people like Fred Singer to continue to this day to deceive about what the satellite temperature record shows.

RDCII
November 20, 2011 3:50 pm

Joel,
Why is that a “Better” way to look at it?  Oddly, I was taught that when an experiment is a failure, you learn from it and try again.  BUT… you don’t deny that the experiment was a failure.  It’s this utter inability of Mann and his supporters to say that this first try was a failure that screams politics rather than science.
I certainly don’t think Montford’s book was about this narrow point. Here is the excerpt from my first posting about Montford’s book on this thread:
“To understand people’s feelings about Mann’s willingness to release data, you have to have been following this for almost a decade…or, as a shortcut, you can read “The Hockey Stick Illusion”, by Andrew Montford, and then you’ll not only get the history, but explanations of Mann’s early abuse of statistics, and how all of this led to Congressional intervention.”
See?  I suggested that there are all KINDS of goodies in there worth reading.  This narrow viewpoint you’re concerned about is your own contribution.
I’ve read the book…have you? If you haven’t, how can you even have an opinion about what it contains?
I wasn’t around for the pioneering work of Spencer and Christy, but…I’ve read Spencer since, and I’ve seen him publicly admit it when he’s WRONG. I think you’re mistaking a skeptical preference for honesty and openness with a political leaning…and a preference for honesty and openness is actually as opposite to politics as you can get. It’s more like…science.
I’m actually trying to help you out here… as long as what you’re saying smacks of political wordplay, as long as you truly can’t even bring yourself to say out loud that Mann’s first attempt was a Failure, you’re less likely to convince skeptics to save the world from CO2.  If saving the world is important to you, stop the political dancing, admit it was an experimental Failure, and then give skeptics the credit for the skill and determination to prove it in the face of rabidly politically-motivated resistance.  Then we can all get back to some Science.

Joel Shore
November 20, 2011 5:24 pm

RDCII: I guess our discussion illustrates the difficulties of getting someone with a two-valued orientation to get beyond that. I tried to explain it; I’m not sure what else I can do.
So, now you are throwing out the additional goodies from Montford’s book, including the issues of Mann releasing everything that McIntyre’s heart desired from him.  So, now I ask you: Where can I find a full listing of the data and code for Spencer and Christy’s algorithm?  I have provided you with links to Mann’s; the least you can do is reciprocate… Or, maybe it is not possible for you to?  What about Wegman’s willingness to answer some very basic questions about the report that he wrote?
Oh yes, I am quite sure this is all about honesty and openness and has nothing to do with the viewpoints of the people involved….Give me a freakin’ break!
I also find it interesting how so many skeptics think that Mann’s fighting e-mail disclosures means he is hiding something and yet no skeptics like Anthony Watts have stepped forward and offered to release all of their e-mails. Arguments that they are not required to because they don’t have government grants or don’t work at a public university aren’t relevant. If they had nothing to hide, don’t you think they’d welcome the opportunity to take the moral high ground?
As to Spencer and Christy’s admissions that they were wrong: Yes, they have to a certain degree admitted their mistakes but they have also tried to minimize them, sometimes with statements that are not completely enlightening. For example, they talk about how one particular correction was not larger than the errorbars on their estimate, but don’t talk about the collective effect of all of the corrections.
And, although it is exceedingly easy to do, the only attempt to study to what extent the changes in the LT temperature trends over the years are due to corrections of the algorithm, and to what extent they are due to the longer temperature series (which is what S&C tend to emphasize), was my own!  All one has to do is take the current version of their temperature series and compute the trend over the same time period that they reported in their papers during the 1990s.  It is quite a surprise that neither Spencer and Christy nor any of the people who seem so interested in “auditing” climate science results they don’t like has shown any interest in performing this very simple audit, let alone demanded the release of all of S&C’s code for a more detailed audit!
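The “very simple audit” Shore describes is just an ordinary least-squares trend over a fixed window.  A minimal sketch follows; the anomaly series below is synthetic, invented purely for illustration — an actual audit would load the current published UAH lower-troposphere series and select the same years the 1990s papers covered:

```python
import numpy as np

# Synthetic stand-in for a monthly lower-troposphere anomaly record
# (deg C); a real audit would substitute the current UAH data here.
years = np.arange(1979, 1998, 1 / 12)            # monthly time axis
anoms = 0.010 * (years - years[0]) + 0.02 * np.sin(2 * np.pi * years)

# Ordinary least-squares fit; the slope is in deg C per year.
slope, intercept = np.polyfit(years, anoms, 1)
trend_per_decade = slope * 10.0                  # deg C per decade
```

Comparing this number, computed from today’s data over the old window, against the trend originally reported for that same window is what would separate the effect of algorithm corrections from the effect of the lengthening record.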

RDCII
November 21, 2011 8:37 pm

I, too, have tried my best to explain that throwing away the standard scientific success/fail standard is a political, rather than scientific, idea, and I also don’t know what else I can do.
Let me try one last analogy: if you order a steak rare and it is delivered burnt, do you pronounce the meal a success and congratulate the chef?  Well, you might… IF it’s really important to you that the restaurant stays open.  The rest of us call a burnt steak a burnt steak.
It is apparently your philosophy that has allowed a bunch of people in this thread to continue to argue against me after I’ve said that Mann’s methodology pulls hockey sticks out of red noise.  Perhaps the value of implementing my “two value” system is that it would fix this little detail in people’s minds?  Because your method doesn’t seem to have accomplished that.
And please, if you’re going to discuss what I said with me, could you read what I wrote? You say “So, now you are throwing out the additional goodies from Montford’s book, including the issues of Mann releasing everything that McIntyre’s heart desired from him”. I am not “now” throwing it out; I said it in my very first posting on this thread that discussed Montford’s book. I only brought it up again because I was accused of NOT saying anything else about Montford’s book. Any subsequent narrowing of focus is due to what others, such as you, wanted to discuss. This feels to me like you came in the middle and never went back to see what I said.
You’ve dodged the question about whether you’ve read Montford’s book…from that, I’m assuming you haven’t, and are not therefore qualified to discuss the contents. I recommend the book to anyone who actually wants to know the early history and statistical errors of Mann.
I will not dodge your question about Spencer and Christy… I cannot provide a link.  I actually don’t keep a set of links; this could be considered a flaw in my character.  I admit it.  (See?  It’s not so hard to admit a failure 🙂 )  I don’t know anything about this Spencer and Christy refusing-to-release-code issue, and I’m truly interested, if you could forward a link that describes the situation.
You don’t believe that skeptics are about honesty and openness.  You’re not a skeptic, though, so you’re not exactly a good resource.  I would tell you my story, but it would take too long… the short version is that I started out fully convinced of AGW and ended a Full Skeptic, unconvinced by the science and feeling betrayed, really, by the lack of openness and honesty and the bad behavior among a group of people I had grown up to trust.  AGW scientists worked very hard, step by step, to create my Skepticism.
The rest of what you wrote seems to expand what is already a dying thread, so I won’t be responding. It would be an interesting discussion in an appropriate thread.

Joel Shore
November 22, 2011 5:20 am

RDCII: Here is the summary of the availability of the UAH and other data and codes to analyze satellite data: http://magicjava.blogspot.com/2010/02/summary-of-aqua-satellite-data-computer.html It seems strange to me that people who are so exercised about some claimed lack of availability of every last little minutia of Mann’s code do not even bother to see if the UAH code is at all publicly available!
By the way, what I heard (and admittedly I don’t remember the source for this) is that when the RSS folks were trying to figure out the discrepancies between their results and UAH’s, they asked not for every last piece of code but for one very specific piece, and Spencer and Christy were not very forthcoming at first, although they eventually did provide it.  The reason you likely haven’t heard vociferous complaints regarding the lack of code release is that scientists generally “replicate” each other’s work by writing code themselves rather than auditing the other person’s code.  Only when there is some sort of issue that they can’t resolve will they ask for some specific piece of code.
