
Tonight, a prescient prediction made on WUWT shortly after Gleick posted his confession has come true in the form of DeSmog blog making yet another outrageous and unsupported claim in an effort to save their reputation and that of Dr. Peter Gleick as you can read here: Evaluation shows “Faked” Heartland Climate Strategy Memo is Authentic
In a desperate attempt at self-vindication, the paid propagandists at DeSmog blog have become their own “verification bureau” for a document they have no way to properly verify. The source (Heartland) says it isn’t verified (and is a fake), but that’s not good enough for the Smoggers; it’s a threat to them, so they spin it and hope the weak-minded regurgitators will retweet it and blog it unquestioned. They didn’t even bother to get an independent opinion. It seems to be just climate news porn for the weak-minded Suzuki followers upon which their blog is founded. As one WUWT commenter (Copner) put it: “triple face palm”.
Laughably, the Penn State sabbaticalized Dr. Mike Mann accepted it uncritically.
Evaluation shows “Faked” Heartland Climate Strategy Memo is Authentic bit.ly/y0Z7cL – Retweeted by Michael E. Mann
Tonight in comments, Russ R. drew attention to a prediction he made two days ago:
I just read Desmog’s most recent argument claiming that the confidential strategy document is “authentic”. I can’t resist reposting this prediction from 2 days ago:
Russ R. says:
February 20, 2012 at 8:49 pm
Predictions:
1. Desmog and other alarmist outfits will rush to support Gleick, accepting his story uncritically, and offering up plausible defenses, contorting the evidence and timeline to explain how things could have transpired. They will also continue to act as if the strategy document were authentic. They will portray him simultaneously as a hero (David standing up to Goliath), and a victim (an innocent whistleblower being harassed by evil deniers and their lawyers).
2. It will become apparent that Gleick was in contact with Desmog prior to sending them the document cache. They knew he was the source, and they probably knew that he falsified the strategy document. They also likely received the documents ahead of the other 14 recipients, which is the only way they could have had a blog post up with all the documents AND a summary hyping up their talking points within hours of receiving them.
3. This will take months, or possibly years to fully resolve.
Russ R. is spot on, except maybe for number 3, and that’s where you WUWT readers and crowdsourcing come in. Welcome to the science of stylometry / textometry.
Since DeSmog blog (which is run by a public relations firm backed by the David Suzuki Foundation) has no scruples about calling WUWT, Heartland, and skeptics in general “anti-science”, let’s use science to show how they are wrong. The hilarious thing about that, of course, is that these guys are just a bunch of PR hacks; there isn’t a scientist among them. As Megan McArdle points out, you don’t have to be a scientist to figure out that the “Climate Strategy” document is a fake; common sense will do just fine. She writes in her third story on the issue: The Most Surprising Heartland Fact: Not the Leaks, but the Leaker
… a few more questions about Gleick’s story:
How did his correspondent manage to send him a memo which was so neatly corroborated by the documents he managed to phish from Heartland?
How did he know that the board package he phished would contain the documents he wanted? Did he just get lucky?
If Gleick obtained the other documents for the purposes of corroborating the memo, why didn’t he notice that there were substantial errors, such as saying the Kochs had donated $200,000 in 2011, when in fact that was Heartland’s target for their donation for 2012? This seems like a very strange error for a senior Heartland staffer to make. Didn’t it strike Gleick as suspicious? Didn’t any of the other math errors?
So, let’s use science to show the world what the common-sense geniuses at DeSmog haven’t been able to do themselves. Of course, I could do this analysis myself and post my results, but the usual suspects would just say the usual things: “denier, anti-science, not qualified, not a linguist, not verified,” etc. Basically, as PR hacks, they’ll say anything they can dream up and throw it at us to see what sticks. But if we have multiple people take on the task, well then, their arguments won’t have much weight (not that they do now). Besides, it will be fun and we’ll all learn something.
Full disclosure: I don’t know how this experiment will turn out. I haven’t run it completely myself. I’ve only familiarized myself enough with the software and science of stylometry / textometry to write about it. I’ll leave the actual experiment to the readers of WUWT (and we know there are people on both sides of the aisle that read WUWT every day).
Thankfully, the open-source software community provides us with a cross-platform open source tool to do this. It is called JGAAP (Java Graphical Authorship Attribution Program). It was developed for the express purpose of examining unsigned manuscripts to determine a likely author attribution. Think of it like fingerprinting via word, phrase, and punctuation usage.
From the website main page and FAQs:
JGAAP is a Java-based, modular program for textual analysis, text categorization, and authorship attribution, i.e. stylometry / textometry. JGAAP is intended to tackle two different problems: first, to allow people unfamiliar with machine learning and quantitative analysis to use cutting-edge techniques on their text-based stylometry / textometry problems; and second, to act as a framework for quickly and easily testing and comparing the effectiveness of different analytic techniques on text analysis.
What is JGAAP?
JGAAP is a software package designed to allow research and development into best practices in stylometric authorship attribution.
Okay, what is “stylometric authorship attribution”?
It’s a buzzword to describe the process of analyzing a document’s writing style with an eye to determining who wrote it. As an easy and accessible example, we’d expect Professor Albus Dumbledore to use bigger words and longer sentences than Ronald Weasley. As it happens (this is where the R&D comes in), word and sentence lengths tend not to be very accurate or reliable ways of doing this kind of analysis. So we’re looking for what other types of analysis we can do that would be more accurate and more reliable.
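To make the Dumbledore/Weasley example concrete, here is a minimal sketch (not part of JGAAP; the two text samples are invented stand-ins) that computes the two naive features mentioned above, average word length and average sentence length:

```python
import re

def length_features(text):
    """Return (average word length, average sentence length in words)."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return (sum(len(w) for w in words) / len(words),
            len(words) / len(sentences))

dumbledore = ("It is our choices that show what we truly are, "
              "far more than our abilities.")
weasley = "Bloody hell. That was mad. Did you see it?"

print(length_features(dumbledore))  # one long 15-word sentence
print(length_features(weasley))     # three short sentences
```

As the FAQ notes, these two numbers alone make a poor fingerprint: they vary wildly with genre and topic, which is why JGAAP’s research focus is on finding more reliable features.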
Why would I care?
Well, maybe you’re a scholar and you found an unsigned manuscript in a dusty library that you think might be a previously unknown Shakespeare sonnet. Or maybe you’re an investigative reporter and Deep Throat sent you a document by email that you need to validate. Or maybe you’re a defense attorney and you need to prove that your client didn’t write the threatening ransom note.
Sounds like the perfect tool for the job. And, best of all, it is FREE.
So here’s the experiment and how you can participate.
1. Download and install the JGAAP software. It’s pretty easy, and works on Mac/PC/Linux.
If your computer does not already have Java installed, download the appropriate version of the Java Runtime Environment from Sun Microsystems. JGAAP should work with any version of Java at least as recent as version 6. If you are using a Mac, you may need to use the Software Update command built into your computer instead.
You can download the JGAAP software here. The jar will be named jgaap-5.2.0.jar; once it has finished downloading, simply double-click on it to launch JGAAP. I recommend copying it to a folder and launching it from there.
2. Read the tutorial here. Pay attention to the workflow and the steps required to “train” the software. Full documentation is here. Demos are here.
3. Run some simple tests using some known documents to get familiar with the software. For example, you might run tests using some posts from WUWT (saved as text files) from different authors, and then put in one that you know who authored as a test, and see if it can be identified. Or run some tests from authors of newspaper articles from your local newspaper.
4. Download the Heartland files from DeSmog Blog’s original post here. Do it fast, because this experiment is the one thing that may actually cause them to take the files offline. Save them all together in one folder. Use the “properties” section of your PDF viewer to determine authorship. I suggest appending the author names (like J.Bast) to the end of the filenames to help you keep things straight during analysis.
5. Run tests on the files with known authors based on what you learned in step 3.
6. Run tests of known Heartland authors (and maybe even throw in some non-Heartland authors) against the “fake” document, 2012 Climate Strategy.pdf.
You might also visit this thread at Lucia’s and get some of the documents Mosher used to compare visually to tag Gleick as the likely leaker/faker. Perhaps Mosher can provide a list of the files he used; if he does, I’ll add them here. Other Gleick-authored documents can be found around the Internet and at the Pacific Institute. I won’t dictate any particular strategy; I’ll leave it up to our readers to devise their own tests for exclusion/inclusion.
7. Report your findings here in comments. Make screencaps of the results and use tinypic.com or Photobucket (or any image-drop web service) to leave the images in comments as URLs. Document your procedure so that others can test/replicate it.
8. I’ll then make a new post (probably this weekend) reporting the results of the experiment from readers.
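As an aside on step 4, reading the author field from a PDF’s properties can also be scripted. Here is a naive, standard-library-only Python sketch; the sample bytes are invented, and the approach only works when the metadata is stored as an uncompressed literal string, so for real files the viewer’s Properties dialog (or a library such as pypdf) is more reliable:

```python
import re

def pdf_author(data: bytes):
    """Scan raw PDF bytes for an /Author (...) literal in the Info dictionary.

    Naive: only finds plainly-encoded, uncompressed metadata strings.
    """
    m = re.search(rb"/Author\s*\((.*?)\)", data)
    return m.group(1).decode("latin-1") if m else None

# Minimal invented PDF fragment, just to exercise the function
sample = b"%PDF-1.4\n1 0 obj\n<< /Title (Strategy) /Author (J.Bast) >>\nendobj"
print(pdf_author(sample))  # J.Bast
```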
As a final note, I welcome comments now, in the early stages, for any suggestions that may make the experiment better. The FBI and other law enforcement agencies investigating this have far better tools, I’m told, but this experiment might provide some interesting results in advance of their findings.

Philemon says:
February 24, 2012 at 4:07 pm
Yup. That’s what I’m thinking….
The memo writer used the existing formatting from a previously loaded .doc or .pdf.
OK-
Might have found something.
This is the Pac Inst 2011 Funders List which, ironically, was posted this morning:
http://www.pacinst.org/about_us/financial_information/funders_2011.pdf
The heading, left hand margin, and line spacing/kerning are identical to the Strategy Memo.
However, the font is 14 pt, not 12 pt.
So if one were to load this PDF, then select/delete all text, change the font size to 12 pt., would the kerning be preserved?
(That being said, been at the ‘puter all day, my wife is p***ed. I’m out for awhile :))
The analytical problem is separating the author’s style from the subject matter and content that have been lifted from Heartland sources. So there are mixed DNA traces in the text, and word-match tests are going to get tangled. The punctuation and letterhead clues are going to be less contaminated.
I ran an n-gram analysis of the writings of several climate-related authors and checked to see how well they matched the unknown memo; this is a fairly simplistic method, but interesting nonetheless. The 5 best matches were:
1. Richard Littlemore 21.08%
2. John Mashey 18.87%
3. Peter Gleick 18.63%
4. David Karoly 18.38%
5. Joe Bast 18.29%
(As a control, I included a Woody Allen short story; he scored a 6.91% match, Ha! A result! Woody Allen did not write the fake memo ….. where’s my government grant?)
All of which says this is too blunt a tool to use, and more specific algorithms (stop-word analysis, for example) might show better results.
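The comment above doesn’t say exactly how those match percentages were scored, but one simple version of the idea (what fraction of the unknown document’s character n-grams also appear in a candidate author’s sample) can be sketched like this, using invented toy texts:

```python
def char_ngrams(text, n=3):
    """Set of character n-grams in whitespace-normalized, lowercased text."""
    t = " ".join(text.lower().split())
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def match_percent(known, unknown, n=3):
    """Percent of the unknown text's n-grams that also occur in the known sample."""
    u = char_ngrams(unknown, n)
    return 100.0 * len(u & char_ngrams(known, n)) / len(u)

# Invented toy corpora, for illustration only
author_a = "Our strategy must undermine the alarmist message at every turn."
author_b = "The cat sat on the mat and purred softly in the warm sun."
memo = "This strategy will undermine the alarmist message."

for name, sample in [("author_a", author_a), ("author_b", author_b)]:
    print(name, round(match_percent(sample, memo), 2))
```

Note how even the off-topic author_b scores above zero, because common English fragments (“the”, “at ”, and so on) overlap regardless of authorship; that background similarity is exactly why the raw percentages above cluster so closely and why the commenter calls this “too blunt a tool.”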
I used JGAAP to compare the 2012 Climate Strategy document from DeSmog’s blog to a set of documents that I was reasonably confident had been authored by either Joe Bast or Peter Gleick. The set of documents that I used for training follow:
Bast 01 – Email to Judy Curry 02/24/12
Bast 02 – HI doc: 10 Bold New Projects for 2012
Bast 03 – HI Press Release 02/20/12
Bast 04 – HI Press Release 02/24/12
Gleick 01 – Forbes: Reply to Taylor 01/25/12
Gleick 02 – Forbes Article 01/05/12
Gleick 03 – Forbes Article 01/27/12
Gleick 04 Huffington Post Blog 02/20/12
The results of the analysis by Nearest Neighbor Driver with metric Kendall Correlation Distance using Character 2Grams as events are:
Gleick 02 = 0.1746
Gleick 04 = 0.2008
Bast 02 = 0.2839
Bast 01 = 0.3090
Gleick 01 = 0.3117
Bast 04 = 0.3189
Bast 03 = 0.3902
Gleick 03 = 0.3920
Which indicates that the author of Gleick 02 is the most likely author of the 2012 Climate Strategy document. I have no insights about the relative significance of the above-listed values for each document.
I also used other event drivers (MW Function Words, Sentence Length, and Syllables per Word), analysis methods (Centroid, Linear SVM, Markov Chain), and distance metrics (Keselj-weighted, Pearson Correlation). Gleick 02 was the most likely author in all cases except Centroid w/Kendall Correlation Distance and Nearest Neighbor w/Keselj distance, which indicated that Gleick 04 is the most likely author of the 2012 Climate Strategy document.
IMHO, based on my analysis, Peter Gleick is the most likely author of the 2012 Climate Strategy document.
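For readers wondering what “Nearest Neighbor Driver with Kendall Correlation Distance using Character 2Grams” does mechanically, here is a simplified stand-in (this is not JGAAP’s actual implementation, and the training texts are invented): build a character-bigram count vector for each document over a shared vocabulary, score each training document by one minus its Kendall rank correlation with the unknown document’s vector, and report candidates nearest-first:

```python
from collections import Counter

def bigram_counts(text):
    """Character 2-gram counts over lowercased, whitespace-normalized text."""
    t = " ".join(text.lower().split())
    return Counter(t[i:i + 2] for i in range(len(t) - 1))

def kendall_distance(x, y):
    """1 minus Kendall's tau-a between two equal-length count vectors.

    O(n^2) pairwise comparison: fine for a demo, slow for big vocabularies.
    """
    n = len(x)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return 1.0 - (conc - disc) / (n * (n - 1) / 2)

def rank_candidates(training, unknown):
    """Return (distance, label) pairs, nearest candidate first."""
    counts = {label: bigram_counts(text) for label, text in training}
    u_counts = bigram_counts(unknown)
    vocab = sorted(set(u_counts).union(*counts.values()))
    u = [u_counts[g] for g in vocab]
    return sorted((kendall_distance([c[g] for g in vocab], u), label)
                  for label, c in counts.items())

# Invented training texts, purely illustrative
training = [
    ("candidate_1", "Water policy and scientific integrity demand transparency."),
    ("candidate_2", "Free markets deliver prosperity when government steps aside."),
]
unknown = "Transparency and integrity matter in water policy and science."
print(rank_candidates(training, unknown))
```

The smaller distances in the commenter’s table are interpreted the same way: the training document whose bigram-rank profile correlates best with the unknown memo is the “nearest neighbor,” and its author is flagged as the most likely writer.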
As RomanM reported, I also found JGAAP easy to use, but it was a bit finicky and didn’t accept *.docx files or a mixture of file types. I finally converted all of the training documents and the unknown document to *.txt files. Also, I found that the user documentation did not help me much in understanding the analytical methods used in the program.
In my 02/27/2012 3:05PM comment, I said that I found the JGAAP user documentation did not provide me much help understanding the analytical methods used in the program. Subsequently, I found this document,
Authorship Attribution
by Dr. Patrick Juola, one of the lead developers of JGAAP, to be very informative. It is lengthy (~100 pages), so have a couple of glasses of wine and read slowly!!