Folks have said that I’m far too hard on Dr. Richard Muller of the Berkeley Earth Surface Temperature (BEST Project). So let me stick to the facts. I fear I lost all respect for the man when he broke a confidentiality agreement with Anthony Watts, not just in casual conversation, but in testimony before Congress. So there’s your first fact.
[NOTE: ACTUAL FIGURE 1 PHOTO REMOVED BY THE DEPARTMENT OF FACTUALITY FOR INSUFFICIENT FACTITIOUSNESS.]
Figure 1. Actual un-retouched photo of a verified fact.
Next fact. Dr. Muller has put in motion an impressive publicity machine, including an op-ed piece in the Wall Street Journal, to draw attention to his four new papers. He did this before the papers had been through peer review, and he has been criticized by many people for doing so. I among others have wondered, why release the papers with a big PR blitz before peer review? It made no sense to me. What is his official response to these criticisms? From the Berkeley Earth Surface Temperature web site FAQ:
Why didn’t Berkeley Earth wait for peer review?
Some people think that peer review consists of submitting a paper to a journal and waiting for the anonymous comments of referees. Traditional peer review is much broader than that and much more open. In science, when you have a new result, your first step is to present it to your colleagues by giving presentations, talks at local and international conferences, colloquia, and by sending out “preprints.” In fact, every academic department in the sciences had a preprint library where people would read up on the latest results. If they found something to disagree with, they would talk to or write the authors. Preprint libraries were so popular that, if you found someone was not in the office or lab, the first place you would search would be in the preprint library. Recently these rooms have disappeared, their place taken over by the internet. The biggest preprint library in the world now is a website, arXiv.org.
Such traditional and open peer review has many advantages. It usually results in better papers in the archival journals, because the papers are widely examined prior to publication. It does have a disadvantage, however, that journalists can also pick up preprints and report on them before the traditional peer-review process is finished.
Now, that stuff about it being like traditional and open peer review among colleagues sounds great. Heck, it even sounds progressive, it seems to include the blogosphere; who could oppose that? It's all logical, or at least seems plausible, until you hear what Dr. Muller's unofficial explanation is for the big PR push. Judith Curry reports it like this, after talking about it with Dr. Muller:
… Second, the reason for the publicity blitz seems to be to get the attention of the IPCC. To be considered in the AR5, papers need to be submitted by Nov, which explains the timing. The publicity is so that the IPCC can’t ignore BEST. Muller shares my concerns about the IPCC process, and gatekeeping in the peer review process. SOURCE
There are a few problems with that explanation.
• If Dr. Muller’s real reason for not waiting for peer review is so that it can get into the IPCC report … then why is he being so very much less than accurate and candid on his website?
• Dr. Muller is claiming that somehow the IPCC is not aware of the BEST project, that he needs to advertise because the IPCC scientists never heard of him … I’m just hanging the facts out on the line here. You can decide if he needs to advertise.
• Dr. Muller is also claiming “gatekeeping” by the IPCC, presumably to keep out climate alarmists like himself … I’m just reporting here, sticking to the facts. [FACT] If there is gatekeeping in the IPCC to keep out climate alarmists, the guard at the gate post is not asleep. He is pining for the fjords.[/FACT]
• There is no IPCC deadline in November of any kind. To be eligible for assessment by WG1, the cutoff date is not until next summer. The papers have to be submitted for publication before August 2012. And even then, the papers do not have to be published until the following year, by March 15, 2013. Here’s the timetable:
IPCC AR5 Timetable
CMIP5 and WG1 milestones and schedule
• February: First model output expected to be available for analysis.
• July 18-22: Second Lead Authors Meeting (LA2)
• October 24-28: WCRP Open Science Conference will include a CMIP5 session (Denver, Colorado)
• December 16 – February 10, 2012: Expert Review of the First Order Draft (FOD)
• April 16-20: Third Lead Authors Meeting (LA3)
• July 31: By this date papers must be submitted for publication to be eligible for assessment by WG1.
• October 5 – November 30: Expert and Government Review of the Second Order Draft (SOD)
So why the hurry to get these papers out now? Why the sudden emphasis on the manifold virtues of pre-prints? My best guess is that Dr. Muller wants to get his papers considered by the December-February Expert Review of the First Order Draft.
The reason I say that is that there’s an oddity about the first order draft (FOD). To be in the final IPCC Fifth Assessment Report (AR5), in theory the work must be peer-reviewed. The only exceptions seem to be for WWF opinion pieces.
But to be considered in the IPCC FOD, the rules are much more lax (op. cit.). For the FOD the bar is lower because
preprints, papers submitted, accepted, in press, and published are all eligible for consideration
Which seems to me to be the final link in the chain: it explains why he is talking so much about pre-prints on his website, while at the same time telling Judith that it’s a propaganda show to convince the IPCC to notice him. (In passing, does the push for preprints mean he hasn’t submitted the papers yet? Unknown but possible …)
Disquieting conclusions from the above:
First, from Dr. Muller’s actions it seems to be considered business as usual to try to persuade the IPCC to consider your claims by putting on a huge media blitz so that they can’t “ignore” you. Presumably this is because if the New York Times prints it, it must be science.
Is this how low we’ve fallen? Is this the scientific process the IPCC really uses to select what to consider? I don’t know … but clearly Dr. Muller thinks it is the process the IPCC uses, and that it is a legitimate way to get in the door.
Second, while solid, verifiable pre-print results might be worth a look-in for a first-order draft, these four papers were released without the accompanying data. You might have thought that Dr. Muller released the data when he released the four papers … but if so, you have been fooled by Dr. Muller. I was fooled for a bit too; I didn’t read the fine print.
Someone pointed out that at the bottom of the README file released by Dr. Muller it says:
… This release is not recommended for third party research use as the known bugs may lead to erroneous conclusions due to incomplete understanding of the data set’s current limitations.
In other words, to match the pre-prints, we have pre-data. Isn’t science wonderful?
Now, recall that Dr. Muller’s explanation of putting his papers out into the world right now was to subject them to “traditional and open peer review”. Recall that he is out hyping the results of these papers to anyone who will give him some publicity. He is discussing them in the media. And he is claiming he has put them out for “traditional and open” peer review.
Perhaps Dr. Muller can explain why either we or the media should believe his results when we cannot subject them to any kind of review at all without the code and data.
To summarize, here’s what I think are facts:
• The four papers appear to have been published in pre-print to be eligible for consideration for the first order draft of the IPCC report.
• A very different explanation for that was given in public on the BEST website.
• Dr. Muller thinks that the way to get the four papers into the IPCC report is a full-on media blitz.
• Dr. Muller may be right about that.
• The four papers have been prepared from some unknown subset of a “buggy” dataset.
• The subset was determined by looking at the “current limitations” of the buggy dataset.
• We do not know what the rules for extracting the subset were.
• We do not know what the current limitations of the buggy dataset might be.
• The actual data has not been released.
• Code for the individual papers has not been released.
• Their “homogenized” dataset, containing the result of all of their scalpel slices and adjustments of all types, has not been released.
• Finally, the dataset that they did release was not even the raw data. It was processed by removing the monthly averages … but we don’t know what those averages were, or how they were constructed.
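To see why that last point matters, here is a minimal sketch, with invented numbers and not BEST’s actual code, of what removing monthly averages does to a temperature record: once the per-month means are subtracted, the absolute temperatures cannot be recovered from the anomalies alone.

```python
# Hypothetical illustration only: subtracting monthly averages yields
# "anomalies". This is NOT BEST's code, just a sketch of why the raw
# values are unrecoverable without the averages themselves.
raw = {  # invented station readings, degrees C, keyed by (year, month)
    (1990, 1): 2.0, (1991, 1): 4.0,
    (1990, 7): 20.0, (1991, 7): 22.0,
}

# Per-month average: the mean of all readings for each calendar month.
months = {m for (_, m) in raw}
monthly_avg = {
    m: sum(v for (y, mm), v in raw.items() if mm == m)
       / sum(1 for (y, mm) in raw if mm == m)
    for m in months
}

# The "released" data: anomalies only, with the averages discarded.
anomalies = {k: v - monthly_avg[k[1]] for k, v in raw.items()}

print(anomalies)
# Two very different months (January vs. July) now look identical:
# every reading is just -1.0 or +1.0 from its month's mean, and
# without the monthly averages the absolute temperatures are gone.
```

The point of the sketch: the anomalies alone tell you how each month varied, but not what the temperatures actually were; for that you need the monthly averages, which were not released.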
So, despite a promise of transparency, to great fanfare BEST has released four pre-prints, based on admittedly “buggy” data, without the accompanying code or data to back them up.
That’s what I think are the facts in the case. I leave you to draw any conclusions.
My regards to all.