Journals Not Enforcing Their Policies

 

Guest Post by Willis Eschenbach

From an interesting post entitled “Trust and Don’t Bother To Verify” on Judith Curry’s excellent blog, I’ve taken the following quote:

Journals’ growing insistence that at least some raw data be made available seems to count for little: a recent review by Dr Ioannidis showed that only 143 of 351 randomly selected papers published in the world’s 50 leading journals and covered by some data-sharing policy actually complied.

I’ve written before about the data and code archiving policies of the journal Science, and how they are not enforced for certain favored papers. In this regard, consider the case of Pinsky et al. This was a study that said that fishes were moving in the direction of the “climate velocity”. As a fisherman, I’m always interested in such studies. Their results appeared too regular to me, and I wanted to check their work. However, I found that neither their data nor their code was available. So last month, I wrote to the good folk at Science to see if they would enforce their own policies.

From: Willis Eschenbach

Subject: TO: Dr. Marcia McNutt

Date: September 14, 2013 6:30:37 AM PDT

To: Science Editorial <science_editors@aaas.org>

Dear Dr. McNutt:

I have commented publicly in the past on Science magazine not following its own data archiving policy, but only for the favored few with whom the editors agree.

This issue has come up again with the recent publication of the Pinsky et al. study on the migration of fishes in response to climate velocity. Once again, it appears you have published a study without requiring archiving of the data, as is specifically required by your policies. I cannot find a public archive of their data anywhere.

Since that means that their study is not replicable or auditable, it also means their study is not science … so what is it doing in your magazine?

I assume that you will rectify this oversight as soon as possible.

Best regards,

w.

Mmmm. Upon re-reading it, I see that I was not as polite as I might have liked … but then I’ve grown bone-weary of Science not following its own data and code archiving policies for certain climate articles. In response to my email, I got … nothing. Zero. Zip. Nada. Not a word from anyone at Science.

Undaunted, I persevered. After waiting for two weeks, I wrote again, and this time I copied it around the organization:

From: Willis Eschenbach

Subject: Fwd: TO: Dr. Marcia McNutt

Date: October 1, 2013 11:24:03 PM PDT

To: Science Editorial <science_editors@aaas.org>, science_letters <science_letters@aaas.org>, science_bookrevs@aaas.org, Science News <science_news@aaas.org>, gchin@aaas.org, hjsmith@aaas.org

Dear Friends:

I sent the following message two weeks ago to Dr. McNutt. However, it seems to have miscarried.

From: Willis Eschenbach

Subject: TO: Dr. Marcia McNutt

Date: September 14, 2013 6:30:37 AM PDT

To: Science Editorial <science_editors@aaas.org>

Dear Dr. McNutt:

I have commented publicly in the past on Science magazine not following its own data archiving policy, but only for the favored few with whom the editors agree.

This issue has come up again with the recent publication of the Pinsky et al. study on the migration of fishes in response to climate velocity. Once again, it appears you have published a study without requiring archiving of the data, as is specifically required by your policies. I cannot find a public archive of their data anywhere.

Since that means that their study is not replicable or auditable, it also means their study is not science … so what is it doing in your magazine?

I assume that you will rectify this oversight as soon as possible.

Best regards,

w.

I have not received a reply. Perhaps Dr. McNutt was not the proper person to address this to. So I am sending it to other addresses, in the hopes of getting some reply. I’m sorry to bother you, but I would appreciate it if you could pass this to someone who can explain why you are not following your own written policies in this instance.

Many thanks,

w.

This time, I actually got a response, the very next day:

From: Andrew Sugden

Subject: Re: FW: TO: Dr. Marcia McNutt

Date: October 2, 2013 2:59:33 PM PDT

To: Willis Eschenbach

Dear Dr Eschenbach

Thank you for your message to Dr McNutt. I can assure you that we require all data supporting the conclusions of Science papers to be in the public domain; the location of the data is usually specified in the Acknowledgements of each paper, as it was in the case of the Pinsky paper. Please can you double-check the Supplementary Material to the Pinsky et al paper and then specify the data to which you have been unable to gain access? At that point we can ask the authors to provide further details if necessary.

Yours sincerely

Andrew Sugden

And the following day, I replied:

From: Willis Eschenbach <willis@surfacetemps.org>

Subject: Re: TO: Dr. Marcia McNutt

Date: October 3, 2013 9:48:34 AM PDT

To: Andrew Sugden <asugden@science-int.co.uk>

Cc: Science Editorial <science_editors@aaas.org>, science_letters <science_letters@aaas.org>, science_bookrevs@aaas.org, Science News <science_news@aaas.org>, gchin@aaas.org, hjsmith@aaas.org

Dr. Sugden, thank you most kindly for your reply. However, I fear that I’ve double-checked the paper and the SI, and there is far, far too little information, either in the paper itself or in the Supplementary Information, to allow their results to be confirmed, replicated, or falsified.

Here’s an example. It just happens to be the first area on their list, their study of the Eastern Bering Sea. The source of the data is given as being the RACE survey … but other than that we know nothing.

For example, the RACE survey covers 112 species … which of these species did they actually look at, and which ones did they leave out? Then they say they didn’t look at all tows … so which individual tows did they look at, and which did they leave out? Their only information on the subject is as follows:

While surveys were conducted in a variety of seasons (Table S1), we analyze each survey separately and use season-specific temperature data to account for these differences. We restricted our analysis to tows without gear and duration problems, to taxa that were resolved at least to genus, and to taxa that were sampled at least once per year to reduce effects from changes in taxonomic recording or resolution.

Unfortunately, that is far from enough information to be able to tell if their results are real or not.

Look, Dr. Sugden, this is not rocket science. To verify if what they have reported is a real effect, what we readers of Science need is very, very simple. It is a list in plain text that looks like this:

Year   Month   Day   Tow#   Species   Catch     Lat Start   Long Start   Lat End   Long End   Depth    Temperature    Result

1998   3       12    116    capelin   17.6 kg   56.712N     176.55E      56.914N   177.25E    72-75m   11.6-11.9°C    Utilized
1998   3       12    116    sculpin    1.6 kg   56.712N     176.55E      56.914N   177.25E    72-75m   11.6-11.9°C    Excluded, uncertain identification

Without that list showing exactly which data was used, and which data was excluded, and why, their results cannot be falsified … and unfalsifiable claims are not science, and not worth reporting in Science magazine.

What they have done is just waved their hands and pointed at a huge pile of data, and said, “We got our data from that pile.” I’m sorry, but in 2013 that doesn’t cut it. To check their work, we need to know, not where they got their data, but exactly what data was used and what data was excluded. For all we know, there were transcription errors, or bugs in their computer code, or incorrectly categorized results; it could be anything … but there’s no way to tell.

Nor is this an onerous requirement. The block of data representing the entire analysis would be a few megabytes. And presumably, in order to analyze the data, it’s all on the computer. So outputting a list of the data that was actually used or excluded is a few minutes’ work for a junior analyst. [A sketch of such an export follows this letter.]

I fear Science magazine and your Reviewers have dropped the ball on this one, Dr. Sugden. You have not done your due diligence and required the archiving of the data actually used in the study. Without that, you’re just publishing an anecdote, a charming fairy tale told by Dr. Pinsky.

It’s an interesting anecdote, to be sure … but it’s not science.

Please let me know what your magazine intends to do in this case. As it stands, you’ve published something which is totally unfalsifiable, in direct contravention of your own policies. Here are your relevant policies:

Data and materials availability

All data necessary to understand, assess, and extend the conclusions of the manuscript must be available to any reader of Science. All computer codes involved in the creation or analysis of data must also be available to any reader of Science. …

Science supports the efforts of databases that aggregate published data for the use of the scientific community. Therefore, appropriate data sets (including microarray data, protein or DNA sequences, atomic coordinates or electron microscopy maps for macromolecular structures, and climate data) must be deposited in an approved database, and an accession number or a specific access address must be included in the published paper. We encourage compliance with MIBBI guidelines (Minimum Information for Biological and Biomedical Investigations).

Details include but are not limited to:

  • Climate data. Data should be archived in the NOAA climate repository or other public databases.
  • Ecological data. We recommend deposition of data in Dryad.

Clearly, the information that they provided falls woefully short of that required by your policies. No archive of their data. And pointing at a huge pile of data is not sufficient to let me “understand, assess, and extend the conclusions” as your policies require. I don’t have a clue what in the huge pile of data they used and what they excluded, so the information they gave about the location of the huge pile of data is useless.

The requirements, your own requirements, are bozo-simple and easy to comply with. All they need to do is archive the collection of data that they actually used or rejected, and archive the computer code that they used to analyze that data.

They have done neither one …

Please let me know your plan of action on this, both for this paper and in general. As it stands, your magazine is passing off the unverifiable, unfalsifiable anecdotes recounted by Pinsky et al. as if they were real science. This is not the first time that your magazine has done that … and I don’t think that’s good for you personally as a scientist, for the reputation of Science magazine, or for science itself. People are trusting science less and less these days … and the publication of unverified anecdotes as if they were real studies is one of the reasons.

Your requirements for data and code archiving are simple and transparent. Now … you just have to enforce them.

Thanks for your assistance in all of this,

w.
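
To be clear about how small the ask in that letter is: the export I described really is a few minutes’ work. Here is a minimal sketch in Python, with made-up column names throughout, since we don’t have their actual data layout; it illustrates the size of the task, not Pinsky et al.’s actual processing.

import csv

def export_audit_list(infile, outfile):
    """Write every tow/species record back out with a Utilized/Excluded
    flag and the reason for exclusion, mirroring the criteria quoted
    from the paper (tow problems, taxa not resolved to genus)."""
    with open(infile, newline="") as src, open(outfile, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ["Result"])
        writer.writeheader()
        for row in reader:
            # Hypothetical flag columns; the real survey data will
            # differ, but the logic is this simple either way.
            if row["gear_problem"] == "Y" or row["duration_problem"] == "Y":
                row["Result"] = "Excluded, tow problem"
            elif row["taxon_resolution"] not in ("genus", "species"):
                row["Result"] = "Excluded, not resolved to genus"
            else:
                row["Result"] = "Utilized"
            writer.writerow(row)

# Usage: export_audit_list("race_tows.csv", "audit_list.csv")

A dozen lines, give or take. That is the entire burden that enforcing the policy would place on the authors.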

Perhaps overly verbose, but I wanted them to understand the issue. I waited almost two weeks, and when I’d gotten nothing, I wrote back:

From: Willis Eschenbach

Subject: Re: TO: Dr. Marcia McNutt

Date: October 14, 2013 11:00:05 AM PDT

To: Andrew Sugden

Cc: Science Editorial <science_editors@aaas.org>, science_letters <science_letters@aaas.org>, science_bookrevs@aaas.org, Science News <science_news@aaas.org>, gchin@aaas.org, hjsmith@aaas.org

Dear Dr. Sugden:

As I detailed in my attached letter, neither the data nor the computer code for the Pinsky et al. study on the migration of fishes in response to climate velocity is available in a usable form.

While the data is publicly available, there is no detailed list or other means to identify the data actually used in the Pinsky study. Without that, in fact their data is not available—it is a needle in a haystack of needles. And without that, the study cannot be replicated, and thus it should not be published.

In addition, the computer code is nowhere to be found.

Both of these violate your express policies, as detailed below.

It’s been almost two weeks now since my attached letter was sent … I’m sorry to bother you again, but is there any progress in this matter? Or should I just submit this to the Journal of Irreproducible Results? Hey, just kidding … but it is very frustrating to try to see if there are flaws in published science, only to find out that Science itself is not following its own published policies.

My apologies for copying this around, but it may be that I’m not talking to the person in authority regarding this question. Do you have plans to rectify your omission in the Pinsky study, and require that they archive the actual data and code used? And if so, what are the plans?

Or are you going to do the Pontius Pilate?

In any case, any information that you have would be most welcome.

Many thanks for your assistance in this matter.

w.

PS—Please, do not tell me to contact the scientists directly. This is 2013. The exact data and code that the scientists used should be available at 2AM their time to a teenaged researcher in Ghana who doesn’t even speak the scientists’ language. That’s the reason you have a policy requiring the authors to archive or specifically identify their data, and to post their code. Pinsky et al. have done neither one.

That was sent on the 14th. Today’s the 21st. So I figured, at this point it’s been almost three weeks without an answer … might as well post up the story.

Now, would I have caught more flies with honey than with vinegar? Perhaps … perhaps not.

But the issue is not the quality or politeness of my asking for them to follow their own policies. Look, I know I can be abrasive at times, and that Dr. McNutt has no reason to like me, but that’s not the issue.

The issue is whether the journal Science follows its own policies regarding the archiving of data and code, or not. If you don’t like the way I’m asking them to do it, well, perhaps you might ask them yourself. I may be overly passionate, I might be going about it wrong, but at least I’m working in my own poor way to push both Science and science in the direction of more transparency through the archiving of data and code.

Sadly,

w.

159 Comments
Vieras
October 22, 2013 11:02 pm

Willis, you were way too aggressive and frustrated while writing your e-mails. I can understand it well, but it’s way more productive to write in a polite and calm way. Sure, Science doesn’t follow its archiving policy, and you probably knew that they wouldn’t while writing, but don’t let that affect the way you communicate. It gives your opponents an easy way of just pointing to a few sentences and accusing you of being aggressive and unreasonable. And people will buy it, as they are too lazy to read through the whole matter.
When you write, do it always like Stephen McIntyre does it.

October 22, 2013 11:05 pm

Wayne
It’s important that replications be performed with identical equipment and by a different person/team. Satellites used to estimate cloud cover, for example, have consistently shown more clouds than their predecessors. The importance of independence is illustrated by this email from my now deceased friend, Bo Leuf:

Perhaps the clearest example of the dangers of traditional scientific belief, and the way it’s taught, came when I was studying at University. For a Physics lab, our 2nd year class had the pleasure of determining the mass of an electron. You would think this was pretty straightforward; more a demonstration than an experiment. We were, I think, six or seven groups, each with a vacuum pump chamber and a setup that would let us charge microscopic oil droplets and measure their movement in an oscillating electromagnetic field.
Well, we labored away and started producing results on which we could apply theory and math to determine the mass of a single electron. One group eventually realized that their values were worthless, probably due to some equipment malfunction, since the calculations gave patently absurd results. One group, which got special help from the lab assistant due to early problems, got a result close to the expected, as announced by the assistant. The rest of us found that value puzzling.
The remaining groups produced remarkably consistent results clustering around a different value, about a factor of 2.5 off. The lab assistant couldn’t figure out what we had done wrong, but he had forgotten the detailed solution sheet and had only brought a short checklist and answer to the lab. In the end, we derived the value again, together, from first principles, step by step. Same result. The lab assistant couldn’t fault us, even though we were so far off from the expected value proven during three separate years of labs that he had overseen.
We learned later that he had taken the result back to the professor, along with our derivation and his solution sheet. They had finally determined that the lab solution, worked out three years ago and “proven” by all the ensuing lab sessions until ours, was wrong. Ours, the first calculated when the “solution” was not immediately available, was correct within the reasonable margins of error. This, that a factor-of-2.5 error in a physics constant that anyone can look up was consistently “proven” true by independent laboratory experiments, was an excellent demonstration of belief patterns at work. In that way, the experiment was more valuable than the original intent, but I fear few really got it.

rogerknights
October 22, 2013 11:31 pm

Like I said, whether I rub their tummies and blow in their ears isn’t the issue. It’s whether Science and the other journals follow their own policies …

LOL!

October 22, 2013 11:46 pm

I respectfully disagree with those who criticize Willis for being blunt. Scientists, science writers, and science journal editors are essentially the same as the vast majority of people who live in the real world.
In commercial settings, well outside of the scientific arena, I have gotten good results, beginning my letters of complaint with:
Can’t you idiots do anything right?
Nevertheless, results are sometimes overrated. It’s the process that’s most important. Like virtue, sarcasm can be its own reward. 🙂

KNR
October 23, 2013 12:35 am

Not replicable or auditable is a basic fail, as any undergraduate taking a science course should be told.
So why is it that the ‘professionals’ cannot meet the standards they would demand of their own students?

Geoff Sherrington
October 23, 2013 1:38 am

Larry Fields says: October 22, 2013 at 11:46 pm re Salutations
Did you ever have the pleasure of reading the Henry Root letters?
Typically, he (an Englishman) would start with “Here’s a quid” and enclose the money.
It was said that many people felt obligated to reply, if only to return the money with an explanatory letter about receipt of funds policy.
Indeed, pro forma approaches to publishers as mapped by Henry Root, and other useful letter styles to address a number of situations described on WUWT, can be found at –
http://www.thehenryroot.com/
However, I recommend you read the books. They are hilarious.

Ryan
October 23, 2013 3:37 am

One would have thought that one of the benefits of having a group of skeptics against you is that you are able to consider what they say and do something about it: dot the “i’s” and cross the “t’s” until you have narrowed the base on which the skeptics stand to something small and insignificant.
It seems complaints levelled at the CAGW scientists go completely unheeded, and they allow us to attack them across a broad front. Perhaps they prefer us to attack such matters as the archiving of data rather than focus on central issues. Shame for them that we do both.

October 23, 2013 3:59 am

Spot on. There is an explosive article in the Economist this week about the fact that many papers claiming breakthroughs are never replicated by anyone, as replication is not a career-enhancing occupation. (It is the week’s Economist Briefing, entitled “Unreliable Research”, dated 19 October 2013.) Also, where replication is undertaken, it often fails in a significant number of instances. This article is a must-read, as much of what it covers sums up the climate science charade very nicely, although nowhere does the author mention it. S/he is anonymous, as all Economist writers are.

j ferguson
October 23, 2013 4:00 am

Maybe more to the point is that if Science depended on publishing sound, fully documented work, they wouldn’t be able to fill their pages.

Steve Richards
October 23, 2013 4:16 am

M. Schneider says:
October 22, 2013 at 3:58 am
It’s high time for a database of “scientific” journals rated on an ABCDF scale. A link to said database should be top, front and center.
– Why aren’t we doing this?
==============================
A very good idea.
Something a group of people with access to a number of journals could do.
Review articles, checking availability of data and producing a league table of journals, each month, here on WUWT!

Crispin in Waterloo
October 23, 2013 4:23 am

@Willis:
These days we are used to seeing ‘RoHS Compliant’ tags on things. We are used to seeing ‘CSA Approved’ and ISO 9000. All these relate to the quality of the product and are obtained by following and implementing the protocols needed to properly use the ‘qualification’.
Why should there not be a similar compliance tag for scientific papers that have (already) met the required needs of making the materials and methods available? I don’t have a snappy name for it, but Sources, Methods and Code Provided (SMCP) would be a good start.
If the sources, methods and code used to create the paper are not already available, the paper would not be allowed to carry the tag. Clicking on the SMCP tag would take one not to the paper, but to the sources. If the sources do not exist, no need for the tag.
Journals could keep publishing the papers but with that tag missing, it should not be considered ‘real science’. The trick used by the Team of pointing to a heap of crap data and saying ‘it is somewhere in there’ and hiding the code would not necessarily end, but their use of the tag would be disallowed. Publicity (think: name and shame campaign) would follow any document that proclaimed something unproven or unprovable.
It is time to put an end to this quasi-scientific shite.
It is possible to have an independent body issue a certificate in the way computer communications are arranged. If a paper claims SMCP status, the certificate is created and attached to the paper’s link. If there is an objection that the provision is not met, the certificate is cancelled, breaking the link to the data until requirements are met. A publisher (journal or not) would be responsible for monitoring compliance, in the same way Science is now, plus one overseeing group. If they do not police their own publication and give out false certificates, their right to issue certificates could be suspended.
When you click on the link, it would first check that there is a valid certificate standing, then go to the sources, rather like TinyURL. A groundswell of support by real scientists doing real science should sweep away the junk science that infests the pages of journals. We might be surprised by what drops away, and in which fields.
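
[For the flavor of Crispin’s proposal, here is a minimal sketch of such a certificate resolver in Python. Every name in it is hypothetical; no such registry or service exists.]

from dataclasses import dataclass
from datetime import date

@dataclass
class Certificate:
    paper_doi: str
    data_url: str        # where the archived data actually lives
    code_url: str        # where the archived code actually lives
    issued: date
    revoked: bool = False

# The independent body's registry, here just a dict keyed by DOI.
registry = {}

def issue(cert):
    registry[cert.paper_doi] = cert

def revoke(doi):
    # Cancel the certificate if a compliance objection is upheld.
    if doi in registry:
        registry[doi].revoked = True

def resolve(doi):
    # TinyURL-style lookup: a valid certificate redirects to the
    # sources; a missing or revoked one leaves the paper untagged.
    cert = registry.get(doi)
    if cert is None or cert.revoked:
        raise LookupError("No valid SMCP certificate for " + doi)
    return cert.data_url

# Example: issue, resolve, then revoke after an upheld objection.
issue(Certificate("10.0000/example", "https://example.org/data",
                  "https://example.org/code", date(2013, 10, 23)))
print(resolve("10.0000/example"))   # -> https://example.org/data
revoke("10.0000/example")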

André van Delft
Reply to  Crispin in Waterloo
October 23, 2013 4:39 am

“Journals could keep publishing the papers but with that tag missing, it should not be considered ‘real science’. ” – indeed. Such papers should not count for the h-index; there should be an additional j-index (j=junk) for such papers; a combined hj-index would indicate the quantity of references between real science papers and junk papers.

Crispin in Waterloo
October 23, 2013 4:54 am

@André
Heh heh! Very good. I don’t think that would fly in the hallowed halls of academia, but a tag showing ‘this is real science’ would. The whole point is to get those who live in the Kingdom of Names to aspire to an additional Name.
CVs are digital these days, with live links (i.e. a PDF version). Imagine a CV with a long list of papers, none of which has a compliance tag. Compare that with one that has 100% compliance.

j ferguson
October 23, 2013 5:08 am

Crispin, that seems an excellent idea. Maybe the professional societies could perform this service. It might be interesting to have a member propose this and then see if it goes anywhere.

October 23, 2013 6:54 am

Do we really need an “SMCP” compliance statement? If the journal would enforce its own rules, published articles would mean sources, methods and code are provided.
A journal can not really be a science journal otherwise, can it?

October 23, 2013 7:07 am

Want to get the attention of journals like Science Magazine? I suggest all one would need to do is start a serious grassroots effort in the part of the public that is highly focused on publicly funded science. The grassroots movement suggested is ‘a more verifiable science process in the electronic era without the generically unaccountable journals’. A central theme of the movement could be unrestricted peer review in the social media. Bang, no more journals. Bang, open access of non-anonymous peers to review a paper would indeed raise the demand for data and method and code availability prior to acceptance. It would be open and transparent, no secrecy.
The journals, by the grassroots effort, would be shown to be an archaic arbitrary convention blocking rigorous verifiability.
John

Alan Robertson
October 23, 2013 8:25 am

John Whitman says:
October 23, 2013 at 7:07 am
___________________
Good points, John. A massive political roadblock stands in your way. A website is needed…

Momsthebest
October 23, 2013 9:57 am

The “policies” of journals like Science appear to be more for show than for reality, similar to the “policies” that I worked under for a large multinational company. “We follow all laws. We are ethical. We don’t lie.” The policies were written by lawyers, for lawyers and regulators, and they made stockholders and investors feel safe. We even had a “Compliance Department” that you could anonymously report policy infractions to. The only thing was that the “Compliance Department” was more like the data police, destroying any and all evidence of wrong doing, and not really stopping the wrong doing. When confronted with accusations that the company was involved in wrong doing, the official response was, “No, that is not accurate. We have a policy against that.” Having a policy and actually following it are two very separate issues. My observation is that the purpose of having the policy is to convince everyone on the outside that everything is being done by the rules, but actually following the rules on the inside of the organization is a far different matter. THAT is an inconvenient truth.

October 23, 2013 11:54 am

It seems that scientists are worried about the quality of science and some are doing something about it:

Ask a scientist—any scientist—what irks them most about publishing and they are sure to mention peer review. The process has been blamed for everything from slowing down the communication of new discoveries to introducing woeful biases to the literature. Perhaps most troubling is that few believe peer review is capable of accomplishing what it purports to do—ensuring the quality of published science.
Indeed, several studies have shown that, in actuality, peer review does not elevate the quality of published science and that many published research findings are later shown to be false. In response, a growing number of scientists are working to impose a new vision of the scientific process through post-publication review, the process of critiquing science after it has become part of the literature.
Reviewing published work is, of course, nothing new. Scientists have always been welcome to publish contradictory findings, for example, contact the papers’ authors directly, or write a letter to the journal’s editor. However, because all are lengthy processes that likely will never be heard or seen by the majority of scientists, most scientists do not participate in formal reviews.
A small number of scholarly journals have launched online fora for scientists to comment on published materials. Uptake, however, has been slow for a number of reasons—chief of which is the inconvenience of commenting journal by journal.
“If you want to comment on a Nature paper, you have to go to the Nature site, find that paper, and comment. If you want to comment on a PLOS paper, you have to go to a different website, and so forth,” said Stanford University’s Rob Tibshirani, professor of health research and policy and statistics. “It’s a major time investment, particularly when people may never see the comments.”
Likewise, social media platforms, blogs, and other websites—such as Zotero, CiteULike, and Mendeley, to name a few—have also seen only scattershot commenting activities, at best.
Frustrated by these inefficiencies, Tibshirani is one of several scientists behind the development of PubMed Commons, a new post-publication peer review system housed on the oft-accessed National Center for Biotechnology Information (NCBI) biomedical research database. The Commons, announced today (October 22), allows users to comment directly on any of PubMed’s 23 million indexed research articles, much in the way people review films on Rotten Tomatoes, evaluate restaurant service on Yelp, or grade purchases made on Amazon.
Tibshirani said an organized post-publication peer review system could help “clarify experiments, suggest avenues for follow-up work and even catch errors.” If used by a critical mass of scientists, he added, “it could strengthen the scientific process.”

Source: http://www.the-scientist.com//?articles.view/articleNo/37969/title/Post-Publication-Peer-Review-Mainstreamed/

October 23, 2013 2:37 pm

Alan Robertson says:
October 23, 2013 at 8:25 am
John Whitman says:
October 23, 2013 at 7:07 am
___________________
Good points, John. A massive political roadblock stands in your way. A website is needed…

– – – – – – – – –
Alan Robertson,
Appreciate your comment.
It would.
I think the interest in the effort is broad enough. A spark is needed . . . .
John

Duster
October 23, 2013 3:15 pm

James says:
October 22, 2013 at 4:06 am
I disagree.
They have made the raw data available; that is all you should need. An important part of science is reproducibility of results. However, in order to make the reproductions useful, they shouldn’t just be a rework of the original methodology. It is the result that is important. If they were to give you a step-by-step walkthrough, it would very likely bias the person trying to reproduce the result to apply the same methodology, and that isn’t as useful as someone who thinks for themselves about how they’d analyze the data.

Methodology has to be reproducible as well. One of the problems with many studies is that Type 1 errors can occur regardless of the statistical likelihood of the event. So, repeated analyses using the identical methodology and new data sets are the most desirable approach to reproducibility. Simply analyzing the same data with the same methods should reveal potential errors in procedures or calculation – for instance, I’ve found errors in the way two big-name stat packages calculated Fisher’s Exact Test (since fixed). [A sketch of such a cross-check follows this comment.]
Also, if you are just looking in detail at what someone else has done, the likely result is “I wouldn’t have done it that way; I think you are wrong.” This is not useful. The most useful thing Willis could do is take the raw data, do his own analysis, and present the result. If it’s different, then this prompts the debate about why and which way is better. If Willis does indeed have the better solution, then we end up with a new piece of science that is an improvement on the previous version, and this is what we want in the end.

Much of what you suggest is a good idea here. What you need to consider, to broaden its validity, is that an “audit” – reworking the same analysis using the same data and the same methods (possibly through a different software system, for instance) – also helps to screen the validity of the original analysis. That is, no apparent errors in how the original data was processed, or in how the resultant numbers were used, should free up the decks for a discussion of science rather than methodology. Once the debate turns to science, that is, regular patterned or “lawful” behavior in nature, then collecting new data from the same source areas becomes important. This is actual reproduced research (not merely an analysis). Ideally, new tree rings from Yamal should be collected to compare with the older sets, for example.
Gandrud gives a fairly detailed argument and explanation of the importance of what Willis asks for:
Gandrud, Christopher (2013). Reproducible Research with R and RStudio. Chapman & Hall/CRC: The R Series.
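
[As an aside on Duster’s point about checking stat packages: cross-checking a package’s Fisher’s Exact Test against a direct hypergeometric calculation takes only a few lines. A sketch in Python, with scipy standing in as the package under test and a made-up 2x2 table:]

from scipy.stats import fisher_exact, hypergeom

table = [[8, 2], [1, 5]]            # made-up contingency table
(a, b), (c, d) = table

# The package's answer.
_, p_package = fisher_exact(table, alternative="two-sided")

# Direct computation: sum the hypergeometric probabilities of every
# table with the same margins that is no more probable than the one
# observed (the textbook definition of the two-sided p-value).
total, row1, col1 = a + b + c + d, a + b, a + c
rv = hypergeom(total, row1, col1)
p_obs = rv.pmf(a)
ks = range(max(0, col1 - (c + d)), min(row1, col1) + 1)
p_direct = sum(rv.pmf(k) for k in ks if rv.pmf(k) <= p_obs * (1 + 1e-7))

print(p_package, p_direct)          # the two should agree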

October 23, 2013 11:08 pm

Git,
Who would “qualify” the commenting scientists with access to the site?

October 24, 2013 12:31 am

Absolutely no idea. Perhaps you could ask Prof Tibshirani. I’m just reporting what I read in The Scientist.

Bill Jamison
October 24, 2013 1:51 am

As Dale Carnegie said when talking about how to get someone to do what you want: “Arouse an eager want”. Get the person to want to help you. Willis’ letter doesn’t do that IMO. Certainly the journal should enforce their requirements and Willis shouldn’t have to ask for the data and code. But if they haven’t then the question is “How do you get the person you are communicating with to want to help you get the data and code archived?”
The two recommendations I have are: be polite and be concise. Using language such as “Once again you have…” immediately puts the person on the defensive, and that is unlikely to make them want to be helpful. I doubt they will want to read several paragraphs when the entire request could easily fit in one or two paragraphs at most. Simple, concise, and polite.

J.H.
October 24, 2013 6:12 am

ferd berple says:
October 22, 2013 at 6:45 am
Ferd Berple is spot on … If the results have not been replicated, then the paper, and the journal that prints it, is worthless to the advancement of knowledge.