Journals Not Enforcing Their Policies

 

Guest Post by Willis Eschenbach

From an interesting post entitled “Trust and Don’t Bother To Verify” on Judith Curry’s excellent blog, I’ve taken the following quote:

Journals’ growing insistence that at least some raw data be made available seems to count for little: a recent review by Dr Ioannidis showed that only 143 of 351 randomly selected papers published in the world’s 50 leading journals and covered by some data-sharing policy actually complied.

I’ve written before about the data and code archiving policies of the journal Science, and how they are not enforced for certain favored papers. In this regard, consider the case of Pinsky et al. This was a study that said that fishes were moving in the direction of the “climate velocity”. As a fisherman, I’m always interested in such studies. Their results appeared too regular to me, and I wanted to check their work. However, I found that neither their data nor their code was available. So last month, I wrote to the good folk at Science to see if they would enforce their own policies.

From: Willis Eschenbach

Subject: TO: Dr. Marcia McNutt

Date: September 14, 2013 6:30:37 AM PDT

To: Science Editorial <science_editors@aaas.org>

Dear Dr. McNutt:

I have commented publicly in the past on Science magazine not following its own data archiving policy, but only for the favored few with whom the editors agree.

This issue has come up again with the recent publication of the Pinsky et al. study on the migration of fishes in response to climate velocity. Once again, it appears you have published a study without requiring archiving of the data, as is specifically required by your policies. I cannot find a public archive of their data anywhere.

Since that means that their study is not replicable or auditable, it also means their study is not science … so what is it doing in your magazine?

I assume that you will rectify this oversight as soon as possible.

Best regards,

w.

Mmmm. Upon re-reading it, I see that I was not as polite as I might have liked … but then I’ve grown bone-weary of Science not following its own data and code archiving policies for certain climate articles. In response to my email, I got … nothing. Zero. Zip. Nada. Not a word from anyone at Science.

Undaunted, I persevered. After waiting for two weeks, I wrote again, and this time I copied it around the organization:

From: Willis Eschenbach

Subject: Fwd: TO: Dr. Marcia McNutt

Date: October 1, 2013 11:24:03 PM PDT

To: Science Editorial <science_editors@aaas.org>, science_letters <science_letters@aaas.org>, science_bookrevs@aaas.org, Science News <science_news@aaas.org>, gchin@aaas.org, hjsmith@aaas.org

Dear Friends:

I sent the following message two weeks ago to Dr. McNutt. However, it seems to have miscarried.

From: Willis Eschenbach

Subject: TO: Dr. Marcia McNutt

Date: September 14, 2013 6:30:37 AM PDT

To: Science Editorial <science_editors@aaas.org>

Dear Dr. McNutt:

I have commented publicly in the past on Science magazine not following its own data archiving policy, but only for the favored few with whom the editors agree.

This issue has come up again with the recent publication of the Pinsky et al. study on the migration of fishes in response to climate velocity. Once again, it appears you have published a study without requiring archiving of the data, as is specifically required by your policies. I cannot find a public archive of their data anywhere.

Since that means that their study is not replicable or auditable, it also means their study is not science … so what is it doing in your magazine?

I assume that you will rectify this oversight as soon as possible.

Best regards,

w.

I have not received a reply. Perhaps Dr. McNutt was not the proper person to address this to, so I am sending it to other addresses in the hopes of getting some reply. I’m sorry to bother you, but I would appreciate it if you could pass this along to someone who can explain why you are not following your own written policies in this instance.

Many thanks,

w.

This time, I actually got a response, the very next day:

From: Andrew Sugden

Subject: Re: FW: TO: Dr. Marcia McNutt

Date: October 2, 2013 2:59:33 PM PDT

To: Willis Eschenbach

Dear Dr Eschenbach

Thank you for your message to Dr McNutt. I can assure you that we require all data supporting the conclusions of Science papers to be in the public domain; the location of the data is usually specified in the Acknowledgements of each paper, as it was in the case of the Pinsky paper. Please can you double-check the Supplementary Material to the Pinsky et al paper and then specify the data to which you have been unable to gain access? At that point we can ask the authors to provide further details if necessary.

Yours sincerely

Andrew Sugden

And the following day, I replied:

From: Willis Eschenbach <willis@surfacetemps.org>

Subject: Re: TO: Dr. Marcia McNutt

Date: October 3, 2013 9:48:34 AM PDT

To: Andrew Sugden <asugden@science-int.co.uk>

Cc: Science Editorial <science_editors@aaas.org>, science_letters <science_letters@aaas.org>, science_bookrevs@aaas.org, Science News <science_news@aaas.org>, gchin@aaas.org, hjsmith@aaas.org

Dr. Sugden, thank you most kindly for your reply. However, I fear that I’ve double-checked the paper and the SI, and there is far, far too little information, either in the paper itself or in the Supplementary Information, to allow their results to be confirmed, replicated, or falsified.

Here’s an example. It just happens to be the first area on their list, their study of the Eastern Bering Sea. The source of the data is given as being the RACE survey … but other than that we know nothing.

For example, the RACE survey covers 112 species … which of these species did they actually look at, and which did they leave out? Then they say they didn’t look at all tows … so which individual tows did they include, and which did they exclude? Their only information on the subject is as follows:

While surveys were conducted in a variety of seasons (Table S1), we analyze each survey separately and use season-specific temperature data to account for these differences. We restricted our analysis to tows without gear and duration problems, to taxa that were resolved at least to genus, and to taxa that were sampled at least once per year to reduce effects from changes in taxonomic recording or resolution.

Unfortunately, that is far from enough information to be able to tell if their results are real or not.

Look, Dr. Sugden, this is not rocket science. To verify whether what they have reported is a real effect, what we readers of Science need is very, very simple. It is a list in plain text that looks like this:

Year   Month   Day   Tow#   Species   Catch     Lat Start   Long Start   Lat End   Long End   Depth    Temperature    Result

1998   3       12    116    capelin   17.6 kg   56.712N     176.55E      56.914N   177.25E    72-75m   11.6-11.9°C    Utilized
1998   3       12    116    sculpin   1.6 kg    56.712N     176.55E      56.914N   177.25E    72-75m   11.6-11.9°C    Excluded, uncertain identification

Without that list showing exactly which data was used, and which data was excluded, and why, their results cannot be falsified … and unfalsifiable claims are not science, and not worth reporting in Science magazine.

What they have done is just waved their hands, pointed at a huge pile of data, and said, “We got our data from that pile” … I’m sorry, but in 2013 that doesn’t cut it. To check their work, we need to know not where they got their data, but exactly what data was used and what data was excluded. For all we know there were transcription errors, or bugs in their computer code, or incorrectly categorized results … it could be anything, but there’s no way to tell.

Nor is this an onerous requirement. The block of data representing the entire analysis would be a few megabytes. And presumably, in order to analyze the data, it’s all on the computer already. So outputting a list of the data that was actually used or excluded is a few minutes’ work for a junior analyst.
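By way of illustration only (the file and column names below are made up, since we don’t know what the authors’ RACE extract actually looks like), a handful of lines of Python along these lines would produce exactly such a listing:

import pandas as pd

# Illustrative sketch only: the file and column names are hypothetical,
# not taken from Pinsky et al. or from the actual RACE survey format.
tows = pd.read_csv("race_survey_extract.csv")

# Start with every record marked "Utilized", then mark exclusions with a reason,
# mirroring the kinds of filters the paper describes only in words.
tows["Result"] = "Utilized"

def exclude(mask, reason):
    # Only overwrite records still marked "Utilized", so the first
    # applicable reason is the one that gets reported.
    tows.loc[mask & (tows["Result"] == "Utilized"), "Result"] = "Excluded, " + reason

exclude(tows["gear_problem"] | tows["duration_problem"], "gear or duration problem")
exclude(~tows["resolved_to_genus"], "not resolved to genus or species")

# Keep only taxa that were sampled in every survey year.
years_per_taxon = tows.groupby("species")["year"].nunique()
patchy_taxa = years_per_taxon[years_per_taxon < tows["year"].nunique()].index
exclude(tows["species"].isin(patchy_taxa), "not sampled in every year")

# One plain-text file listing every record and exactly why it was kept or dropped.
tows.to_csv("ebs_data_used_and_excluded.csv", index=False)

That is the whole job: one flat file, one row per record, one column saying why each record was kept or dropped.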

I fear Science magazine and your Reviewers have dropped the ball on this one, Dr. Sugden. You have not done your due diligence and required the archiving of the data actually used in the study. Without that, you’re just publishing an anecdote, a charming fairy tale told by Dr. Pinsky.

It’s an interesting anecdote, to be sure … but it’s not science.

Please let me know what your magazine intends to do in this case. As it stands, you’ve published something which is totally unfalsifiable, in direct contravention of your own policies. Here are your relevant policies:

Data and materials availability

All data necessary to understand, assess, and extend the conclusions of the manuscript must be available to any reader of Science. All computer codes involved in the creation or analysis of data must also be available to any reader of Science. …

Science supports the efforts of databases that aggregate published data for the use of the scientific community. Therefore, appropriate data sets (including microarray data, protein or DNA sequences, atomic coordinates or electron microscopy maps for macromolecular structures, and climate data) must be deposited in an approved database, and an accession number or a specific access address must be included in the published paper. We encourage compliance with MIBBI guidelines (Minimum Information for Biological and Biomedical Investigations).

Details include but are not limited to:

  • Climate data. Data should be archived in the NOAA climate repository or other public databases.
  • Ecological data. We recommend deposition of data in Dryad.

Clearly, the information that they provided falls woefully short of that required by your policies. No archive of their data. And pointing at a huge pile of data is not sufficient to let me “understand, assess, and extend the conclusions” as your policies require. I don’t have a clue what in the huge pile of data they used and what they excluded, so the information they gave about the location of the huge pile of data is useless.

The requirements, your own requirements, are bozo-simple, and easy to comply with. All they need to do is archive the collection of data that they actually used or rejected, and archive the computer code that they used to analyze that data.

They have done neither one …

Please let me know your plan of action on this, both for this paper and in general. As it stands, your magazine is passing off the unverifiable, unfalsifiable anecdotes recounted by Pinsky et al. as if they were real science. This is not the first time that your magazine has done that … and I don’t think that’s good for you personally as a scientist, for the reputation of Science magazine, or for science itself. People are trusting science less and less these days … and the publication of unverified anecdotes as if they were real studies is one of the reasons.

Your requirements for data and code archiving are simple and transparent. Now … you just have to enforce them.

Thanks for your assistance in all of this,

w.

Perhaps overly verbose, but I wanted them to understand the issue. I waited almost two weeks, and when I’d gotten nothing, I wrote back:

From: Willis Eschenbach

Subject: Re: TO: Dr. Marcia McNutt

Date: October 14, 2013 11:00:05 AM PDT

To: Andrew Sugden

Cc: Science Editorial <science_editors@aaas.org>, science_letters <science_letters@aaas.org>, science_bookrevs@aaas.org, Science News <science_news@aaas.org>, gchin@aaas.org, hjsmith@aaas.org

Dear Dr. Sugden:

As I detailed in my attached letter, neither the data nor the computer code for the Pinsky et al. study on the migration of fishes in response to climate velocity is available in a usable form.

While the data is publicly available, there is no detailed list or other means to identify the data actually used in the Pinsky study. Without that, in fact their data is not available—it is a needle in a haystack of needles. And without that, the study cannot be replicated, and thus it should not be published.

In addition, the computer code is nowhere to be found.

Both of these violate your express policies, as detailed below.

It’s been almost two weeks now since my attached letter was sent … I’m sorry to bother you again, but is there any progress in this matter? Or should I just submit this to the Journal of Irreproducible Results? Hey, just kidding … but it is very frustrating to try to see if there are flaws in published science, only to find out that Science itself is not following its own published policies.

My apologies for copying this around, but it may be that I’m not talking to the person in authority regarding this question. Do you have plans to rectify your omission in the Pinsky study, and require that they archive the actual data and code used? And if so, what are the plans?

Or are you going to do the Pontius Pilate?

In any case, any information that you have would be most welcome.

Many thanks for your assistance in this matter.

w.

PS—Please, do not tell me to contact the scientists directly. This is 2013. The exact data and code that the scientists used should be available at 2AM their time to a teenaged researcher in Ghana who doesn’t even speak the scientists’ language. That’s the reason you have a policy requiring the authors to archive or specifically identify their data, and to post their code. Pinsky et al. have done neither one.

That was sent on the 14th. Today’s the 21st. So I figured, at this point it’s been almost three weeks without an answer … might as well post up the story.

Now, would I have caught more flies with honey than with vinegar? Perhaps … perhaps not.

But the issue is not the quality or politeness of my asking for them to follow their own policies. Look, I know I can be abrasive at times, and that Dr. McNutt has no reason to like me, but that’s not the issue.

The issue is whether the journal Science follows its own policies regarding the archiving of data and code, or not. If you don’t like the way I’m asking them to do it, well, perhaps you might ask them yourself. I may be overly passionate, I might be going about it wrong, but at least I’m working in my own poor way to push both Science and science in the direction of more transparency through the archiving of data and code.

Sadly,

w.

159 Comments
OssQss
October 22, 2013 12:50 pm

I have always found one tends to get better results from another if you don’t piss them off from the beginning. These people don’t owe anything to anyone without a court order at this point.
Just sayin, how would you feel if a similar communication was directed to you for some reason?
“You” being everyone who read my comment……

October 22, 2013 1:00 pm

OssQss said October 22, 2013 at 12:50 pm

I have always found one tends to get better results from another if you don’t piss them off from the beginning. These people don’t owe anything to anyone without a court order at this point.
Just sayin, how would you feel if a similar communication was directed to you for some reason?
“You” being everyone who read my comment……

The Git shamefacedly admits to spending the last six years of his life as a public servant. In that role, he was frequently required to supply information on request. Such requests varied from the polite through to far, far ruder than Willis’s relatively mild tone on this occasion.
I suspect that I would have been considered ill-mannered, or insane had I refused to do what I was being paid to do. It appears that McNutt, for whatever reason, is not required to implement AAAS policy. This in itself is telling. Willis’s lack of politesse has nothing whatsoever to do with this.

Stuart Lynne
October 22, 2013 1:13 pm

Reproducible means that someone can reproduce the results of an experiment.
First to verify that the methodology is fully described and therefore CAN be reproduced.
Second to verify the ORIGINAL experimenters did not make any mistakes (intentional or accidental).
As a side effect it also means that other researchers may look more closely at how the experiment was performed and have more or less confidence in the actual result.

cms
October 22, 2013 1:21 pm

In the early 80’s, I began to think that I needed to pay serious attention to Climate Change. Towards this end I looked around for journals which I thought were paying serious attention to the question. The best looked to be Science, and I subscribed. At that time in Science a majority of the articles were promoters of the CO2 theory, but many of the best were undecided or plainly skeptical. Now I would say not more than 1/3 were of the second variety. My opinion at the time was that science was still debating the issue, and I would watch over the coming months and years as the issues were clarified and investigated. I read those articles religiously for almost 3 years, then all of a sudden the skeptical or semi-skeptical articles totally disappeared. I was curious, so I did what research one could in those days. I found an interview with Philip Abelson, who had just left the position of Editor of the Journal. He stated that one of the reasons he was ushered out was his willingness to publish skeptical articles. This interview is very hard to find and is vociferously denied by groups like Wikipedia. He also published an editorial opinion in Science which was reported in the LA Times as follows:
‘Philip H. Abelson, editor of Science magazine and one of the most respected men in the fields of engineering and applied sciences, wrote an editorial in the March 30, 1990, issue titled: “Uncertainties About Global Warming.”
He reported that 14 groups are now computer modeling the atmosphere and when they examine the doubling of “greenhouse gases,” they see an effect on their computer screens. However, none of these computer program models works well enough to predict anything beyond sunrise; not even one rainstorm.’
The article itself can be found on Science’s list of publications, but there is no easy way to buy it. http://www.sciencemag.org/content/247/4950/1529
Anyway, that was one of my turning points from sympathetic to skeptic, and I think it is indicative of the Journal’s stance.

wsbriggs
October 22, 2013 1:30 pm

It’s time to start a campaign for separation of Science and The State. It will eliminate most of the rent seekers, and focus scientists on real problems. It’s funny how the need to have real results makes better scientists.

Matthew R Marler
October 22, 2013 1:32 pm

Greg: Well that is one argument but it’s not the written policy of Science, which requires code.
This is correct. The written policy of Science is unambiguous. They require that all data and code be made available. Not to be too sharp about it, but “code” includes the lines in the programs that read the raw data and select some while rejecting others; sorting and merging different files, and all of that stuff. You can’t tell whether a minor procedural difference accounts for an apparently different result unless you can reproduce the original study exactly.
As recounted by Willis, you cannot tell from the Pinsky paper which subsets of the downloaded data file they actually analyzed and reported. Thus, you cannot tell whether their result is an artifact of selection bias, either a bias in selecting data or a bias in selecting which results to write up for publication. Those are both well-documented and non-negligible sources of error.

October 22, 2013 2:06 pm

wsbriggs says:
October 22, 2013 at 1:30 pm
It’s time to start a campaign for separation of Science and The State. It will eliminate most of the rent seekers, and focus scientists on real problems. It’s funny how the need to have real results makes better scientists.

– – – – – – – –
wsbriggs,
I agree.
To undertake the campaign I consider that an open discussion in the open marketplace of ideas is needed on a basic plan and strategy development. By open I mean a broad public forum based process.
I am in if the effort is transparent.
John

Jimmi_the_dalek
October 22, 2013 2:16 pm

Data and materials availability. All data necessary to understand, assess, and extend the conclusions of the manuscript must be available to any reader of Science. All computer codes involved in the creation or analysis of data must also be available to any reader of Science. After publication, all reasonable requests for data and materials must be fulfilled. Any restrictions on the availability of data, codes, or materials, including fees and original data obtained from other sources (Materials Transfer Agreements), must be disclosed to the editors upon submission.
From Science web pages.
Also,
“As a condition of publication, authors must agree to make available all data necessary to understand and assess the conclusions of the manuscript to any reader of Science. Data must be included in the body of the paper or in the supplementary materials, where they can be viewed free of charge by all visitors to the site. Certain types of data must be deposited in an approved online database, including DNA and protein sequences, microarray data, crystal structures, and climate records.”
Now, that does not state that computer codes should be available as part of the article or supplementary data, unlike the raw data. I think they could argue that it would be enough for the authors to supply any codes to anyone who asked.

Jimbo
October 22, 2013 2:18 pm

My thinking on all this is very simple. If your work is good and your science strong you would leap at the chance to not only give Willis what he wants but even more! It’s easy in today’s digital world with large servers and email attachments.
If I wrote a paper that I thought was robust and Willis asked to see my workings and code I would give him EVERYTHING – my workings, data, code the whole lot. Why not? Instead Science would rather have an air of doubt hanging over this paper. Imagine if a sceptic behaved in this way. I wouldn’t support such behaviour.
If you have nothing to hide you have nothing to fear. If your work is found to be bad then be thankful. I thought that was the point of science advancement!!
Just my 2 Cents.

Momsthebest
October 22, 2013 2:39 pm

I enjoy Willis’ posts and his vigorous pursuit of understanding and advancing science. Hurray for the citizen scientists who are absolutely needed in our society. For a good read on the contributions of a very important citizen scientist, read the book “Tuxedo Park : A Wall Street Tycoon and the Secret Palace of Science That Changed the Course of World War II.”

October 22, 2013 3:34 pm

I referred the other day to this article in The Scientist:
http://www.the-scientist.com//?articles.view/articleNo/37843/title/Mislabeled-Microbes-Cause-Two-Retractions/
Pamela Ronald always has new recruits to her team replicate earlier research in her lab. When two new recruits found two errors in some work she had done in 1995, she immediately retracted the paper and told all and sundry at the conferences she attended. Faulty research is not just a problem for the researcher, but for anyone who relies on that research.
As someone who promoted the concept of applying science to organic agriculture with some vigour to anyone who would listen back in the 80s and 90s, I am particularly pleased to note that this was in the very area that was then labelled “muck and mystery”.

Richard G
October 22, 2013 3:52 pm

What, Willis abrasive at times? Awww, he is just soft an’ cuddly like a hammerhead shark is all.
Kudos on the cuddly.

Jquip
October 22, 2013 3:52 pm

The Pompous Git: “Sadly, Ferd, very little replication happens in science these days. There’s no funding for it.”
Recently, perhaps here or at Curry’s, there was an argument on the point of replication that, loosely stated, runs: “If the experiment is replicated, and comes to the same result, then what was the point of replication?” There seems to be an unwavering Faith that there are never any differences between what a man says he did, what he thinks he did, and what he actually did. Which rather reminds me of a quote from Charles Babbage:
“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

October 22, 2013 4:47 pm

Willis Eschenbach on October 22, 2013 at 3:11 pm says,

Whitman on October 22, 2013 at 7:16 am

– – – – – – – –
Willis,
I was gently touched by your reply. That said, we are not in Gilead here on this thread, and I have no balm to soothe your self-inflicted communication contusions caused by your lack of discipline when addressing Dr. Marcia McNutt.
Please Willis, a respectfully considerate kind of circumspection is reasonable and warranted in formal communications to reasonably well-established scientific professionals like Dr. Marcia McNutt.
John

October 22, 2013 5:03 pm

Jquip
One of the major problems of replication is that of cost. Who can afford to put another satellite into space, assemble a team of 30 technical experts, purchase a new supercomputer, or build a Large Hadron Collider to perform a mere replication? Science is at a crossroads. Interesting times…

Geoff Sherrington
October 22, 2013 5:06 pm

For those who do not know the history –
“On 18th February 2005 Professor P. D. Jones of CRU replied to my emailed requests for his land station data by including, “Why should I make the data available to you, when your aim is to try and find something wrong with it.” The author is Warwick Hughes, an Australian geologist. This extract is from
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc3502.htm
Warwick owns a blog named “Errors in IPCC climate science” (and others).
Warwick brought information on the work of Phil Jones, relating to estimation of UHI and adjustments to temperature data, to a think tank named Tasman Institute around 1992. Those of us who advised on funding for projects brought to Tasman failed to see the future implications of his visionary work. Despite our failure, it was Warwick’s work that catalysed me to become involved in the matter, starting with emails to Phil Jones in about 2005.
Some of the issues current at the time were difficult, until it was realised that a way to fix inconvenient data was to change it and call it an ‘adjustment’.

wayne
October 22, 2013 6:14 pm

Git, I don’t see it that way after following science over the decades.
You say: “One of the major problems of replication is that of cost. Who can afford to put another satellite into space, assemble a team of 30 technical experts, purchase a new supercomputer, or build a Large Hadron Collider to perform a mere replication? Science is at a crossroads. Interesting times…”
It doesn’t really work that way, as I have seen it. New machines and satellites are usually justified to answer new questions, but since they then provide available platforms, replication of older, yet-to-be-verified science is piggy-backed onto the new instrument’s payload and costs. And that includes CERN and the LHC. Replication is a large part of particle physics, and since each machine extends the capabilities of the former generations with more precision, the cost per verification or refutation is relatively low. You do want these discoveries and concepts to be verified by replication, don’t you? I do, but this cycle of advancement/verification has been going on for decades; that is how it is done. (And please exclude climate science from this context; that is the problem with that branch.)

October 22, 2013 6:16 pm

Willis, you wrote at 2:47 PM:

So they HAVE to give us what you call a “step by step walkthrough” in the form of the computer code that they actually used.

Maybe the authors performed their data selection from the raw data in a spreadsheet, by manually deleting records, so there would be no computer code. Still, they would be obliged to show exactly which records were in their selection, as the policy says:

All data necessary to understand, assess, and extend the conclusions of the manuscript must be available to any reader of Science.

To repeat a posting of mine half a year ago on ClimateAudit:

With a little effort the authors could have made their paper easy to verify: they could have published a Virtual Machine (Mac, Linux or Windows) with
– all original source data, intermediate data and final results
– all source code and compiled programs used for the analysis
– all free third party software installed that would be needed to reproduce the results
– a list of non-free software that still needs to be installed (e.g. MS-Excel)
– a description of the manual operations that are applied to the data
– a console window with a complete history of all computational steps used to get the final results from the source data, with a clear indication where manual operations occurred
Journals should require this for any publication, if only to prevent new disasters.
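Even short of a full virtual machine, the packaging step itself is trivial. As a rough sketch (the file names here are purely illustrative, not the authors’ actual files), a few lines of Python would checksum the data and code, record the environment and the console history, and bundle everything into a single archive ready to deposit with the paper:

import hashlib, json, sys, zipfile
from pathlib import Path

# Rough sketch only: file names are illustrative, and a full virtual machine
# (as proposed above) would capture far more than this.
data_files = ["race_survey_extract.csv", "data_used_and_excluded.csv"]
code_files = ["select_data.py", "climate_velocity_analysis.py"]

manifest = {
    "python_version": sys.version,
    # A console log kept during the run, showing every computational step.
    "command_history": Path("analysis_log.txt").read_text(),
    "sha256": {},
}
for name in data_files + code_files:
    manifest["sha256"][name] = hashlib.sha256(Path(name).read_bytes()).hexdigest()

Path("MANIFEST.json").write_text(json.dumps(manifest, indent=2))

# One archive with everything a reader needs, ready to deposit alongside the paper.
with zipfile.ZipFile("replication_bundle.zip", "w") as bundle:
    for name in data_files + code_files + ["analysis_log.txt", "MANIFEST.json"]:
        bundle.write(name)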

Latimer Alder then responded the following:

What you propose looks suspiciously like what the outside world calls ‘an audit trail’.
Expect hordes of ‘professional climatologists’ to have apoplexy at the mere suggestion that such an alien concept should apply to them as much as to any other professional person – like an engineer or an accountant or a physician or an IT guy or financial adviser or a whole host of other equally qualified people.
You must also remember that the core goal of academe is no longer to publish work that is correct. The act of publishing is now the key point. The correctness of the content is of only secondary or tertiary importance. And producing an audit trail will reduce the productivity (papers per climatologist per annum) considerably.
Your eminently sensible proposition will be fought tooth and nail. If concealed data, dodgy practice and personal feuds were good enough for Isaac Newton, then they’re good enough for today’s climatologists. Ned Ludd is alive and kicking!

Jquip
October 22, 2013 7:31 pm

wayne: “You do want these discoveries and concepts to be verified by replication don’t you? I do,”
I’m pretty ambivalent. If I can pick it up at Walmart, I could hardly care less whether it was based on theory A, which posits a multiverse of denied realities, or theory B, that describes quantum tunneling from a conceptual framework of 15th century goat herding practices. No matter the case, someone engineered something on some basis.
And if it becomes a policy debate question of the Capital One sort (“What’s in your wallet?”) then I remain completely ambivalent until people can show results. If they can and have, then it’s interesting for discussion. If they have not, then they can go pound sand about their pet hysterics until they can come back to the table with at least what’s required to manufacture shower curtains for Bed, Bath, and Beyond.
Because until there’s an ability to engineer with it, nothing has been ‘discovered.’ Only hyperventilated.

October 22, 2013 9:28 pm

It seems to me that they have an explicit policy that is intended to suggest we should trust the articles because the data and code are available.
They charge money for their publication.
The data and code are not available.
This seems like a simple case of fraud.
Maybe someone should point this out to Science, and if they can’t rectify the situation, forward this to the appropriate authorities.