Science magazine: Demanding reproducibility up front

Lance Wallace writes

Science magazine has instituted a new policy requiring authors of preclinical studies to state their statistical plans (sample size estimation, treatment of outliers, etc.). See the editorial by the new Editor-in-Chief, Marcia McNutt (p. 229, volume 343, 17 Jan 2014).

This reads as though it were written by McIntyre, Montford, Wegman….

“Because reviewers who are chosen for their expertise in subject matter may not be authorities in statistics as well, statistical errors in manuscripts may slip through. For that reason…we are adding new members to our Board of Reviewing Editors from the statistical community to ensure that manuscripts receive appropriate scrutiny in their methods of data analysis.”

That is, an audit!

Take a bow, gentlemen!

=============================================================

The article is here: http://www.sciencemagazinedigital.org/sciencemagazine/17_january_2014?pg=9#pg9

Now if we can just get publications like JGR to make the same demands of their authors, we’ll really be getting somewhere – Anthony

Manfred
January 26, 2014 10:01 pm

Requiring real science should be a competitive advantage against the Holtzbrinck press (Nature, Scientific American).

george e. smith
January 26, 2014 10:47 pm

“””””…..rabbit says:
January 26, 2014 at 2:28 pm
George E. Smith:
There’s only one way to do statistics.
Nearly snorted coffee through my nose over that one.
I write papers using statistics. There are endless numbers of ways — all of them at least somewhat justifiable — to statistically analyze a data set and reach conclusions about the underlying population……"""""
Well you can drop all the weasel words you want to; I’m impressed. I guess you missed my simple analogy to a tool box, and my observation that you don’t have to use every tool on every problem. Well from your examples that you mentioned, it would seem that you agree with that.
I guess there’s a gazillion ways to do differential equations too. I’m not going to give a list of appropriate buzz words. The tools used depend on the problem; what you want to accomplish; just like with statistics.
My point was (and still is) that any of the tools are applied to some set of numbers; numbers that are already known. You don't do statistical analysis on x, y, z or a, b, c, or U1, U2, U3.
And the end results of applying the tool correctly are quite independent of the numbers themselves; the result is equally valid for any set of already known numbers, whether those numbers are exactly calculated from some closed-form formula, or whether no two of the numbers in the set are related in any way as to origin. The result calculated by correct application of the appropriate algorithm is an intrinsic property of that unique set of numbers. It doesn't apply to any different set of numbers; they exhibit their own properties.
So the results of the analysis can tell you nothing at all about any number that is not a member of the set that was analysed.
All the twisting in the wind cannot imbue the result with any relevance or meaning outside that set.
And I said nothing about justifiability. Getting paid to do it, is as good a rationalization as any.
It’s over half a century since my last sojourn in academia; since then, excepting the last four months, I have operated in industrial jobs as a practicing physicist. Not once in those 50-plus years was it ever necessary for me to do any statistics; well, maybe I calculated an average a couple of times.
All my design tasks required guaranteed operation, by design; never by statistical “betting.”
Despite the statistical improbability, somebody seems to win the lottery, even if they only bought one ticket. You can’t convince them of statistical significance.

Mindert Eiting
January 27, 2014 12:38 am

If I were the chief editor of Science or Nature, I would appoint Quentin Crisp, who once said that there is no need to do any housework at all. After the first four years the dirt doesn’t get any worse.

January 27, 2014 2:16 am

Mindert Eiting said January 27, 2014 at 12:38 am

If I were the chief editor of Science or Nature, I would appoint Quentin Crisp, who once said that there is no need to do any housework at all. After the first four years the dirt doesn’t get any worse.

He also claimed to be “One of the stately homos of England” in The Naked Civil Servant.

David Bailey
January 27, 2014 3:27 am

I seriously wonder if the age of science is coming to an end. As science has progressed, the easily demonstrated effects have been discovered and recorded, and in so many areas science now seems to be scrabbling in experimental noise – hoping that statistics can make up for the lack of signal, and the lack of decent starting data, and then grossly exaggerating the significance of what they ‘discover’.
When the data tells you nothing, personal prejudice and concerns about funding supply the signal!
People still think science is advancing, but a lot (not all) of that comes from the exploitation of science from long ago – Quantum Theory is now about 100 years old!
Can anyone seriously claim that the various global land temperature records are fit for purpose, given all the adjustments that Anthony has helped to expose? Why would anyone torture such a dubious data set to try to extract a signal of any sort?
The internet is awash with other science-based scandals – usually documented by knowledgeable people who used to be part of the science establishment. Having seen the way climate scientists work, I’ll bet at least a few of them are true:
1) There are claims that the evidence that saturated fat consumption is harmful is based on a cherry-picked graph by a man called Ancel Keys, and that there is no reliable evidence that fat is harmful. Indeed, it is suggested that the obesity crisis is at least partly the result of people abandoning fatty food in favour of carbohydrates.
2) There are claims that the benefits of statins have been grossly overstated, while the risk of side effects has been downplayed. I myself got a bad reaction to simvastatin, which could have been that I was just unlucky, but I was amazed that when I mentioned this to others, many had statin horror stories of their own.
3) There are claims that AIDS is not caused by the HIV virus. If true, this is a terrible scandal.
4) Peter Woit runs a blog site devoted to the fact that despite enormous effort over 30 years, string theory has not been tested in any positive way. The LHC delivered the Higgs, which is part of the Standard Model, but nothing else! He bemoans the fact that despite this, theorists constantly push string theory based ideas such as the multiverse, etc.
The thing that unites all these stories is that the scientific establishment seems to try to rubbish all dissent, and never – absolutely never – engage with dissenters in open debate of the scientific issues.

ferdberple
January 27, 2014 6:30 am

rabbit says:
January 26, 2014 at 2:28 pm
There are endless numbers of ways — all of them at least somewhat justifiable — to statistically analyze a data set and reach conclusions about the underlying population
==============
That is the problem with applying statistics to analyze data. If you pick and choose your method AFTER you have seen the data, then you have biased the result, but your statistics will not reveal this, either to you or to your audience.
Picking your method to analyze the data after the data has been viewed is cherry picking. Not cherry picking of data, but cherry picking of method. It is a largely unrecognized form of bias in scientific reporting, one that results in large numbers of false positives.
Medicine, it appears, is finally starting to understand the problem, and Science magazine is now adopting new standards for medical studies. But the rest of the scientific fields are still having problems understanding why their statistical studies are largely full of false positives.
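[A quick simulation makes ferdberple’s point concrete. Below is a minimal sketch in Python; the four candidate tests are hypothetical stand-ins for “somewhat justifiable” methods. Both samples are drawn from the same population, several tests are tried, and the smallest p-value is kept, which typically drives the false-positive rate well above the nominal 5%.

```python
# Minimal sketch: picking the analysis method after seeing the data
# inflates the false-positive rate. Both samples come from the SAME
# population, so every "significant" result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
alpha = 0.05
hits = 0

for _ in range(n_experiments):
    a = rng.normal(size=30)
    b = rng.normal(size=30)  # same distribution as a
    # Try several defensible-looking tests, then keep the best p-value,
    # i.e. cherry-pick the method after looking at the data.
    p_values = [
        stats.ttest_ind(a, b).pvalue,                    # pooled t-test
        stats.ttest_ind(a, b, equal_var=False).pvalue,   # Welch t-test
        stats.mannwhitneyu(a, b).pvalue,                 # rank-sum test
        stats.ks_2samp(a, b).pvalue,                     # KS test
    ]
    if min(p_values) < alpha:
        hits += 1

print(f"nominal rate: {alpha:.0%}, observed: {hits / n_experiments:.1%}")
# The observed rate typically lands noticeably above 5%.
```
– Mod]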

jakee308
January 27, 2014 9:27 am

Reproducibility should always have been the touchstone. There are ways to protect proprietary information while also allowing others to attempt to reproduce the results.
I learned as a child how the scientific method was supposedly done. It was very surprising to me to find that science magazines were quite comfortable not demanding that papers submitted to them demonstrate reproducibility, and even more surprising to find that the scientists themselves either could not or would not supply the information necessary for their peers or skeptics to reproduce the work and get similar results, thus confirming their theories.
How much time and frustration and “inconvenient truths” we all would’ve been spared if they had just once done things the way it was supposed to have been done.
Of course, they would not then have built the reputations and positions of renown and authority that they held until lately, had they done so.

mpainter
January 27, 2014 10:28 am

This is many decades overdue, but better late than never. It is an improvement that we have the global warmers to thank for, because it is their egregious science that finally stirred the editors at Science to act.

January 27, 2014 10:40 am

How can anyone be expected to do alarming, grant attracting [science] if we have to stick to the truth?

Richard of NZ
January 27, 2014 12:59 pm

Perhaps I am becoming even more cynical in my old age, but I more and more get the feeling that the only statistics techniques that should be used are:-
Sample size
Mean
Median
Standard deviation
Coefficient of variation
Any statistical manipulations beyond these should be taken as conclusive evidence of cheating.
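[For what it’s worth, the five quantities Richard lists take only a few lines to compute; a minimal sketch in Python, with made-up sample data:

```python
# Minimal sketch: the five descriptive statistics listed above,
# computed for an arbitrary made-up sample.
import numpy as np

data = np.array([12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7])

n = data.size              # sample size
mean = data.mean()         # arithmetic mean
median = np.median(data)   # median
sd = data.std(ddof=1)      # sample standard deviation
cv = sd / mean             # coefficient of variation

print(f"n={n}, mean={mean:.2f}, median={median:.2f}, sd={sd:.2f}, CV={cv:.2%}")
```
– Mod]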

rabbit
January 27, 2014 1:56 pm

Richard of NZ:
Much statistical analysis is multivariate, meaning there are many dependent variables to be analyzed simultaneously. This requires far more sophisticated techniques than what you have listed.
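[To make rabbit’s point concrete: once several variables are measured at once, even basic description requires objects like a correlation matrix rather than a single mean and standard deviation. A minimal sketch; the three-variable data set is synthetic and made up for illustration:

```python
# Minimal sketch: with several variables measured simultaneously, the
# basic descriptive object is a covariance/correlation matrix, not a
# single mean and standard deviation. The data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))  # shared driver creates correlation
X = np.hstack([latent + rng.normal(scale=0.5, size=(100, 1))
               for _ in range(3)])  # 100 observations of 3 variables

print("per-variable means:", X.mean(axis=0))
print("correlation matrix:\n", np.corrcoef(X, rowvar=False))
```
– Mod]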

KNR
January 27, 2014 3:06 pm

Here is an idea: professionals publishing research should be held to a minimum standard, namely that their work has to meet the same requirements as the work one of their own students does when handing in an essay. I admit on reflection it’s a low standard, but it’s still a far higher standard than many in climate ‘science’ achieve on a regular basis in their published work.

January 27, 2014 7:19 pm

Interestingly, in studies of the fluctuations in global temperatures, the sample size is nil. Climatologists have the wrong idea about the methodology of scientific research.

Janice Moore
January 27, 2014 9:00 pm

Thank you, eyes on you (at 7:19pm, on Jan. 26th): “It was written and promoted by those above. Apparently it was read by … Marcia … .”
NOW, I get this:
Take a bow, gentlemen!
I was kind of ticked off for a minute, there… .
WHY??
Well! If some jerk had said it, I wouldn’t care (as if). But, it was, (sad eyes) An-tho-ny… . Glad I can keep on smiling!
#(:))

negrum
January 28, 2014 4:55 am

Janice Moore says:
January 27, 2014 at 9:00 pm
” … Well! If some jerk had said it, I wouldn’t care (as if). But, it was, (sad eyes) An-tho-ny… . Glad I can keep on smiling!”
—-
I thought it was Lance Wallace saying: “Take a bow, gentlemen!” to McIntyre, Montford, Wegman and others, to commend them for influencing Marcia (the editor.)

Janice Moore
January 28, 2014 12:53 pm

Thank you, Negrum, for correcting my mistake re: who said what above.