Lance Wallace writes
Science magazine has instituted a new policy requiring authors of preclinical studies to state their statistical plans (sample size estimation, treatment of outliers, etc.). See the editorial by the new Editor-in-Chief, Marcia McNutt (vol. 343, p. 229, 17 Jan 2014).
This reads as though it were written by McIntyre, Montford, Wegman….
“Because reviewers who are chosen for their expertise in subject matter may not be authorities in statistics as well, statistical errors in manuscripts may slip through. For that reason…we are adding new members to our Board of Reviewing Editors from the statistical community to ensure that manuscripts receive appropriate scrutiny in their methods of data analysis.”
That is, an audit!
Take a bow, gentlemen!
=============================================================
The article is here: http://www.sciencemagazinedigital.org/sciencemagazine/17_january_2014?pg=9#pg9
Now if we can just get publications like JGR to make the same demands of their authors, we’ll really be getting somewhere – Anthony
I’ve been around long enough to remember when Mosh’s primary demand was “code and data”
Now it seems to be “models are wonderful…”
I think that’s in part due to the decline in quality of medical studies. My daughter is an infectious disease doctor who does research into AIDS. She is a reviewer for a couple of medical journals and commented that most of the ‘studies’ she reviews are atrocious: simple math errors, horrible grammatical construction, cited papers that are misrepresented (she actually looks at the reference papers to see if they support the study’s findings; she said most of the time they do not), poor experimental design, ridiculously small sample sizes, and on and on. She says sometimes she’s embarrassed for the submitting researcher.
You mean they’re going to demand – gasp! – science in their scientific papers?
The horror!
Let us hope that this re-sets the standards appropriate for Science.
Combined with the last post, it seems as if a nadir has been reached (I’ve personally retired the term “tipping point”); was it the “shipoffools”?
I wouldn’t be so quick to applaud. It sounds as though Science is only applying this to medical papers, not as a general policy.
Still not reproducible.
All scientific studies that involve the application of statistics should define beforehand how the statistical analysis will be done and how conclusions will be gleaned from that analysis.
Why? Because there is no one way to do statistics. Without a predefined methodology, scientists can go shopping: they try out one statistical method after another until they get a result that matches what they hope or expect.
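To make that concrete, here is a minimal simulation (my own sketch, not anything from the Science policy): both groups are drawn from the same distribution, so any “significant” difference is a false positive. A single pre-specified test holds the false-positive rate near the nominal 5%; shopping among four post-hoc analyses pushes it well above that.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_TRIALS, ALPHA = 2000, 0.05
hits_single = hits_shopping = 0

for _ in range(N_TRIALS):
    a, b = rng.normal(size=20), rng.normal(size=20)  # same distribution: no real effect
    # Pre-specified analysis: one t-test, declared before seeing the data.
    p_single = stats.ttest_ind(a, b).pvalue
    # "Shopping": try several analyses and keep the smallest p-value.
    candidates = [
        p_single,
        stats.mannwhitneyu(a, b).pvalue,
        stats.ks_2samp(a, b).pvalue,
        stats.ttest_ind(np.sort(a)[1:-1], np.sort(b)[1:-1]).pvalue,  # "outliers" trimmed post hoc
    ]
    hits_single += p_single < ALPHA
    hits_shopping += min(candidates) < ALPHA

print(f"false-positive rate, pre-specified test: {hits_single / N_TRIALS:.3f}")   # ~0.05
print(f"false-positive rate, method shopping:    {hits_shopping / N_TRIALS:.3f}")  # well above 0.05
```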
It would be surprising if you were unable to supply the code and data; you just need not to lose them. Sadly, we are often surprised. The code (i.e., the detailed methodology) and the data let others see what you did. The Science policy is about saying in advance what you will do and what the null hypothesis is: it’s intended to limit cherry-picking, post-hoc justifications, sharp practice, and old-fashioned incompetence. One is the speed camera, the other is driver’s ed; both can lower road deaths.
That’s an excellent policy. Let’s see if they actually enforce it — especially in the Climate-science area. Climate Auditors, take note!
—
“A bad feeling crawled up my trouser leg.” — Jay Russell, Celestial Dogs, 1996
But will they enforce these and what if authors break the rules?
“Putting the science in climate science since… well since 2014.”
From what I understand, studies show that over 80% of observational studies in medicine are incorrect.
For example, Dr. John Ioannidis’s work is referenced in this article.
http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/
Good news, for once.
see rabbit run up there.
There’s only one way to do statistics. Apply the rules and algorithms of that discipline called statistical mathematics, and then make sure you understand how to do arithmetic accurately as well.
Like any kit of tools, you don’t usually have to use every single tool in the box on every task. Just the ones that fit the nuts and bolts in your system and perform the operations you need done.

But there are plenty of ways to misuse statistics, because, like projective geometry or string theory, you can’t really do anything with what the algorithms come up with.
Astrology teaches you how to count to 12, and also how to create totally random number sets that have a higher-than-even chance of correlating with virtually anything. Try reading the horoscope for your “sign” to see how well it pegs you. Then read the other eleven, and see if any of those fail to peg you as well.
So statistics determines well-defined attributes of ANY set of numbers (data sets) that need not relate to each other in any way: the colors of the first 100 cars you see today, say.

But the results are precisely defined by the algorithms of statistics, and quite valid no matter what the numbers in the set are.
You can do a statistical analysis of the numbers in your local telephone directory (if you still have a printed one.) You can either do just the valid telephone numbers, or include the street numbers as well; page numbers too if you like; ALL in the single data set. The statistics will be quite valid; I’m presuming the arithmetic is correct.
The results will be as meaningful as the GISS anomaly statistics. Quite valid, but not usable for anything, especially not for predicting anything.
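As a throwaway illustration (my own sketch; the “phone numbers” below are randomly generated, not from any real directory), every statistic here is computed correctly and means precisely nothing:

```python
import numpy as np

rng = np.random.default_rng(1)
phone_numbers = rng.integers(2_000_000, 10_000_000, size=500)  # fake 7-digit numbers

# Every quantity below is mathematically well defined and correctly computed...
print("mean:   ", phone_numbers.mean())
print("std dev:", phone_numbers.std(ddof=1))
print("'trend':", np.polyfit(np.arange(500), phone_numbers, 1)[0])  # slope per entry
# ...and none of it describes, explains, or predicts anything at all.
```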
My university professor who taught me projective geometry was quite proud of the fact that it isn’t of any use. What use is a discipline that says that all circles are hyperbolas, and that they all intersect each other at exactly the same two points?

Statistics isn’t of any use either. Well, it simply explains how surprised you will be when you learn what actually happens next.
Would it be far fetched to suggest that Willis’s letter to McNutt had some small influence?
http://wattsupwiththat.com/?s=McNutt+Willis
Willis: “The first issue is that despite repeated requests, past Science Magazine editors have flouted your own guidelines for normal scientific transparency. You continue to publish articles about climate science without requiring that the authors archive their data and code as required by your own rules.
The second issue is that in climate science, far too often Science magazine editors have substituted pal review for peer review. … seriously, you have published some really risible, really shabby, grade-school level studies in climate science. It’s embarrassing.”
That would be encouraging, if such communication works. I think all of science is widely aware of the “pause”, and an avalanche of journal policy and practice repairs will follow, since Science is the premier journal (certainly after the poli-sci transformation of the best European journals, which by now have surely been surpassed by China’s top scientific journal).
Although directed at medical studies, I believe the new rules will also apply to papers in other disciplines where conclusions drawn from statistical manipulation of data are central. I don’t believe Science has published proportionately as much on climate science as the other, ever more discredited and perhaps irredeemable journals like Nature and JGR.
Hmmm, I dunno…
I can’t see it catching on!
Added note: in a world of few real coincidences, the well-publicized, egregious Copernicus PRP fiasco was probably a tipping point for Science magazine, with others to come. Maybe the perps did do some real good after all.
“Science self-corrects”… Publishing a paper on the danger of tumors from GMOs, and then subsequently having to retract it, can become fertile ground for activists to allege a big conspiracy by Monsanto, as Elsevier found out. Elsevier should have taken responsibility for publishing a scientific study that had an incredibly low sample size and used a particularly tumor-prone strain of rat.
How many studies have been conducted regarding the safety of GMOs? 1783 studies. Not settled science, but not exactly virgin territory either.
refs: http://wattsupwiththat.com/2013/11/28/science-self-corrects-bogus-study-claiming-roundup-tolerant-gmo-corn-causes-cancer-to-be-retracted/
The scientific journals are seeing not only their reputations go away but their income as well. When blogs such as this one are where scientists come to vet their work and discuss it, it leaves those journals holding an empty bag.

As their integrity has been shown to be a farce, they are dying from loss of income and loss of reputation.

Many thanks to people like Anthony Watts, who gives an open venue for open discussion of differing ideas and theories. This alone is taking money from these journals. One need only look at the view count to see what is really making a dent in incomes and in bad science publications.
It seems the author of this post didn’t take notice of this relevant part: “authors of preclinical studies”.

All the other authors can go on as before…
Beware of statistical significance with poorly designed methodology and bad or fudged data. Reproducibility in climate science (models) would need to include accurate predictions. Like Yogi said, ‘predictions, especially (sic) about the future, are difficult’.
Too little too late, to have me renew my subscription.
In recent years, the FDA has tightened up significantly in their demands on the research plans and statistical models. I have managed several programs requiring such modeling subject to review by the FDA.
These models are very difficult to generate and implement, for a host of reasons:
1) Ethics.
Whether the experimentation is done on human beings, cadavers, large animals, or small rodents, there are serious pressures to reduce the number of individual creatures exposed to the experimental process. Most of you know that these kinds of experiments (in vivo) must get through an ethics review board.
2) Availability of appropriate cases
In some cases a clinical champion of a method/device/treatment/intervention may simply not be able to attract enough people (if people are required) who need the treatment. There have to be enough subjects to satisfy the statistical model for the study to reach statistical significance (a back-of-the-envelope sample-size calculation is sketched after this list). If the condition/treatment under study is rare, then the numbers may not be there.
3) Interest of the lead investigator of record to enforce compliance
Lead investigators of clinical studies must be interested in doing the math. Often they are not. Because of the reality of 2) above, MDs are often willing to settle for lower numbers and forgo the precision of the statistical model. Often they live in the world of maybes.
4) Cost
Doing preclinical studies is expensive, and sometimes running the numbers kills the program. A drug takes about $1 billion to get to market. A device takes $10 to $20 million.
5) Variable selection flaws
Any good model is only as good as the variables and the model selected. How exactly does one determine the rate of absorption of a bioabsorbable implant without cutting it out? That can’t be done on a living, treated person with any degree of precision; it would be unethical. The error in the measurements of the dependent variables often clouds the result, notwithstanding the numbers in the sample.
6) Time
Big experiments take time. Time = money, and it is a money-driven world.
7) Lack of statistical training among clinicians
Some doctors are very good with statistics. Some are terrible and think the data say things that the data do not say. Some look for singular events to validate their work… It happens.
8) Only the sick get treated
The truth is, only the sick get treated. It is unethical to treat healthy people. Often sick people are compromised in a number of ways that fog the post-treatment analysis. Imagine only being allowed to test a new medullary nail on terminal patients with fully involved cancer of the femur. How do you determine whether the nail worked properly? So much for stats in this case.
There are more but I wanted to illustrate the reality of clinical work.
Getting the stats model correct is very important, but often it is impossible, and accommodations are made provided that there is a potential net benefit to the patient.
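On point 2), a back-of-the-envelope calculation shows how quickly the required enrollment grows as the expected effect shrinks. This is just the standard two-sample normal approximation with illustrative effect sizes, not numbers from any of the programs described above.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per arm for comparing two means, where
    effect_size is the group difference in standard-deviation units (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

for d in (0.8, 0.5, 0.2):  # large, medium, small effects
    print(f"effect size {d}: ~{n_per_group(d):.0f} patients per arm")
# effect size 0.8: ~25 per arm; 0.5: ~63; 0.2: ~392
```

If the treatable population is rare, those enrollment numbers simply may not exist, which is exactly the wall described in point 2).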