Science magazine: Demanding reproducibility up front

Lance Wallace writes

Science magazine has instituted a new policy requiring authors of preclinical studies to state their statistical plans (sample size estimation, treatment of outliers, etc.).  See editorial by the new Editor in chief, Marcia McNutt (p. 229, volume 343, 17 Jan 2014).

This reads as though it were written by McIntyre, Montford, Wegman….

“Because reviewers who are chosen for their expertise in subject matter may not be authorities in statistics as well, statistical errors in manuscripts may slip through. For that reason…we are adding new members to our Board of Reviewing Editors from the statistical community to ensure that manuscripts receive appropriate scrutiny in their methods of data analysis.”

That is, an audit!

Take a bow, gentlemen!


The article is here:

Now if we can just get publications like JGR to make the same demands of their authors, we’ll really be getting somewhere – Anthony

David Jay
January 26, 2014 11:47 am

I’ve been around long enough to remember when Mosh’s primary demand was “code and data”

David Jay
January 26, 2014 11:48 am

Now it seems to be “models are wonderful…”

Bill Marsh
January 26, 2014 11:53 am

I think that’s in part due to the decline in quality of medical studies. My daughter is an infectious disease doctor who does research into AIDS. She is a reviewer for a couple of medical journals and commented that most of the ‘studies’ she reviews are atrocious: simple math errors, horrible grammatical construction, cited papers that are misrepresented (she actually looks at the reference papers to see if they support the study’s findings; she said most of the time they do not), poor experimental design, ridiculously small sample sizes, and on and on. She says sometimes she’s embarrassed for the submitting researcher.

Barbara Skolaut
January 26, 2014 11:55 am

You mean they’re going to demand – gasp! – science in their scientific papers?
The horror!

Doug UK
January 26, 2014 11:58 am

Let us hope that this re-sets the standards appropriate for Science.

January 26, 2014 12:01 pm

Combined with the last post, it seems as if a nadir has been reached (I’ve personally retired the term “tipping point”); was it the “shipoffools”?

January 26, 2014 12:02 pm

I wouldn’t be so quick to applaud. It sounds as though Science is only applying this to medical papers, not as a general policy.

January 26, 2014 12:06 pm

Still not reproducible.

January 26, 2014 12:09 pm

All scientific studies that involve the application of statistics should define beforehand how the statistical analysis will be done and how conclusions will be gleaned from that analysis.
Why? Because there is no one way to do statistics. Without a predefined methodology, scientists can go shopping where they try out one statistical method after another until they get a result that matches what they hope or expect.
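The cost of this kind of method-shopping is easy to quantify. Below is a minimal, stdlib-only Python sketch (the function name and the choice of ten methods are illustrative assumptions, not taken from the editorial): under a true null hypothesis every honest test’s p-value is uniform on (0, 1), so keeping the best of ten looks pushes the false-positive rate from the nominal 5% to roughly 40%.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def shopped_false_positive_rate(n_studies=10_000, n_methods=10, alpha=0.05):
    # Under a true null hypothesis, each test's p-value is uniform on (0, 1).
    # "Method shopping" keeps the smallest p-value across n_methods analyses.
    hits = 0
    for _ in range(n_studies):
        best_p = min(random.random() for _ in range(n_methods))
        hits += best_p < alpha
    return hits / n_studies

rate = shopped_false_positive_rate()
# Analytically: 1 - (1 - 0.05)**10 is about 0.401, eight times the nominal rate.
```

Pre-registering the analysis, as the new policy requires, removes exactly this degree of freedom.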
January 26, 2014 12:25 pm

It would be surprising if you were unable to supply the code and data; you just need not to lose it. Sadly, we are often surprised. The code (i.e., the detailed methodology) and the data let others see what you did. The Science policy is about saying in advance what you will do and what the null hypothesis is: it’s intended to limit cherry-picking, post-hoc justifications, sharp practices, and old-fashioned incompetence. One is speed cameras, and the other is driver’s ed. Both can lower road deaths.

January 26, 2014 12:30 pm

That’s an excellent policy. Let’s see if they actually enforce it — especially in the Climate-science area. Climate Auditors, take note!

“A bad feeling crawled up my trouser leg.” — Jay Russell, Celestial Dogs, 1996

Adam Gallon
January 26, 2014 12:46 pm

But will they enforce these and what if authors break the rules?

Rob Dawg
January 26, 2014 12:49 pm

“Putting the science in climate science since… well since 2014.”

January 26, 2014 12:54 pm

From what I understand, studies show that over 80% of observational studies in medicine are incorrect.
For example, Dr. John Ioannidis’ study is referenced in this article.

January 26, 2014 12:55 pm

Good news, for once.

george e. smith
January 26, 2014 12:56 pm

see rabbit run up there.
There’s only one way to do statistics. Apply the rules and algorithms of that discipline called statistical mathematics, and then make sure you understand how to do arithmetic accurately as well.
Like any kit of tools, you don’t usually have to use every single tool in the box on every task. Just the ones that fit the nuts and bolts in your system and perform the operations you need done.
But there are plenty of ways to misuse statistics, because it is like projective geometry, or string theory: you can’t really do anything with what the algorithms come up with.
Astrology teaches you how to count to 12, and also how to create totally random number sets, that have higher than even chance of correlating with virtually anything. Try reading the horoscope for your “sign” to see how well it pegs you. Then read the other eleven, and see if any of those fail to peg you as well.
So statistics determines well-defined attributes of ANY set of numbers (data sets) that need not relate to each other in any way: the color of the first 100 cars you see today, say.
But the results are precisely defined by the algorithms of statistics, and quite valid no matter what the numbers in the set are.
You can do a statistical analysis of the numbers in your local telephone directory (if you still have a printed one.) You can either do just the valid telephone numbers, or include the street numbers as well; page numbers too if you like; ALL in the single data set. The statistics will be quite valid; I’m presuming the arithmetic is correct.
The results will be as meaningful as the GISS anomaly statistics: quite valid, but not usable for anything, especially predicting anything.
My university professor who taught me projective geometry was quite proud of the fact that it isn’t of any use. What use is a discipline that says that all circles are hyperbolas, and that they all intersect each other at exactly the same two points?
Statistics isn’t of any use either. Well it simply explains how surprised you will be when you learn what actually happens next.

Gary Pearse
January 26, 2014 12:59 pm

Would it be far fetched to suggest that Willis’s letter to McNutt had some small influence?
Willis: “The first issue is that despite repeated requests, past Science Magazine editors have flouted your own guidelines for normal scientific transparency. You continue to publish articles about climate science without requiring that the authors archive their data and code as required by your own rules.
The second issue is that in climate science, far too often Science magazine editors have substituted pal review for peer review. … seriously, you have published some really risible, really shabby, grade-school level studies in climate science. It’s embarrassing.”
That would be encouraging, if such communication works. I think all of science is widely aware of the “pause,” and there will be an avalanche of journal policy and practice repairs to follow, since Science is the premier journal (certainly after the poli-sci transformation of the best European journals, which have certainly been surpassed now by China’s top scientific journal).
Although directed at medical studies, I believe the new requirements will also apply to papers in other disciplines where conclusions drawn from statistical manipulation of data are central. I don’t believe Science has published proportionately as much on climate science as the other, ever more discredited and perhaps irredeemable journals like Nature and JGR.

January 26, 2014 1:05 pm

Hmmm, I dunno…
I can’t see it catching on!

Gary Pearse
January 26, 2014 1:11 pm

Added note: in a world of few real coincidences, the well-publicized, egregious Copernicus PRP fiasco was probably a tipping point for Science magazine, with others to come. Maybe the perps did do some real good after all.

January 26, 2014 1:25 pm

“Science self-corrects”… Publishing a paper on the danger of tumors from GMOs, and then subsequently having to retract it, can become fertile ground for activists to insert a big conspiracy by Monsanto, as Elsevier found out. Elsevier should have taken responsibility for publishing a scientific study that had an incredibly low sample size with a particularly tumor-prone rat.

“However, there is legitimate cause for concern regarding both the number of animals in each study group and the particular strain selected. The low number of animals had been identified as a cause for concern during the initial review process, but the peer-review decision ultimately weighed that the work still had merit despite this limitation. A more in-depth look at the raw data revealed that no definitive conclusions can be reached with this small sample size regarding the role of either NK603 or glyphosate in regards to overall mortality or tumor incidence. Given the known high incidence of tumors in the Sprague-Dawley rat, normal variability cannot be excluded as the cause of the higher mortality and incidence observed in the treated groups.”

How many studies have been conducted regarding the safety of GMOs? 1783 studies. Not settled science, but not exactly virgin territory either.

Bill H
January 26, 2014 1:27 pm

The scientific journals are seeing not only their reputations go away but their income as well. When blogs such as this one are where scientists come to vet their work and discuss it, it leaves those journals holding an empty bag.
As their integrity has been shown to be a farce, they are dying from loss of income and loss of reputation.
Many thanks to people like Anthony Watts who give an open venue for open discussion of differing ideas and theories. This alone is taking money from these journals. One need only look at the view count to see what is really making a dent in incomes and bad science publications.

January 26, 2014 1:29 pm

It seems the author of this post didn’t take notice of this relevant part: “authors of preclinical studies”.
All the other authors can go on as before…

Jim G
January 26, 2014 1:37 pm

Beware of statistical significance with poorly designed methodology and bad or fudged data. Reproducibility in climate science (models) would need to include accurate predictions. Like Yogi said, ‘predictions, especially (sic) about the future, are difficult’.

john robertson
January 26, 2014 1:40 pm

Too little too late, to have me renew my subscription.

Paul Westhaver
January 26, 2014 1:40 pm

In recent years, the FDA has tightened up significantly in their demands on the research plans and statistical models. I have managed several programs requiring such modeling subject to review by the FDA.
Generating these models is very difficult for a host of reasons.
1) Ethics.
Whether the experimentation is done on human beings, cadavers, large animals or small rodents there are serious pressures to reduce the numbers of individual creatures exposed to the experimental process. Most of you know that these kinds of experiments (in vivo) must get through an ethics review board.
2) Availability of appropriate cases
In some cases a clinical champion of a method/device/treatment/intervention may simply not be able to attract enough people (if people are required) who need the treatment. There have to be enough subjects to meet the statistical model for the study to be mathematically significant. If the condition/treatment under study is rare, then the numbers may not be there.
3) Interest of the lead investigator of record to enforce compliance
Lead investigators of clinical studies must be interested in doing the math. Often they are not. Because of the reality of 2) above, MDs are often willing to be satisfied with lower numbers and forgo statistical precision. Often they live in the world of maybes.
4) Cost
Doing preclinical studies is expensive, and sometimes executing the numbers kills the program. A drug takes about 1 billion dollars to get to market; a device takes 10–20 million dollars.
5) Variable selection flaws
Any good model is only as good as the variables and the model selected. How exactly does one determine the rate of absorption of a bioabsorbable implantable without cutting it out? That is something that can’t be done on a living, treated person with any degree of precision; it would be unethical. The error in the measurements of the dependent variables often clouds the result, notwithstanding the numbers in the sample.
6) Time
Big experiments take time. Time = money, and it is a money-driven world.
7) Lack of statistical training among clinicians
Some doctors are very good with statistics. Some are terrible and think the data say things that the data do not say. Some look for singular events to validate their work… It happens.
8) Only the sick get treated
The truth is, only the sick get treated. It is unethical to treat healthy people, and sick people are often compromised in a number of ways that fog the post-treatment analysis. Imagine only being allowed to test a new medullary nail on terminal patients with fully involved cancer of the femur. How do you determine if the nail worked properly? So much for stats in this case.
There are more but I wanted to illustrate the reality of clinical work.
Getting the stats model correct is very important, but often it is impossible and accommodations are made provided that there is a potential net benefit to the patient.

January 26, 2014 1:42 pm

Bill Marsh,
Good on your daughter. When I was a reviewer for an NIH journal, I also looked at cited refs to see if they really said what was claimed in the submitted paper, and was appalled at how things were twisted. I’ve also witnessed the games editors play to bias outcomes and favor authors with whom they agree. The whole peer-review system is just appalling: way too subject to gaming, and really meaningless.
I, personally, think it ought to be junked in favor of open publication and a renewed appreciation that replication, not publication, is what is important. Until it’s been replicated by others outside the initial circle, it’s just assertion, pure and simple, no matter how pompous and prestigious (or even rigorous) the reviewers.

January 26, 2014 1:52 pm

In my opinion, all climate science should be published in the Journal of Irreproducible Results. The most prestigious scientific journal in the world.

Leon Brozyna
January 26, 2014 1:53 pm

Only time will tell …

January 26, 2014 2:18 pm

When I was an educator I had ‘Science’ delivered as one of the school magazines. I dropped it in the ’90s for its increasing lack of quality (as I recall).
I expect the growing disenchantment from many sources (such as WUWT) has been a stimulus for this change.
Let’s hope this change is good kindling!

January 26, 2014 2:28 pm

George E. Smith:

There’s only one way to do statistics.

Nearly snorted coffee through my nose over that one.
I write papers using statistics. There are endless numbers of ways — all of them at least somewhat justifiable — to statistically analyze a data set and reach conclusions about the underlying population.
Will you be using parametric or non-parametric analysis? Bayesian or frequentist? Robust or non-robust? Which tests will you invoke (there are endless numbers of them)? Which numerical algorithms will you use, given that the computation isn’t always easy and different algorithms can give different answers? How about trying eigen-methods like Principal Component or Singular Spectrum analysis to extract the dominant modes? The variations and options go on and on.
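To make the menu of choices concrete, here is a stdlib-only Python sketch with invented data, where two defensible analyses of the same paired differences disagree: a t-style test on the mean (using a normal approximation to stay dependency-free) sees nothing, while an exact sign test rejects.

```python
import math
from statistics import NormalDist, mean, stdev

# Invented paired differences (after minus before): mostly positive,
# with one large negative outlier that dominates the mean.
diffs = [1, 1, 2, 1, 1, 2, 1, 1, 1, -20]
n = len(diffs)

# Parametric route: one-sample test on the mean difference
# (two-sided p-value via a normal approximation to the t distribution).
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
p_param = 2 * NormalDist().cdf(-abs(t))

# Non-parametric route: exact two-sided sign test, which only asks
# how many of the differences are positive.
pos = sum(d > 0 for d in diffs)
tail = sum(math.comb(n, k) for k in range(min(pos, n - pos) + 1)) / 2 ** n
p_sign = min(1.0, 2 * tail)

# The mean-based test sees no effect (p well above 0.05); the sign
# test rejects (p about 0.02). Both choices are "somewhat justifiable."
```

Which answer gets reported is exactly the degree of freedom a pre-registered analysis plan takes away.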

January 26, 2014 2:30 pm

I may have mentioned this before. But when I was getting my MS, with a “heat transfer” emphasis, one assignment was to take one of about 30 sample “journal papers” which the professor handed out. NOW I know he was VERY selective on most of the papers. The one I took used a “variational method” to solve a transient conduction heat transfer problem for an arbitrary geometry. ONE example was given. The “basic” math was laid out in the paper (about 6 pages long). I showed up the night of our presentations (4-hour classes, once a week, extension grad school, U of Lincoln, NE) with 40 pages of “overheads.” I took almost an hour, and expanded all the “math” into USABLE equations, with numbers. I DID NOT DO AN APPLIED EXAMPLE, but showed where the author got all his “numerical results.” (6-by-6 matrices on a TI-59(??) helped.)
I thought Dr. Lu would “ding” me for not doing a unique application. The mathematics were so compact, and took SO MUCH EXPOSITION (multiple uses of the fact that log(1.0000XXX) = .0000XXX pretty much, that sin(small angle in radians) = small angle in radians, and several other tricks) that two weeks was barely enough to work it all out. Probably 20 to 30 man-hours (I was young and foolish, single, etc.). …Dr. Lu gave me an A for that presentation, AND he gave a 10-minute talk, pointing out: “Journal papers, because of limited space, have to condense much information, and it is ALWAYS a struggle to apply the mathematics and figure out how the author did HIS work. Sometimes you have to contact them, get a copy of a graduate student’s thesis, or a textbook they are writing, and so on.”
Well, that was THEN; this is NOW. NOW the researchers CAN provide the DATA, the MATH and the CODE, and they DARNED WELL SHOULD, if it’s PUBLIC MONEY????!!!!!!! If it’s PURE SCIENCE… If they work for a PUBLIC INSTITUTION (a university; a private one, like U of Chicago, Boston U, or Brigham Young, for example, could demand some OWNERSHIP, and rightly so). To quote Yoda: “Dragged, kicking, SCREAMING they will be, into the 21st Century, if LIVE to see it, they do…”

January 26, 2014 2:47 pm

This is not on topic, but the bankruptcy of the German wind farm company Prokon is worthy of a thread.
[Reply: These suggestions should be posted in Tips & Notes. ~mod.]

January 26, 2014 2:52 pm

In her editorial, Marcia McNutt writes, “For preclinical studies … we will be adopting the recommendation of the U.S. National Institute of Neurological Disorders and Stroke (NINDS) for increasing transparency.” However, she adds later that “we are adding new members to our Board of Reviewing Editors from the statistics community to ensure that manuscripts receive appropriate scrutiny in their methods of data analysis.”
So, as several earlier commenters have mentioned, the periodical Science will only be applying strict standards to preclinical medical studies. And I would argue that the inclusion of a few statisticians* in the Board of Reviewing Editors will likely have little impact on the quality of papers published outside the medical discipline. Also, nothing in McNutt’s editorial pertains directly to phenomenological or physics-based models, so Science’s new policy will do little, if anything, to improve the state of model validation or associated uncertainty quantification for papers that address climatology or other non-medical disciplines.
*Note: It has been my experience that statisticians who lack domain knowledge can actually be more harmful than helpful in the quest for truth.

January 26, 2014 3:12 pm

One choice I have had, beginning as recently as two years ago, when I peer-review a paper, has been to note whether a statistician should be brought in to examine the statistical methods. So, one way to address this is to allow the peer-reviewers to nominate those papers where the stats knowledge of content experts is not sufficient.

January 26, 2014 3:14 pm

So, a manuscript reporting the results of a trial is supposed to include the sample size analysis? This is often quite wordy, since it takes a lot to explain the argument for your sample size plan, and to explain the outcome from which you are extrapolating your expected clinically meaningful difference.
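For readers wondering what a sample-size analysis boils down to once the clinical justification has been written: the arithmetic itself is short. A stdlib-only Python sketch of the standard normal-approximation formula for a two-group comparison of means (the 0.5-SD effect size, 5% alpha, and 80% power are illustrative assumptions, not values from any paper discussed here):

```python
import math
from statistics import NormalDist

def two_sample_n(effect_size, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample z-test on means.

    effect_size is the clinically meaningful difference in SD units
    (Cohen's d). Uses the normal-approximation formula
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" 0.5-SD effect needs roughly 63 subjects per group;
# halve the effect size and the requirement roughly quadruples.
n_medium = two_sample_n(0.5)
n_small = two_sample_n(0.25)
```

Using the t distribution instead of the normal adds a subject or two per group, which is why published plans usually quote software output rather than this back-of-envelope form; the wordy part is defending the chosen effect size, not the formula.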

Adrian O
January 26, 2014 3:37 pm

All fine and dandy. But I’ll believe it when I see its effects.
What if “climate science is special?”
As in “specially abled.”
Strong on the moneyed side, weak on the sciency side…

January 26, 2014 4:12 pm

I’m skeptical of the motives of Science Magazine, including the current Editor-in-Chief, Board, and Reviewers.
The posted change makes me think a (another) lawsuit (perhaps like the 2004 one or the current complaint against the Nobel Assembly at the Karolinska Institute in Sweden, or the lawsuit regarding a retraction of a paper in ‘Food and Chemical Toxicology – Elsevier’) is playing in the background and “money” i.e. cash, patents, IP and corporate donors to the clinic or institute are involved.
As for AGU, i.e. GRL and JGR et al., with Mann and Trenberth et al. as the “go to” reviewers and controlling the AGU (new fee increases to fund new propaganda awards, the Climate Researchers Legal Defense Fund, bogus “journals” like ‘Earth’s Future’, the shenanigans [Emperor Hansen Has A Cold] at the Fall Meeting [or was it tickets to the Seahawks vs. 49ers game that moved the ice-breaker to Monday], and the HQ in DC with Wiley and Co. [a for-profit publisher needs more profit]), why bother submitting a paper?
Jeez and it is only Sunday.

wayne Job
January 26, 2014 4:13 pm

Statistics, in all its variations and esoteric maths, can be used to prove just about anything you want. Truth lies only in original, unfudged data; one only has to look at all the temperature series that have been manipulated to prove global warming.
Now, with the eyes of the internet firmly looking over their shoulder, statistical manipulation and fudges are becoming harder. Even with their best fudging, the temperature graphs show cooling. Warming through data fudging has painted these keepers into a corner, as people are now finding old data series that show the extent of the manipulation.
Statistics is proving to be the weapon of choice of the charlatan. The chickens are coming home to roost.

January 26, 2014 4:16 pm

One is speed cameras, and the other is drivers ed. Both can lower road deaths,
And even if they don’t lower deaths, they can raise revenue.

jim Steele
January 26, 2014 4:40 pm

There are so many retractions and irreproducible claims showing up in medical research. Read “New Truths That Only One Can See.”
One can only imagine how wrong many climate hypotheses are that can’t be reproduced or tested for decades. Like the failure to predict growing Antarctic sea ice: instead of acknowledging their failures, they devise new models to explain the failure away.

January 26, 2014 4:44 pm

“She…..commented that most of the studies she reviews are atrocious.”
Many years ago I taught a graduate class in basic research. Each of the students was required to read 5 journal articles of their choosing and to evaluate the articles according to the information presented in class about research design, randomization, selection of n, control of extraneous variables, etc., etc. They then had to decide, according to their criteria, whether each article was worthy of publication. I expected that about half would be thrown out, but was shocked to discover along with the students that 90 percent should have been rejected.
That was the state of science several years ago. I’d hate to see what the same activity would now reveal.

January 26, 2014 4:44 pm

What a bizarre confession from Science. If the new policy requires “reproducibility” and this will be out of reach for “expert reviewers”, what criteria have these “experts” been using all along when the authors could hide methodology and data at will?
In other words, if reviewers are incompetent with data/methodology, what on God’s green earth are they now? Mystics? Clairvoyants?

Leonard Jones
January 26, 2014 4:47 pm

I am rereading Galileo’s Revenge: Junk Science in the Courtroom by Peter Huber. This is outstanding news. Most of the “expert witnesses” who make junk science never publish their works, and are later proven frauds (see the Bendectin case). Or they only share their works with other like-minded true believers (like AGW supporters).
Even if Science Magazine is doing the right thing here, I noticed about 10 years ago that Scientific American had already drunk the AGW Kool-Aid. I loved that magazine as a kid.
If we cannot force Michael Mann to publish his works, maybe Mark Steyn’s defense lawyers can force through discovery what used to be done by real scientists in peer review.
My fear is that as long as Scientific American is compromised, and if other journals are not as discriminating as Science, nothing will change.

January 26, 2014 4:59 pm

I think that people may not have appreciated the significance of what Rabbit said (January 26, 2014 at 12:09 pm). It is nearly always possible to choose a statistical method which will give the result you want, if you torture the data enough (see Darrell Huff’s ‘How to Lie with Statistics’).
For example, “Eating chocolate is good for your teeth”
Who said so? — Independent Research
Who Commissioned it? – The British Sugar Corporation.
What did they find? – Children who eat chocolate have fewer cavities.
How many juries did you have to poll before you found that one?
Although it is not exactly what Rabbit meant, you can see that the Global Warming narrative is supported by something like this:
In the beginning, I show that temperatures are increasing year by year. No doubt about it, a global warming catastrophe looms. But around the turn of the century warming stops. So instead, I show temperatures filtered by a 21-year-wide Gaussian function, which has the effect of projecting the warming of the 1990s into the 21st century. Thus the running mean still gives the impression of increasing global temperatures (see the Met Office charts), but eventually even that starts to show no warming (because there has been no warming for more than half of the filter width). So instead I show a chart of temperature levels for each decade, each decade being warmer than the last. This manages to give the impression that temperatures are still rising even though they are not (see the latest IPCC report for this latest piece of sophistry).
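The filter-edge effect described here is easy to demonstrate with a plain centered running mean, which shares the Gaussian filter’s boundary behaviour. A stdlib-only Python sketch with an invented series (a steady linear rise, then a dead-flat “pause”); the smoothed curve keeps climbing for roughly half a window width after the raw data have flattened:

```python
def centered_running_mean(x, width=21):
    # Only full windows are kept, so the first and last width//2
    # points are dropped rather than padded.
    half = width // 2
    return [sum(x[i - half:i + half + 1]) / width
            for i in range(half, len(x) - half)]

# Invented "temperature" series: a steady rise for 100 steps,
# then 20 steps of no change at all.
series = [0.02 * t for t in range(100)] + [0.02 * 99] * 20

smooth = centered_running_mean(series)
# The raw data stop rising at step 100, but each smoothing window
# near the boundary still contains earlier, lower values from the
# rising segment, so the smoothed curve continues upward.
```

The same arithmetic that lends smoothing its noise-reduction also drags the past trend into the present, which is why the choice of filter and window width belongs in a pre-stated analysis plan.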

Mac the Knife
January 26, 2014 7:15 pm

It is a small step forward….. One can hope that the rest of Science submittals will be subjected to the same criteria in the coming months. It is certainly worth sending them a short note to laud this step forward to more transparency and encourage them to apply it universally!
Be sure to reference the editorial “Reproducibility” by the new Editor-in-Chief, Marcia McNutt (p. 229, volume 343, 17 Jan 2014).

January 26, 2014 7:19 pm

“This reads as though it were written by McIntyre, Montford, Wegman….” etc.
It was written and promoted by those above. Apparently it was read by the new Editor-in-Chief, Marcia McNutt.

January 26, 2014 8:30 pm

There’s no way the Team will permit such apostasy to stand.

January 26, 2014 8:58 pm

The ONLY question remaining here is whether or not top-level science even recognizes when it might very well exist……perhaps at the near end of the most recent interglacial? If not, please provide a cogent explanation as to how and why the Holocene may extend beyond about half a precession cycle. With or without anthropogenic influence. This is all I ask……
Otherwise this is all a silly buggers game, isn’t it?

John Robertson
January 26, 2014 9:01 pm

Great news!
Now to get JIR on board next with proper article review processes.
I can’t believe what they print!
(The Journal of Irreproducible Results)

Lloyd Martin Hendaye
January 26, 2014 9:11 pm

When Standard & Poor’s first started evaluating railroad bonds in the late 1860s, proprietors from Commodore Vanderbilt on down simply refused to supply valid data. Within 18 – 24 months, however, no operating railroad could maintain share prices without supplying verifiable statistics.
As Erie Gang scandals surfaced –Gould, Fisk, and Drew were printing railroad bonds in Wall Street basements– S&P’s nascent discipline of “securities analysis” spread to other industries, making honest women of many a brokerage house.
Why should Erie Gang successors –credentialed academic hustlers calling themselves “scientists”– be any different?

January 26, 2014 10:01 pm

Requiring real science should be a competitive advantage against the Holtzbrinck press (Nature, Scientific American).

george e. smith
January 26, 2014 10:47 pm

“””””…..rabbit says:
January 26, 2014 at 2:28 pm
George E. Smith:
There’s only one way to do statistics.
Nearly snorted coffee through my nose over that one.
I write papers using statistics. There are endless numbers of ways — all of them at least somewhat justifiable — to statistically analyze a data set and reach conclusions about the underlying population……”””””
Well you can drop all the weasel words you want to; I’m impressed. I guess you missed my simple analogy to a tool box, and my observation that you don’t have to use every tool on every problem. Well from your examples that you mentioned, it would seem that you agree with that.
I guess there’s a gazillion ways to do differential equations too. I’m not going to give a list of appropriate buzz words. The tools used depend on the problem; what you want to accomplish; just like with statistics.
My point was (and still is) that any of the tools are applied to some set of numbers; numbers that are already known. You don’t do statistical analysis on x, y, z, or a, b, c, or U1, U2, U3.
And the end results of applying the tool correctly, are quite independent of the numbers themselves; the result is equally valid for any set of already known numbers, whether those numbers are exactly calculated from some closed form formula, or whether no two of the numbers in the set are related in any way, as to origin. The result calculated by correct application of the appropriate algorithm, is an intrinsic property of that unique set of numbers. It doesn’t apply to any different set of numbers; they exhibit their own properties.
So the results of the analysis, can tell you nothing at all about any number, that is not a member of the set that was analysed.
All the twisting in the wind, can not imbue the result with any relativity or meaning outside that set.
And I said nothing about justifiability. Getting paid to do it, is as good a rationalization as any.
It’s over half a century since my last sojourn in academia; since then, excepting the last four months, I have operated in industrial jobs, as a practicing physicist. Not once in those 50-plus years was it ever necessary for me to do any statistics; well, maybe I might have calculated an average a couple of times.
All my design tasks required guaranteed operation, by design; never by statistical “betting.”
Despite the statistical improbability, somebody seems to win the lottery, even if they only bought one ticket. You can’t convince them of statistical significance.

Mindert Eiting
January 27, 2014 12:38 am

If I were the chief editor of Science or Nature, I would appoint Quentin Crisp, who once said that there is no need to do any housework at all. After the first four years the dirt doesn’t get any worse.

January 27, 2014 2:16 am

Mindert Eiting said @ January 27, 2014 at 12:38 am

If I were the chief editor of Science or Nature, I would appoint Quentin Crisp, who once said that there is no need to do any housework at all. After the first four years the dirt doesn’t get any worse.

He also claimed to be “One of the stately homos of England” in The Naked Civil Servant.

David Bailey
January 27, 2014 3:27 am

I seriously wonder if the age of science is coming to an end. As science has progressed, the easily demonstrated effects have been discovered and recorded, and in so many areas science now seems to be scrabbling in experimental noise – hoping that statistics can make up for the lack of signal, and the lack of decent starting data, and then grossly exaggerating the significance of what they ‘discover’.
When the data tells you nothing, personal prejudice and concerns about funding supply the signal!
People still think science is advancing, but a lot (not all) of that comes from the exploitation of science from long ago – Quantum Theory is now about 100 years old!
Can anyone seriously claim that the various global land temperature records are fit for purpose, given all the adjustments that Anthony has helped to expose? Why would anyone torture such a dubious data set to try to extract a signal of any sort?
The internet is awash with other science-based scandals – usually documented by knowledgeable people who used to be part of the science establishment. Having seen the way climate scientists work, I’ll bet at least a few of them are true:
1) There are claims that the evidence that saturated fat is harmful rests on a cherry-picked graph by a man called Ancel Keys, and that there is no reliable evidence that fat is harmful. Indeed, it is suggested that the obesity crisis is at least partly the result of people abandoning fatty food in favour of carbohydrates.
2) There are claims that the benefits of statins have been grossly overstated, while the risk of side effects has been downplayed. I myself got a bad reaction to simvastatin, which could have been that I was just unlucky, but I was amazed that when I mentioned this to others, many had statin horror stories of their own.
3) There are claims that AIDS is not caused by the HIV virus. If true, this is a terrible scandal.
4) Peter Woit runs a blog site devoted to the fact that despite enormous effort over 30 years, string theory has not been tested in any positive way. The LHC delivered the Higgs, which is part of the Standard Model, but nothing else! He bemoans the fact that despite this, theorists constantly push string theory based ideas such as the multiverse, etc.
The thing that unites all these stories is that the scientific establishment seems to try to rubbish all dissent, and never – absolutely never – engages with dissenters in open debate of the scientific issues.

January 27, 2014 6:30 am

rabbit says:
January 26, 2014 at 2:28 pm
There are endless numbers of ways — all of them at least somewhat justifiable — to statistically analyze a data set and reach conclusions about the underlying population
That is the problem with applying statistics to analyze data. If you pick and choose your method AFTER you have seen the data, then you have biased the result, but your statistics will not reveal this, either to you or to your audience.
Picking your method of analysis after the data has been viewed is cherry picking: not cherry picking of data, but cherry picking of method. It is a largely unrecognized form of bias in scientific reporting that results in large numbers of false positives.
Medicine, it appears, is finally starting to understand the problem, and Science magazine is now adopting new standards for medical studies. But the rest of the scientific fields are still having trouble understanding why their statistical studies are largely full of false positives.
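The bias this comment describes can be demonstrated with a small simulation. In the hypothetical sketch below (illustrative only, not drawn from any study mentioned in the thread), both groups are sampled from the same distribution, so every "significant" result is a false positive. An analyst who fixes one test in advance stays near the nominal 5% rate; an analyst who tries three test statistics and keeps the best p-value does worse:

```python
import random
import statistics

def perm_p(a, b, stat, n_perm=200):
    """Two-sided permutation p-value for the difference stat(a) - stat(b)."""
    observed = abs(stat(a) - stat(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(stat(x) - stat(y)) >= observed:
            hits += 1
    return hits / n_perm

random.seed(0)
trials, alpha = 200, 0.05
fp_single = fp_best = 0
for _ in range(trials):
    # Both samples come from the same N(0, 1) population: no real effect exists.
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]
    p_mean = perm_p(a, b, statistics.mean)
    p_median = perm_p(a, b, statistics.median)
    p_trim = perm_p(a, b, lambda v: statistics.mean(sorted(v)[2:-2]))  # "outliers removed"
    fp_single += p_mean < alpha                          # method fixed in advance
    fp_best += min(p_mean, p_median, p_trim) < alpha     # method chosen after seeing the data

print(f"false-positive rate, method fixed in advance: {fp_single / trials:.2f}")
print(f"false-positive rate, best of three methods:   {fp_best / trials:.2f}")
```

The "best of three" rate is never lower than the fixed-method rate, since the smallest of the three p-values is at most the first one; pre-registering the analysis plan, as Science now requires, removes exactly this degree of freedom.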

January 27, 2014 9:27 am

Reproducibility should always have been the touchstone. There are ways to protect proprietary information while still allowing others to attempt to reproduce the results.
I learned as a child how the scientific method was supposed to work. It was very surprising to me to find that science magazines were quite comfortable in not requiring that papers submitted to them demonstrate reproducibility, and even more surprising to find that the scientists themselves either could not or would not supply the information their peers or skeptics needed to obtain similar results and thus confirm their theories.
How much time, frustration and “inconvenient truths” we all would’ve been spared if they had just once done things the way they were supposed to be done.
Of course, had they done so, they would not have created the reputations and positions of renown and authority that they held until lately.

January 27, 2014 10:28 am

This is many decades overdue, but better late than never. This is an improvement we have the global warmers to thank for, because it is their egregious science that finally stirred the editors at Science to make it.

January 27, 2014 10:40 am

How can anyone be expected to do alarming, grant attracting [science] if we have to stick to the truth?

Richard of NZ
January 27, 2014 12:59 pm

Perhaps I am becoming even more cynical in my old age, but I more and more get the feeling that the only statistics techniques that should be used are:-
Sample size
Standard deviation
Coefficient of variation
Any statistical manipulations beyond these should be taken as conclusive evidence of cheating.
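Richard of NZ's short list can be computed from the standard library alone. A minimal sketch (the sample values are made up for illustration):

```python
import statistics

def basic_stats(sample):
    """Richard of NZ's approved list: sample size, standard deviation,
    and coefficient of variation (sd expressed as a fraction of the mean)."""
    n = len(sample)
    sd = statistics.stdev(sample)       # sample (n - 1) standard deviation
    cv = sd / statistics.mean(sample)   # undefined when the mean is zero
    return n, sd, cv

n, sd, cv = basic_stats([9.8, 10.1, 10.0, 9.9, 10.2])
print(f"n={n}, sd={sd:.3f}, cv={cv:.4f}")  # n=5, sd=0.158, cv=0.0158
```

Whatever one thinks of banning everything beyond these three, they do share a virtue: each is a fixed function of the data, with no analyst-chosen tuning after the fact.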

January 27, 2014 1:56 pm

Richard of NZ:
Much statistical analysis is multivariate, meaning there are many dependent variables to be analyzed simultaneously. This requires far more sophisticated techniques than what you have listed.

January 27, 2014 3:06 pm

Here is an idea: professionals publishing research should be set a minimum standard, so that their work has to meet the same requirements as the work one of their own students does when handing in an essay. I admit on reflection it’s a low standard, but it’s still a far higher standard than many in climate ‘science’ achieve on a regular basis in their published work.

January 27, 2014 7:19 pm

Interestingly, in studies of the fluctuations in global temperatures, the sample size is nil. Climatologists have the wrong idea about the methodology of scientific research.

Janice Moore
January 27, 2014 9:00 pm

Thank you, eyes on you (at 7:19pm, on Jan. 26th): “It was written and promoted by those above. Apparently it was read by … Marcia … .”
NOW, I get this:
Take a bow, gentlemen!
I was kind of ticked off for a minute, there… .
Well! If some jerk had said it, I wouldn’t care (as if). But, it was, (sad eyes) An-tho-ny… . Glad I can keep on smiling!

January 28, 2014 4:55 am

Janice Moore says:
January 27, 2014 at 9:00 pm
” … Well! If some jerk had said it, I wouldn’t care (as if). But, it was, (sad eyes) An-tho-ny… . Glad I can keep on smiling!”
I thought it was Lance Wallace saying: “Take a bow, gentlemen!” to McIntyre, Montford, Wegman and others, to commend them for influencing Marcia (the editor.)

Janice Moore
January 28, 2014 12:53 pm

Thank you, Negrum, for correcting my mistake re: who said what above.
