Guest post by S. Stanley Young and Warren Kindzierski
A 2015 meta-analysis published in the journal PLOS One claimed that “…exposure to potentially anti-estrogenic and immunotoxic, dioxin-like congeners & phenobarbital, CYP1A and CYP2B inducers might contribute to the risk of breast cancer”. We used p-value plots to evaluate the statistical reliability of this claim. One of us (Young) had email correspondence with PLOS One editorial staff over four years. Our p-value plots show that the claimed PCB−breast cancer risk is false. PLOS One editorial staff indicated the statistical approach and methods used in the meta-analysis are considered acceptable.
The business model of many journals is that the author pays a publishing fee. Further, the incentives of the author and publisher can work against sound science and the public interest. Part of the problem with PLOS One editors showing no real concern about publishing a false PCB−breast cancer risk claim (or any other exotic claim) based on meta-analysis may be that the publishing process is lucrative for them. Overall, this experience tells us that it is time for the public to view any meta-analysis published in the journal PLOS One as untrustworthy until proven otherwise.
False findings in medical research are far too common. John Ioannidis and others noted back in 2011 that “traditional areas of epidemiologic research more closely reflect the performance settings and practices of early human genome epidemiology …showing at least 20 false-positive results for every one true-positive result”. We recently reported that published estimates of irreproducible medical research range anywhere from 51–100% depending upon the discipline.
A meta-analysis is a method used to analyze evidence that answers a specific research question, such as whether a particular risk factor causes a disease. It combines test statistics from multiple individual studies found in the literature that all asked the same question. Meta-analysis is considered by many, perhaps mistakenly, to be the crème de la crème of methodologies for synthesizing evidence in published research (e.g., see here, here).
However, we have noted elsewhere – really, anywhere we care to look – that findings of meta-analysis studies in the environmental epidemiology field lack sound statistical support and are mostly false. For example, see here, here, here. Why is this? Well, among other things, it is due to routine use of questionable research practices (such as analysis manipulation, p-hacking, HARKing, etc.).
Here we describe our experience of how journals and their editors work to preserve false findings in meta-analysis studies they publish. We show this using a 2015 meta-analysis published in the journal PLOS One.
PLOS One meta-analysis
Back in 2018, one of us, Young, looked at a meta-analysis published in the journal PLOS One… “Environmental polychlorinated biphenyl exposure and breast cancer risk: A meta-analysis of observational studies” (Zhang et al. 2015). The meta-analysis claimed that “…exposure to potentially anti-estrogenic and immunotoxic, dioxin-like congeners & phenobarbital, CYP1A and CYP2B inducers might contribute to the risk of breast cancer”. This claim seemed rather fantastic given that it was based on environmental epidemiology observational studies.
We have reported on how to independently evaluate the statistical reliability of meta-analysis studies using p-value plots (see here). A p-value plot is straightforward to construct (the p-values of the individual base studies are sorted and plotted against their ranks) and it is interpreted in the following way… if the p-values roughly fall on a 45-degree line in the plot, they support randomness (no real effect). If the p-values are mostly smaller than 0.05, they support a real effect. A bilinear (hockey-stick-shaped) p-value plot indicates ambivalence (uncertainty) about an effect.
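As a concrete illustration of how such a plot can be built, here is a minimal sketch. The odds ratios below are hypothetical, not the Zhang et al. values; the two-sided p-values are recovered from each odds ratio and its 95% confidence interval, a common convention when re-analyzing forest plots.

```python
from math import log
from statistics import NormalDist

def p_from_or(or_, lo, hi):
    """Two-sided p-value recovered from an odds ratio and its 95% CI."""
    se = (log(hi) - log(lo)) / (2 * 1.959964)  # standard error on the log scale
    z = log(or_) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical base studies as (OR, CI lower, CI upper) -- NOT Zhang et al.'s data:
studies = [(1.10, 0.80, 1.51), (0.95, 0.70, 1.29), (1.25, 0.90, 1.74),
           (1.02, 0.85, 1.22), (0.88, 0.60, 1.29), (1.15, 0.95, 1.39)]
pvals = sorted(p_from_or(*s) for s in studies)

# The p-value plot: rank (1..n) on the x-axis, sorted p-values on the y-axis.
# Under the null (no real effect), the points hug a 45-degree line.
for rank, p in enumerate(pvals, start=1):
    print(rank, round(p, 3))
```

With real effects, the sorted p-values would instead cluster well below 0.05 at the low ranks rather than rising steadily along the diagonal.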
Young emailed a Zhang et al. co-author in China and cc’d a PLOS One editor in the US, asking for further information about their Figure 2 (Forest plot describing the association between total PCB exposure and breast cancer risk). Young constructed a p-value plot from their Figure 2 data. It is shown below. The plot clearly shows a near 45-degree line, i.e., no real effect of total PCB exposure on breast cancer risk!
P-value plot for base studies describing the association between total PCB exposure and breast cancer risk (Zhang et al. Figure 2):
Young then emailed a publications assistant at PLOS One, attached the p-value plot, and stated that the Zhang et al. PCB−breast cancer claim was not supported by the p-value plot. By that time PLOS One had opened a case file on the issue. The publications assistant passed along Young’s concern to the Academic Editor who had originally handled the manuscript.
Now fast forward four years, to April of this year. A PLOS One staff editor finally emailed Young back. The staff editor indicated that the PLOS One Editorial Board with expertise in meta-analysis had looked further into Young’s concern. The staff editor stated … “Based on this assessment, we consider that the authors do not imply an effect of total PCB exposure on breast cancer, based on the data in Figure 2. In light of this, we will not be pursuing this case further at this time.”
Young then emailed the staff editor back and explained that the multiple testing Zhang et al. performed increased their chances of getting a statistically significant, but false, finding among their results. Several days later a different staff editor responded to Young by email and stated … “PLOS ONE abides by guidelines set forth by the Committee on Publication Ethics (COPE), of which this journal is a member. We have followed up on these additional concerns in consultation with the Editorial Board, which assessed that the statistical approach and meta-analytical methods used are considered acceptable. Therefore, no editorial action will be taken on the published article.”
Four years and a couple of vague follow-up emails from two PLOS One staff editors and no correspondence from someone with actual statistical knowledge of problems associated with multiple testing. Now we really wanted to know how deep the statistical problems went in the Zhang et al. study.
We constructed a p-value plot from their Figure 4 (Forest plot describing the association between potentially antiestrogenic and immunotoxic, dioxin-like PCBs exposure and breast cancer risk). We also did a plot from their Figure 5 (Forest plot describing the association between phenobarbital, CYP1A and CYP2B inducers, biologically persistent PCBs exposure and breast cancer risk). Figures 4 and 5 represent the key evidence used by Zhang et al. to make their claim. Our p-value plots are shown below.
P-value plot for base studies describing the association between potentially antiestrogenic and immunotoxic, dioxin-like PCBs exposure and breast cancer risk (Zhang et al. Figure 4):
P-value plot for base studies describing the association between phenobarbital, CYP1A and CYP2B inducers, biologically persistent PCBs exposure and breast cancer risk (Zhang et al. Figure 5):
Both of these plots show near 45-degree lines or no real effects! We just independently proved that the Zhang et al. PCB−breast cancer risk claim is false. So much for the PLOS One Editorial Board with expertise in meta-analysis being able to recognize this. Perhaps their Board expertise is thin in the area of statistics or perhaps they do not want to admit that these problems exist in their published meta-analysis studies?
Statistical errors are an important contributor to false (irreproducible) research. Douglas Altman – one of the most highly cited researchers in any scientific discipline (see here) and a long-time chief statistical adviser for the British Medical Journal – noted way back in 1998 that… “The main reason for the plethora of statistical errors [in research] is that the majority of statistical analyses are performed by people with an inadequate understanding of statistical methods” and “…they are then peer reviewed by people who are generally no more knowledgeable”. It appears not much has changed.
Big money business of publishing meta-analysis studies
We know most published research is false; but just how motivated are journals (and their editors) to fix this? Let’s look at the case of meta-analysis. We used the Advanced Search Builder capabilities of freely available PubMed to estimate the number of systematic review and meta-analysis studies published in the journal PLOS One from 2012 to present (3 May 2022). We used the exact terms (“PLOS One”[Journal]) AND ((systematic review[Title/Abstract]) AND (meta-analysis[Title/Abstract])).
Our search returned 2,484 articles (240 articles per year; 20 per month). PLOS One currently levies a fee of $1,805 USD to publish a meta-analysis original research article. This amounts to ~$36K per month (~$430K annually) of revenue publishing meta-analysis studies going forward – a cash cow!
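The arithmetic behind these figures can be checked in a few lines. The time span used here (January 2012 through early May 2022, roughly 10.3 years) is our assumption:

```python
# Rough check of the revenue estimate: 2,484 systematic review /
# meta-analysis articles published in PLOS One, Jan 2012 to May 2022.
articles = 2484
years = 10 + 4 / 12          # ~10.3 years of publishing
fee = 1805                   # USD publication fee per research article

per_year = articles / years  # ~240 articles per year
per_month = per_year / 12    # ~20 articles per month
print(f"{per_year:.0f}/yr, {per_month:.0f}/mo, "
      f"${per_month * fee:,.0f}/mo, ${per_year * fee:,.0f}/yr")
```

The result lands at roughly $36K per month and $430K per year, matching the figures above.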
We know the journal peer review process is broken and there is little incentive to fix it. We know the business model of journals depends on publishing, preferably lots of studies as cheaply as possible. We also know that journals seek novelty in research in part because of the competition for impact factor and prestige. In fact, editors are often rewarded for actions that increase the prestige of their journal.
Given all this, what journal would want to mess with ~$430K annual revenue publishing meta-analyses with claims that are mostly false? The answer is obvious… journals and their editors will do what is needed to maintain the status quo (even if it means repeatedly publishing false research claims). It is far too lucrative a game to want to change.
Richard Smith, a long-time editor of the British Medical Journal, was a cofounder of the Committee on Publication Ethics (COPE), for many years the chair of the Cochrane Library Oversight Committee, and a member of the board of the UK Research Integrity Office. Last year he best summarized how we should treat medical research… “It may be time to move from assuming that research has been honestly conducted and reported to assuming it to be untrustworthy until there is some evidence to the contrary”.
The business model of many journals promotes incentives for the author and publisher to work against sound science and interest of the public. Our position is that it is time for the public to view any meta-analysis published in the journal PLOS One as untrustworthy until proven otherwise.
S. Stanley Young is with CGStat in Raleigh, North Carolina and is the Director of the National Association of Scholars’ Shifting Sands Project. Warren Kindzierski is a retired professor in St Albert, Alberta.
“It may be time to move from assuming that research has been honestly conducted and reported to assuming it to be untrustworthy until there is some evidence to the contrary”
“…We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try to find something wrong with it…” —Dr. Phil Jones, Director of the Climate Research Unit at East Anglia University, email to Warwick Hughes, 2004
“I’m getting hassled by a couple of people to release the CRU station temperature data. Don’t any of you three tell anybody that the UK has a Freedom of Information Act.” —Dr. Phil Jones, Director of the Climatic Research Unit, disclosed Climategate e-mail, Feb. 21, 2005
Little research these days can be called honestly conducted
“I received an astonishing email from a major researcher in the area of climate change. He said, ‘We have to get rid of the Medieval Warm Period!’ …In 1999, Michael Mann and his colleagues published a reconstruction of past temperatures in which the MWP simply vanished…” —Dr. David Deming, testimony before the Senate Committee on Environment and Public Works, Dec. 6, 2006
Am I wrong? Have you seen retraction watch lately?
“Have you seen retraction watch lately?”
Maybe we should have Nina Jankowicz and the Ministry of Truth look into this, ya think?
Luckily, I have no idea who Nina Jankowicz is….
She’s an American version of Stasi agent Victoria…
Maybe she could Tik Tok us a song about it. She has quite the nice singing voice!
Since I was one of those people asking Phil Jones about the data he was using in his publications, I have first hand knowledge of what was really going on.
Put simply, people at University of East Anglia were doing high school level dabbling in a few diverse historic temperature records, then publishing.
Some data involved Australia. It was immediately obvious that there had been selective adoption of which data to use and that this was leading to an unreliable conclusion.
The complaints should have come from those of us who preferred better work. Having this UEA mob complain about pressure to do grown-up science is rather hard to take seriously. Geoff S
The giant 1993 EPA meta study of Second Hand Smoke comes to mind.
The CATO institute is hardly objective, but then neither was the EPA’s study.
Well anyway, here’s CATO’s takedown:
The Second‐Hand Smoke Charade
At the time I remember reading that there were 30 some odd studies involved, and more than one said 2nd hand smoke was good for you.
Anyway, at the time I wasn’t buying the notion that a whiff of tobacco smoke in the air was going to induce cancer in anyone.
Here’s another link:
Passive Smoke The EPA’s betrayal of Science and Policy
Virtually all of EPA’s Air Quality regulations are based, at least in part, on statistically invalid research studies. That is why EPA has fought tooth and nail against all calls to make research study data public. EPA has a history of contracting with researchers that will provide the findings they want and rejecting independent research that contradicts their predetermined position. I’ve personally had detailed research findings submitted to EPA in rule making proceedings dismissed with no more justification than “we don’t agree with the comment”.
EPA also uses dose extrapolation models extensively. They have no scientific legitimacy.
Recently did a bit of research on the viability of Ivermectin to treat COVID. I found many studies; some showed a positive outcome, some did not. I did find two meta-analysis studies that used two different statistical methods. One showed Ivermectin was a viable treatment; one showed no statistical evidence to support that Ivermectin was effective. I don’t have the knowledge to determine which method of meta-analysis was the correct fit. In short, I still don’t know if Ivermectin is effective beyond the anecdotal evidence of people I know who have taken Ivermectin for COVID.
Also interesting are the meta-analysis on second-hand cigarette smoke and the WHO meta-analysis on glyphosate.
What I know is that there are quite a few aged people in my circle who have been using Ivermectin prophylactically, most purchased from sources overseas, but two who raise livestock have used the topical ointment for years now, and not a single one of them has contracted COVID to their knowledge. They all say they are also taking vitamins to elevate their levels of Vitamin D and C and Zinc.
Anecdotal evidence? Sure! But I judge that better evidence of what works than the confusing and contradictory crap that I have read on the subject since COVID emerged.
Vitamin D and zinc definitely helps keep COVID at bay. I have no idea if Ivermectin would add anything to that but if they like it, by all means carry on.
Which is better, ivermectin or plaquenil? How do you know?
Don’t know. Just told you what I do know. I don’t even know if it’s the vitamins or the Ivermectin or both that have prevented them from getting it.
What I am also sure of is that this truck driver had Covid several months before it became a thing in the news and recovered. I refused the jabs. Have been taking a multivitamin for some time and have not contracted COVID again.
Ivermectin appears to be better but some physicians combine them. When we caught Covid we had Ivermectin on hand and felt better the day after the first dose as did our friends who include three doctors and two RNs. The one friend who died, also a doctor, refused to consider HCQ or Ivermectin and ‘followed the science’ to his grave.
That laboratory refused to share the data they used to reach that opinion.
PLOS One does not advertise “Impact Factors” as much as most journals, but they admit to a “Business Model” for Open Access, 238,752 articles published, 200,000 since 2004. They guarantee “fast and thorough” publication. “Rather than relying exclusively on journal-level metrics such as the Impact Factor, PLOS offers individualized Article-Level Metrics reflecting the viewership, download rates, social sharing, and citations for each article we publish in real time, helping you illustrate the impact of your research.”
While there is increasing pushback, many journals still immediately advertise their factor; one example is Marine Pollution Bulletin, whose page leads with:
“Latest Published | Top Cited | Most Downloaded | Most Popular”
Authors ==> If the Zhang Figs 4 & 5 are supposed to show above, they do not, at least for me.
Same for me.
These missing figures show the same trend (near-45 degree lines) as the first figure. I emailed Charles to ask if the article can be fixed up to include them.
A related anecdote. I convincingly showed in a post at Climate Etc at the time that Marcott’s 2013 Hockey stick paper in Science was simply academic misconduct. This was easy to do by comparing his thesis version with his Science version. There was even a ‘smoking gun’ comparison figure.
I emailed McNutt, then Science chief editor, a copy and a request for retraction. Her assistant acknowledged receipt. Nothing ever happened. So I added the post as essay ‘A High Stick Foul’ in ebook Blowing Smoke, and this anecdote as a scathing footnote.
There is a lot of really bad stuff out there in the medical and climate literature.
Blowing Smoke identified no less than 5 separate cases of clear academic misconduct in peer reviewed climate papers, and three more where the conclusions of large scale ‘studies’ were just laughably stupid when you dug into the details.
Of course she wouldn’t retract that, as she is a leading climate change promoter. It is then a noble lie.
It’s not just medical and climate. I didn’t save the link, but I recall marine journal retractions, mostly from foreign sources. There are a number of questionable papers that get “boilerplate” use in introduction sections; the old necessity of doing your homework gets replaced by punching computer keys. Posted this elsewhere, worth repeating–
Rose, K. A. 2012. End-to-end models for marine ecosystems: Are we on the precipice of a significant advance or just putting lipstick on a pig? Scientia Marina 76(1):195-201.
End-to-end models are the [impossible?] physical and biological totality of an ecosystem.
I wish it were PLOS the only One. It’s not. Here’s a delightful commentary on Nature:
If the report is good and the author has confidence in it, critical review should be welcome. As Edison said, “I have not failed, I have found 10,000 ways that don’t work.”
Better to have criticism and let the chips fly where they may.
How is publishing false information not illegal? I don’t understand this. Of course I don’t know squat about the laws but I can tell the difference between right and wrong. It is wrong to print false information especially under the pretext that the information is scientifically proven. That is a double crime and these people should be punished. Most of us don’t know a thing about how and why things are done in the scientific community, I always assumed that since the published work was peer reviewed that I could trust it. That clearly is not the truth. I assumed that peer review had a dual purpose. Number one to help the writers/researchers find any mistakes they may have made and give them a chance to defend or correct them. Number two to give readers and consumers of this knowledge confidence that what they are reading is reasonably accurate. The current state of scientific publishing is a disgrace and needs to be fixed, by that I mean researchers and publishers need to be held to account.
“How is publishing false information not illegal?”
In the US at least, First Amendment.
That’s a dangerous path to tread – who determines what information is false? The Government?
Not so; they knowingly printed false information, and that information was easily proven wrong. Those responsible were shown it was, given an opportunity to right the wrong, and refused. What part of that don’t you understand? I didn’t say a thing about having the government police our speech. People who know far more than you or me easily proved it false. I don’t know about you, but that is good enough for me and damn sure enough for action, and those accused will be able to defend their actions. Nothing improper, everything above board and visible. Yes, the government has to be involved because the courts are a part of the government. That is the avenue we must use, and that is as it should be.
I didn’t say a thing about having the government police our speech.
Perhaps not, but you DID say “How is publishing false information not illegal?”
“Illegal” means “against the law” which suggested to me that you were saying government should prohibit it.
I have a major problem with government prohibiting any speech, false or not. It sets a very dangerous precedent. (i.e. “Disinformation Governance Board”)
So if that is not what you intended, then perhaps you can clarify exactly what you mean by “illegal”?
That you have a problem with government prohibiting any speech is not a problem for me. That’s fine. You have the right to say or do as you please but be mindful you are responsible for your actions and your speech. If you willfully lie about someone you should be held accountable. Taken to the cleaners.
Cui Bono. All the well-known journals are horrendously expensive for authors, so only the well-funded authors can get published. Consequently, the journals do not reflect actual research as much as they reflect funding. Research is bought.
Lesser journals may or may not have a less-rigorous review process, but consider how motivated the major journals would be to approve papers funded by wealthy groups or individuals. Remember the “pal-review” accusations of a few years ago? I doubt that anything has changed much since then. One answer is not to put false faith in peer-review, to be prepared to read papers from any journal, and to treat each paper on its merits.
My recent paper was published by an open-access journal with an author charge way below the $1,000–5,000 cost of the better-known journals and the >€9,000 for Nature open access. I only found it thanks to another WUWT reader. Hopefully my next paper will be accepted by the same journal.
This is your paper, right, Jonas?
Correct. I did an article on it on WUWT on 28 Mar: https://wattsupwiththat.com/2022/03/28/the-100000-year-problem-and-earths-chaotic-non-linear-climate/
There is one change: A commenter challenged the affiliation, so I contacted the university and was advised, “I have been advised that you should please tell the journal that you have no affiliation.” Consequently, the affiliation has changed. There is a discussion in the WUWT comments, which closed before I could update with the university response. It’s sad if people go for ad hominem instead of arguing the maths or the science, but that’s the reality.
IIRC there was a WUWT article a few years ago about the eventual controlling ownership of the major science journals. Can anyone remember the link? I have looked but failed to find. Geoff S
As a non-scientist type, I am highly suspicious of any articles or reported findings where the author must pay to have them published in a journal. Seems like it’s dishonest, or like an advertisement, or something.
Maybe, that’s just me.
Research nonsense is an old problem (look at Irving Langmuir’s “pathological ” science lecture in 1953) made quite a bit worse today by it being used so often for public policy and law. The only way to combat it is for people to become more skeptical and to stay on top of their elected representatives when it shows up as justification for regulation.
I think metastudies are given such weight simply because of the name — studies are scientific, metastudies are beyond science in some way. Prior to Sars-COV-2 there were a number of studies, some were even randomized trials, regarding the efficacy of masking. No one ever found significant benefit for masking, even high-quality masks, in the general public against spread of respiratory viruses. One RCT I read didn’t find value in them even when used in the operating theater. With the Covid epidemic becoming so extremely political, people devoted to masking began assembling metastudies of these earlier studies and on occasion would report some efficacy. I mean it was an effort of just keep on metaing until we find a result we want. But on close inspection there were problems. The metastudies would mix results of very different quality together — unpublished data mixed with published, casual studies with RCT, reliance on modeling, contingency tables with zero value entries, etc. One even managed to mix up the control and treatment arms of a study. None of it weighted for quality.
The very worst “study” imaginable was what Deborah Birx suggested was the most convincing evidence of the value of masks – the beauty salon study. Look it up. It’s evidence that Public Health has slowly evolved over time to become a Public Menace.
This is yet another symptom of the sickness of our society today in which maintaining appearances is more important than discovering truth. Collectively we are becoming more venal and childish every year. Sometimes when I think of the future (that seemed so bright when I was young) I feel deep despair. Then I remember some of the best advice I ever received: Don’t let the bastards (and the stupid) grind you down.