Guest post by S. Stanley Young and Warren Kindzierski
A 2015 meta-analysis published in the journal PLOS One claimed that “…exposure to potentially anti-estrogenic and immunotoxic, dioxin-like congeners & phenobarbital, CYP1A and CYP2B inducers might contribute to the risk of breast cancer”. We used p-value plots to evaluate the statistical reliability of this claim, and our plots show that the claimed PCB−breast cancer risk is false. One of us (Young) corresponded by email with PLOS One editorial staff over four years; the staff indicated that the statistical approach and methods used in the meta-analysis are considered acceptable.
The business model of many journals is that the author pays a publishing fee. This gives authors and publishers incentives that can work against sound science and the interests of the public. Part of the reason PLOS One editors show no real concern about publishing a false PCB−breast cancer risk claim (or any other exotic claim) based on meta-analysis may be that the publishing process is lucrative for them. Overall, this experience tells us it is time for the public to view any meta-analysis published in the journal PLOS One as untrustworthy until proven otherwise.
False findings in medical research are far too common. John Ioannidis and others noted back in 2011 that “traditional areas of epidemiologic research more closely reflect the performance settings and practices of early human genome epidemiology …showing at least 20 false-positive results for every one true-positive result”. We recently reported that published estimates of irreproducible medical research range from 51% to 100%, depending upon the discipline.
A meta-analysis is a method used to analyze evidence that answers a specific research question, such as whether a particular risk factor causes a disease. It combines test statistics from multiple individual studies found in the literature that all asked the same question. Meta-analysis is considered by many, perhaps mistakenly, to be the cream of the crop in methodologies for synthesizing evidence in published research (e.g., see here, here).
However, we have noted elsewhere – really, anywhere we care to look – that findings of meta-analysis studies in the environmental epidemiology field are mostly false and lack sound statistical support. For example, see here, here, here. Why is this? Among other things, it is due to routine use of questionable research practices (such as analysis manipulation, p-hacking, HARKing, etc.).
Here we describe our experience of how journals and their editors work to preserve false findings in meta-analysis studies they publish. We show this using a 2015 meta-analysis published in the journal PLOS One.
PLOS One meta-analysis
Back in 2018, one of us, Young, looked at a meta-analysis published in the journal PLOS One… “Environmental polychlorinated biphenyl exposure and breast cancer risk: A meta-analysis of observational studies” (Zhang et al. 2015). The meta-analysis claimed that “…exposure to potentially anti-estrogenic and immunotoxic, dioxin-like congeners & phenobarbital, CYP1A and CYP2B inducers might contribute to the risk of breast cancer”. This claim seemed rather fantastic given that it was based on environmental epidemiology observational studies.
We have reported on how to independently evaluate the statistical reliability of meta-analysis studies using p-value plots (see here). A p-value plot is straightforward to construct and is interpreted in the following way… if the p-values roughly fall on a 45-degree line in the plot, they support randomness (no real effect). If the p-values are mostly smaller than 0.05, they support a real effect. A bilinear (hockey-stick-shaped) p-value plot indicates ambivalence (uncertainty) about an effect.
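The construction is simple enough to sketch in a few lines. The p-values below are hypothetical, chosen only to illustrate what a "no effect" pattern looks like; they are not data from Zhang et al.

```python
# Minimal sketch of a p-value plot, assuming a hypothetical set of
# p-values from base studies (NOT the Zhang et al. data).
p_values = [0.62, 0.08, 0.45, 0.91, 0.17, 0.33, 0.74, 0.25, 0.55, 0.83]

n = len(p_values)
ranked = sorted(p_values)                          # order statistics, smallest first
expected = [i / (n + 1) for i in range(1, n + 1)]  # uniform quantiles under the null

# Under "no real effect", p-values are uniform on (0, 1), so the sorted
# values track the expected quantiles: the points fall near a 45-degree line.
for exp, obs in zip(expected, ranked):
    print(f"expected {exp:.2f}  observed {obs:.2f}")
```

Plotting `ranked` against rank gives the p-value plot; points hugging the diagonal support randomness, while a cluster of points below 0.05 supports a real effect.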
Young emailed a Zhang et al. co-author from China and cc’d a PLOS One editor from the US asking for further information about their Figure 2 (Forest plot describing the association between total PCB exposure and breast cancer risk). Young constructed a p-value plot from their Figure 2 data. It is shown below. The plot clearly shows a near 45-degree line or no real effect between total PCB exposure and breast cancer risk!
P-value plot for base studies describing the association between total PCB exposure and breast cancer risk (Zhang et al. Figure 2):
Young then emailed a publications assistant at PLOS One, attached the p-value plot, and stated that the Zhang et al. PCB−breast cancer claim was not supported by the p-value plot. By that time PLOS One had opened a case file on the issue. The publications assistant passed along Young’s concern to the Academic Editor who had originally handled the manuscript.
Now fast forward four years, to April of this year. A PLOS One staff editor finally emailed Young back. The staff editor indicated that the PLOS One Editorial Board with expertise in meta-analysis had looked further into Young’s concern. The staff editor stated … “Based on this assessment, we consider that the authors do not imply an effect of total PCB exposure on breast cancer, based on the data in Figure 2. In light of this, we will not be pursuing this case further at this time.”
Young then emailed the staff editor back and explained that the multiple testing Zhang et al. performed increased the chances of their getting a statistically significant, but false, finding among their results. Several days later a different staff editor responded to Young by email and stated … “PLOS ONE abides by guidelines set forth by the Committee on Publication Ethics (COPE), of which this journal is a member. We have followed up on these additional concerns in consultation with the Editorial Board, which assessed that the statistical approach and meta-analytical methods used are considered acceptable. Therefore, no editorial action will be taken on the published article.”
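The multiple-testing point is easy to verify with simple arithmetic: if each of k independent tests of a true null hypothesis has a 5% chance of producing a "significant" result, the chance of at least one such false positive grows rapidly with k. A minimal illustration (the test counts below are arbitrary):

```python
# Probability of at least one false-positive "significant" result when
# running k independent tests of true null hypotheses at alpha = 0.05.
alpha = 0.05
for k in (1, 5, 10, 20, 50):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:>2} tests -> P(at least one p < {alpha}) = {p_any:.2f}")
```

By 20 tests the chance of at least one spurious "significant" result is already well over half, even when no real effect exists.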
Four years, a couple of vague follow-up emails from two PLOS One staff editors, and no correspondence from anyone with actual statistical knowledge of the problems associated with multiple testing. Now we really wanted to know how deep the statistical problems went in the Zhang et al. study.
We constructed a p-value plot from their Figure 4 (Forest plot describing the association between potentially antiestrogenic and immunotoxic, dioxin-like PCBs exposure and breast cancer risk). We also did a plot from their Figure 5 (Forest plot describing the association between phenobarbital, CYP1A and CYP2B inducers, biologically persistent PCBs exposure and breast cancer risk). Figures 4 and 5 represent the key evidence used by Zhang et al. to make their claim. Our p-value plots are shown below.
P-value plot for base studies describing the association between phenobarbital, CYP1A and CYP2B inducers, biologically persistent PCBs exposure and breast cancer risk (Zhang et al. Figure 5):
Both of these plots show near 45-degree lines or no real effects! We just independently proved that the Zhang et al. PCB−breast cancer risk claim is false. So much for the PLOS One Editorial Board with expertise in meta-analysis being able to recognize this. Perhaps their Board expertise is thin in the area of statistics or perhaps they do not want to admit that these problems exist in their published meta-analysis studies?
Misuse of statistics is an important contributor to false (irreproducible) research. Douglas Altman – one of the most highly cited researchers in any scientific discipline (see here) and a long-time chief statistical adviser for the British Medical Journal – noted way back in 1998 that… “The main reason for the plethora of statistical errors [in research] is that the majority of statistical analyses are performed by people with an inadequate understanding of statistical methods” and “…they are then peer reviewed by people who are generally no more knowledgeable”. It appears not much has changed.
Big money business of publishing meta-analysis studies
We know most published research is false; but just how motivated are journals (and their editors) to fix this? Let’s look at the case of meta-analysis. We used the Advanced Search Builder capabilities of freely available PubMed to estimate the number of systematic review and meta-analysis studies published in the journal PLOS One from 2012 to present (3 May 2022). We used the exact terms (“PLOS One”[Journal]) AND ((systematic review[Title/Abstract]) AND (meta-analysis[Title/Abstract])).
Our search returned 2,484 articles (240 articles per year; 20 per month). PLOS One currently levies a fee of $1,805 USD to publish a meta-analysis original research article. This amounts to ~$36K per month (~$430K annually) in revenue from publishing meta-analysis studies going forward – a cash cow!
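The arithmetic behind these figures is straightforward; the sketch below reproduces it, with the time span of the search window approximated:

```python
# Back-of-envelope check of the revenue figures quoted above.
articles = 2484     # PubMed hits, 2012 to 3 May 2022
years = 10.33       # approximate span of the search window
fee_usd = 1805      # PLOS One publication fee per research article

per_year = articles / years
per_month = per_year / 12
print(f"~{per_year:.0f} articles/year, ~{per_month:.0f} per month")
print(f"~${per_month * fee_usd:,.0f}/month, ~${per_year * fee_usd:,.0f}/year")
```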
We know the journal peer review process is broken and there is little incentive to fix it. We know the business model of journals depends on publishing, preferably lots of studies as cheaply as possible. We also know that journals seek novelty in research in part because of the competition for impact factor and prestige. In fact, editors are often rewarded for actions that increase the prestige of their journal.
Given all this, what journal would want to mess with ~$430K annual revenue publishing meta-analyses with claims that are mostly false? The answer is obvious… journals and their editors will do what is needed to maintain the status quo (even if it means repeatedly publishing false research claims). It is far too lucrative a game to want to change.
Richard Smith, a long-time editor of the British Medical Journal, was a cofounder of the Committee on Publication Ethics (COPE), for many years the chair of the Cochrane Library Oversight Committee, and a member of the board of the UK Research Integrity Office. Last year he best summarized how we should treat medical research… “It may be time to move from assuming that research has been honestly conducted and reported to assuming it to be untrustworthy until there is some evidence to the contrary”.
The business model of many journals gives authors and publishers incentives to work against sound science and the interests of the public. Our position is that it is time for the public to view any meta-analysis published in the journal PLOS One as untrustworthy until proven otherwise.
S. Stanley Young is with CGStat in Raleigh, North Carolina and is the Director of the National Association of Scholars’ Shifting Sands Project. Warren Kindzierski is a retired professor in St Albert, Alberta.