Guest Essay by Kip Hansen
We recently saw on this blog a guest essay from Joel O’Bryan regarding a paper from Fernbach et al. titled “Extreme opponents of genetically modified foods know the least but think they know the most”, which appears as a “Letter” in the journal Nature Human Behaviour. The paper itself is yet another social science study of “How can anyone fail to support/believe a proffered scientific consensus?”
My interest was piqued when I read the abstract of the study quoted in Joel O’Bryan’s essay. As it has been a couple of weeks, here’s the abstract to refresh your memory:
“There is widespread agreement among scientists that genetically modified foods are safe to consume and have the potential to provide substantial benefits to humankind. However, many people still harbour concerns about them or oppose their use. In a nationally representative sample of US adults, we find that as extremity of opposition to and concern about genetically modified foods increases, objective knowledge about science and genetics decreases, but perceived understanding of genetically modified foods increases. Extreme opponents know the least, but think they know the most. Moreover, the relationship between self-assessed and objective knowledge shifts from positive to negative at high levels of opposition. Similar results were obtained in a parallel study with representative samples from the United States, France and Germany, and in a study testing attitudes about a medical application of genetic engineering technology (gene therapy). This pattern did not emerge, however, for attitudes and beliefs about climate change.” [bold emphasis mine: kh]
Like Joel, I was surprised by the last sentence in the abstract; it seemed a total non sequitur. The abstract is about attitudes, beliefs and knowledge regarding GMOs (and mentions a parallel study that found a similar result for gene therapy) … and there at the end is this: “This pattern did not emerge, however, for attitudes and beliefs about climate change.”
Yes, I’m sure you see the same thing I do: The paper’s title and the rest of the abstract are all about GMOs yet the last sentence of the abstract tells us that they studied attitudes, beliefs and knowledge about Climate Change and the results were different than all-the-above found for GMOs. That’s all they give us in the abstract on the Climate Change part of the study.
So what’s the story here?
First, kudos go to Philip Fernbach and his team for ensuring that all the study data are available online at a data repository, the Center for Open Science. Here is the link to the full paper and here is a link to the Supplemental Information. Even the code used in the analyses is available.
Reading the full paper, one discovers that they have carried out a two-forked study: one fork concerning GMOs and one fork concerning Climate Change. But the majority of the paper as written focuses on and discusses the findings as related to GMOs — and the Climate Change portion of the study is given short shrift.
Why? Quite simply, the Climate Change part of the study had a null result.
There has been a lot written recently about biases in the scientific literature. One of these is publication bias, which arises because researchers and journals are much less likely to publish a paper reporting a null result: “In science, a null result is a result without the expected content: that is, the proposed result is absent.”
To verify that this finding about Climate Change is a case of a null result (one half of the study having had a null result, in this case), it is necessary to look at the Study Design: that all-important document that is prepared before any data are collected. Joe, Leif and Uri explain the what, why and how of pre-registering study designs here. The study design lays out carefully how the study is to be conducted, specifying in advance the hypotheses, the data to be collected and the analyses to be run.
We don’t see such documents very often and, because of that, studies not only suffer from p-hacking and post hoc analysis, but their authors get away with it. I can’t recall seeing a single published study design document for any climate change research, though this may just be a personal knowledge deficit. [If any reader knows of published, pre-registered study designs in climate change, please give me a link in the comments.]
The pre-registering of a study design is becoming a requirement at agencies such as the National Institutes of Health. An example of an NIH pre-registered study design can be found here, for “Melatonin Use for Sleep Problems in Alcohol Dependent Patients”.
Again, Fernbach et al. are to be commended for pre-registering their research for this study. It has been registered at the site AsPredicted. For this particular study, the study design is found here (pdf). From the study design document we find:
“2) What’s the main question being asked or hypothesis being tested in this study?
The key prediction is that the gap between objective and subjective knowledge widens as extremity of anti-scientific consensus beliefs increase. We further predict that as extremity increases, subjective knowledge increases, but objective knowledge, as measured by a battery of scientific literacy questions, decreases. We will test this in two domains: Genetically modified foods and climate change.”
What are they expecting to find?
They are expecting: “We will conduct all analyses separately for each question and after averaging the two questions for each issue. We expect the pattern to be similar across all analyses.”
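The authors’ actual analysis code is available in the repository. Purely as an illustration of the kind of pattern the pre-registration predicts, here is a minimal sketch in Python using fabricated data (every number below is invented for demonstration; none of it comes from the study):

```python
# Illustrative sketch only, NOT the authors' code: all data are fabricated.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Extremity of opposition on a 1-7 scale.
extremity = rng.integers(1, 8, n)

# Simulate the hypothesized pattern: objective knowledge falls with
# extremity, while self-assessed (subjective) knowledge rises, plus noise.
objective = 20.0 - 1.5 * extremity + rng.normal(0, 3, n)
subjective = 2.0 + 0.5 * extremity + rng.normal(0, 1, n)

def pearson_r(x, y):
    """Pearson correlation coefficient between two arrays."""
    return np.corrcoef(x, y)[0, 1]

print("extremity vs objective knowledge: ", round(pearson_r(extremity, objective), 2))
print("extremity vs subjective knowledge:", round(pearson_r(extremity, subjective), 2))
```

With data simulated to match the hypothesis, objective knowledge correlates negatively with extremity and self-assessed knowledge correlates positively; the pre-registered prediction is that both the GMO and Climate Change data would show this same pattern.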
So, they do the study — in the United States, France and Germany. Lo and behold, they find that on the topic of GMOs, they are spot on: “opponents of genetically modified foods know the least but think they know the most” — and this pattern is the strongest in the United States. The paper goes into a lot of detail on how they arrive at this conclusion (which turns out to be exactly what they expected to find).
Things did not go so well with the Climate Change arm of the study. Fernbach et al. don’t give us the nice graphs of the data for Climate Change — because for Climate Change, their expected pattern did not appear — they had a null result.
The researchers summarize the Climate Change finding thus: “Unlike beliefs about GM foods, climate change beliefs were highly polarized by political identification, with conservatives much more likely to oppose the scientific consensus than liberals.” “For climate change, the direction of the effects was the same [as was found for GMOs], but the results were not statistically significant. The lack of a relationship between scientific literacy and extremity of anti-scientific-consensus climate change beliefs is consistent with previous findings and we believe that this is attributable to the polarized nature of the climate change issue.”
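As an aside on what “the direction of the effects was the same, but the results were not statistically significant” means in practice: a weak correlation in the expected direction can easily fail a significance test. A minimal Python sketch (again with fabricated data, not the study’s) using a simple permutation test:

```python
# Illustrative sketch only, with fabricated data: a weak effect in the
# "right" direction that a permutation test does not call significant.
import numpy as np

rng = np.random.default_rng(1)
n = 60
extremity = rng.integers(1, 8, n)
# Tiny true slope buried in noise: same direction as the GMO result,
# but far weaker.
objective = 20.0 - 0.2 * extremity + rng.normal(0, 5, n)

r_obs = np.corrcoef(extremity, objective)[0, 1]

# Permutation test: shuffle one variable repeatedly to build the null
# distribution of the correlation, then count how often the shuffled
# |r| meets or exceeds the observed |r|.
perm_r = np.array([
    np.corrcoef(rng.permutation(extremity), objective)[0, 1]
    for _ in range(5000)
])
p_value = np.mean(np.abs(perm_r) >= abs(r_obs))
print(f"r = {r_obs:.2f}, two-sided p = {p_value:.2f}")
```

Whether a real dataset clears the significance bar depends on effect size and sample size; the point is only that “same direction” and “statistically significant” are different claims.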
Unfortunately, the authors do just what pre-registration of a study design is meant to help prevent: they perform an analysis that was not called for (meaning not designed into the original study) and they engage in some light HARKing (Hypothesizing After the Results are Known) and/or JARKing in the two sentences quoted above.
To their credit, they admit to the null result in the abstract — the last sentence which was quoted earlier: “This pattern did not emerge, however, for attitudes and beliefs about climate change.” A good, clear statement of the actual results on Climate Change.
But what they don’t do is run the exact same analysis on this half of the study as they did on the GMO half (or, if they did, it does not appear in the published paper or its SI). It would have been nice to see the Climate Change statistics illustrated in the same manner as those for GMOs and the overall (combined) findings. This, simply put, is publication bias writ small. Perhaps it is the result of the simple need to keep the paper within the publisher’s length limits, in which case maybe we shouldn’t make much of the omission. But then I would have expected at least to see these missing graphics in the SI.
What the authors do is try to explain [explain away?] why their hypothesis did not hold for the Climate Science half of the study.
Remember, their pre-registered hypothesis was basically as bluntly stated in the abstract: “Extreme opponents know the least, but think they know the most.” (The full wording was previously quoted.)
The authors perform an analysis that is not called for in the Study Design [“We will also collect the following demographic variables for each participant: age, gender, income, education, political ideology, and political party. We do this for completeness, but don’t plan to analyze any of these variables with respect to our hypotheses.“]: they analyze the climate change results by political leaning (liberal → conservative) and build a just-so-story explanation based on the findings of others [not their own study], saying “The lack of a relationship between scientific literacy and extremity of anti-scientific-consensus climate change beliefs is consistent with previous findings and we believe that this is attributable to the polarized nature of the climate change issue.”
When writing this paper they say “we believe…”, but that was not what they believed before the study was done.
That’s the value of pre-registration of study designs: researchers get caught out in these little attempts to trick or fool themselves. What they believed before collecting the data was: “We expect the pattern to be similar across all analyses.”
When the results were in, the authors were stuck with a null result for the Climate Change half of the study. They then fall back on the work of others to try to explain or justify why their hypothesis did not hold for Climate Change.
They did not include in this study anything about the polarization of issues, and the study generated no data showing political polarization of this issue. (There is no doubt that Climate Science is polarized along political lines, especially in the United States, but the authors knew this prior to the study and still thought they would find the same pattern in both halves of the study.) The authors present a new hypothesis after the results are in (“… attributable to the polarized nature of the climate change issue”) and base this post hoc hypothesis on the work of others, not on data generated by their own work.
My purpose in discussing this paper is not to protest or celebrate its findings but simply to point out the value of pre-registering study designs and placing all the data and code in a public repository. Doing so allows a much more nuanced evaluation of the findings by others and exposes where authors have stepped outside the proper protocols of research and engaged in a wee bit of HARKing, JARKing and/or p-hacking (the last not seen in this case).
Overall, Philip Fernbach, Nicholas Light and their three co-authors have done the right things: they pre-registered their study design and subsequently posted all data and code to publicly accessible repositories. Further, they bluntly acknowledge a null result for a full half of their originally planned study. With that many pluses, I am happy to give them a free pass on slipping off the track a little with the “explaining” in the discussion section of their paper.
# # # # #
It would be a major improvement in all the sciences, helping to curtail the Irreproducibility Crisis, if researchers worked up carefully crafted, precise Study Designs for every proposed research effort, posted them to a registration site for their scientific specialty, and then followed the design with scientific precision. This emerging practice is called for by the National Academies of Sciences, the Center for Open Science, the American Psychological Association and others.
# # # # #
Author’s Comment Policy:
Lots to talk about, both in regard to this particular paper and the steps that will be necessary to begin to end the Irreproducibility Crisis. Imagine a paper on historical surface temperature that pre-defined all of its steps, methods and proposed analyses in a pre-registered study design: no post hoc selection or rejection of data sets, no “let’s try some other analysis”, and no fudging, with everyone able to look over the researchers’ shoulders and see exactly what they are doing at every step.
I am not particularly concerned with the actual findings of the featured study; for climate science, the result was null. Note that it would be a mistake to claim a pro-skeptic result from the paper: the study design was only adequate to test the original hypothesis, which failed.
Happy to answer any questions.
# # # # #