Guest Essay by Kip Hansen – 6 October 2019
“Epidemiology is the study and analysis of the distribution (who, when, and where), patterns and determinants of health and disease conditions in defined populations.”
“It is the cornerstone of public health, and shapes policy decisions and evidence-based practice by identifying risk factors for disease and targets for preventive healthcare.”
That’s the Wiki speaking on the subject.
And that is precisely where the broad field of epidemiology has gone wrong.
WARNING: This is a long essay. Not a quick read — about 5,000 words — a twenty-minute read for most. Put it aside for reading at your leisure. Let me know in comments if you find it was worth your time.
If we ask what epidemiologists mean by “risk factors” we find the fatal flaw:
“Risk factors or determinants are correlational and not necessarily causal, because correlation does not prove causation. For example, being young cannot be said to cause measles, but young people have a higher rate of measles because they are less likely to have developed immunity during a previous epidemic. Statistical methods are frequently used to assess the strength of an association and to provide causal evidence (for example in the study of the link between smoking and lung cancer). Statistical analysis along with the biological sciences can establish that risk factors are causal.”
“Some prefer the term risk factor to mean causal determinants of increased rates of disease, and for unproven links to be called possible risks, associations, etc.” — from the Wiki
Why do I describe the above as “where the broad field of epidemiology has gone wrong”? First, statistical analysis cannot, ever, establish causality in this field. To see why this is so, let’s look at what John P.A. Ioannidis (see here and here) wrote just last year about nutritional epidemiology in the Journal of the American Medical Association (JAMA) — an editorial piece titled: “The Challenge of Reforming Nutritional Epidemiologic Research” [ pdf courtesy of Columbia University ]:
“Some nutrition scientists and much of the public often consider epidemiologic associations of nutritional factors to represent causal effects that can inform public health policy and guidelines. However, the emerging picture of nutritional epidemiology is difficult to reconcile with good scientific principles. The field needs radical reform.”
“In recent updated meta-analyses of prospective cohort studies, almost all foods revealed statistically significant associations with mortality risk. Substantial deficiencies of key nutrients (eg, vitamins), extreme over consumption of food, and obesity from excessive calories may indeed increase mortality risk. However, can small intake differences of specific nutrients, foods, or diet patterns with similar calories causally, markedly, and almost ubiquitously affect survival?”
Ioannidis’ findings and his question are both very apt. What is he saying here? He is saying that when they looked at epidemiological nutrition studies, almost every food item examined had “statistically significant associations with mortality risk.” In other words, everything we eat is either killing us faster and sooner or making us live longer, and he asks whether that is really possible.
So, what is going on here? The first thing that is going on is that epidemiologists are being lazy — by this I mean that in many of these studies, the study design involves looking at a single dietary factor (sometimes a single dietary item) — almost always from some broad general health study database such as the European Prospective Investigation into Cancer and Nutrition (EPIC) or, in the United States, the Nurses’ Health Studies (NHS) — and comparing that dietary factor to “All Cause Mortality”. All Cause Mortality means simply death by any cause. There are a lot of causes of death — the official list is called ICD-10-Cause of Death. [ pdf ] So, in the following example from Ioannidis, a study looked at a database that included the self-reported daily/weekly/monthly dietary intake of hazelnuts by a huge number of people who filled out dietary surveys at some time many years ago (maybe even only once) and then checked death indexes (in the U.S., they use the Social Security Death Index) to see which individuals had died and when. The epidemiologists then used statistical analysis techniques to determine how those hazelnuts affected life-span. The results of such studies? (quoting Ioannidis as above):
“Assuming the meta-analyzed evidence from cohort studies represents life span–long causal associations, for a baseline life expectancy of 80 years, eating 12 hazelnuts daily (1 oz) would prolong life by 12 years (ie, 1 year per hazelnut), drinking 3 cups of coffee daily would achieve a similar gain of 12 extra years, and eating a single mandarin orange daily (80 g) would add 5 years of life. Conversely, consuming 1 egg daily would reduce life expectancy by 6 years, and eating 2 slices of bacon (30 g) daily would shorten life by a decade, an effect worse than smoking. Could these results possibly be true?”
Of course they can’t! One of the reasons is that hazelnuts do not prevent, for instance, accidents — which are the Number 3 cause of death in the United States and represent about 6% of all-cause deaths. It is hard to imagine any plausible way in which eating hazelnuts could even contribute to the prevention of accidents (they could cause choking, if not chewed properly, but that is a separate cause of death). There is no biological plausibility to the idea that eating hazelnuts somehow prevents both (or either) heart disease and cancer — the Number 1 and 2 killers (though they may be a possible contributory factor to either a benefit or a harm, through some unknown pathway). These three Causes of Death alone account for 50% of all “All Cause Mortality” in the United States. Any study that analyzes individual food items or diet components against All Cause Mortality is flawed before it begins because it is looking at an end-point that is known not to be caused by the intervention (food item). You’ll see the significance of this later….
“In 2017, the 10 leading causes of death were, in rank order: Diseases of heart; Malignant neoplasms [ cancer ] ; Accidents (unintentional injuries); Chronic lower respiratory diseases; Cerebrovascular diseases [ stroke ]; Alzheimer disease; Diabetes mellitus; Influenza and pneumonia; Nephritis, nephrotic syndrome and nephrosis [ kidney diseases ]; and Intentional self-harm (suicide). They accounted for 74% of all deaths occurring in the United States.” — CDC [ pdf ] [bracketed explanations for clarity — kh]
So, if an epidemiologist is weighing consumption of hazelnuts against All Cause Mortality then (quoting Ioannidis again):
“These implausible estimates of benefits or risks associated with diet probably reflect almost exclusively the magnitude of the cumulative biases in this type of research, with extensive residual confounding and selective reporting.”
“Almost all nutritional variables are correlated with one another; thus, if one variable is causally related to health outcomes, many other variables will also yield significant associations in large enough data sets. With more research involving big data, almost all nutritional variables will be associated with almost all outcomes. Moreover, given the complicated associations of eating behaviors and patterns with many time-varying social and behavioral factors that also affect health, no currently available cohort includes sufficient information to address confounding in nutritional associations.” [ emphasis added – kh ]
Bottom Line: Given the extremely complex and only vaguely understood details of human nutrition, and the correlations between those innumerable diet variables, these types of studies will find correlations and associations between [almost] all the variables and all possible outcomes. In other words, these Big Data nutritional studies are magical — and can be used to produce almost any outcome for any variable.
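[ A toy demonstration of how easily this happens, even with pure chance and no real effects at all: the sketch below screens 200 made-up “dietary variables” (all random noise by construction) against an equally random “outcome”. The situation Ioannidis describes is worse still, because real dietary variables are also correlated with one another — kh ]

```python
import math
import random

random.seed(42)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

N_PEOPLE = 500   # size of the imaginary cohort
N_FOODS = 200    # number of imaginary dietary variables screened

# Pure noise by construction: no "food" has any real effect on the "outcome".
outcome = [random.gauss(0, 1) for _ in range(N_PEOPLE)]
foods = [[random.gauss(0, 1) for _ in range(N_PEOPLE)] for _ in range(N_FOODS)]

# Large-sample two-sided 5% significance threshold for |r|: about 1.96/sqrt(n).
r_crit = 1.96 / math.sqrt(N_PEOPLE)

hits = sum(1 for food in foods if abs(pearson_r(food, outcome)) > r_crit)
print(f"'Significant' diet-mortality associations found: {hits} of {N_FOODS}")
# At a 5% false-positive rate we expect roughly 10 purely spurious "findings".
```

Every one of those hits would make a publishable headline, and not one of them is real.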
We already know, from long experience, not to rely on single studies and thus we can avoid the “Single Study Syndrome”. The oft-provided solution to the Single Study Syndrome is to do meta-analysis studies — a study in which the findings, both qualitative and quantitative, of many studies on the same topic are combined “to develop a single conclusion that has greater statistical power.” [ source ]. Sounds like a grand idea, doesn’t it? We look at lots of studies on hazelnuts or coffee or mandarin oranges, combine their results, re-do the statistical analyses, and see what’s what.
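[ For the curious, here is roughly how that pooling works: a minimal inverse-variance, fixed-effect sketch using invented numbers for three hypothetical cohort studies of the same food item. Note that pooling narrows the confidence interval (that is the “greater statistical power”), but it also pools whatever biases the studies share — kh ]

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling of per-study estimates,
    here log relative risks, into one combined estimate."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Invented log relative risks (and standard errors) from three
# hypothetical cohort studies of one food item:
log_rrs = [math.log(0.90), math.log(0.85), math.log(0.95)]
ses = [0.08, 0.06, 0.10]

pooled, pooled_se = fixed_effect_meta(log_rrs, ses)
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
# Prints: Pooled RR = 0.88 (95% CI 0.81-0.96) -- a tighter interval than
# any single study, whatever shared biases the three studies carry.
```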
But, Ioannidis has this to say about that:
“…In an inverse sequence, instead of carefully conducted primary studies informing guidelines, expert-driven guidelines shaped by advocates dictate what primary studies should report. Not surprisingly, an independent assessment [ pdf here ] by the National Academies of Sciences, Engineering, and Medicine of the national dietary guidelines suggested major redesign of the development process for these guidelines: improving transparency, promoting diversity of expertise and experience, supporting a more deliberative process, managing biases and conflicts, and adopting state-of-the-art processes.”
So, here we find that the studies being done find “non-science-ical” results — obviously invalid findings — and yet they still get reported and published in the journals. These individual studies, which are capable of finding almost any outcome desired (or outcomes that the authors’ biases lead them to), are then combined into meta-analyses which end up being simply a reflection of the biases in the field of nutritional epidemiology, much of it driven by advocates who dictate what primary studies should be about and what findings they should report.
Does this strike you as Sound Science?
It is not sound science — it is a mockery of sound science.
“Beyond food studies, results of single-nutrient studies have largely failed to be corroborated in randomized trials. False-positive associations are common in the literature. For example, updated meta-analyses of published data from prospective cohort studies have demonstrated that a single antioxidant, beta carotene, has a stronger protective effect on mortality than all the foods mentioned above. The relative risk of death for the highest vs lowest group of beta carotene levels in serum or plasma was 0.69 (95% CI, 0.59-0.80). Even when measurement error is mitigated with biochemical assays (as in this example), nutritional epidemiology remains intrinsically unreliable. These results cannot be considered causal, especially after multiple large trials have yielded CIs [confidence intervals] excluding even a small benefit.” [emphasis added — kh ] (quoting, again, Ioannidis)
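[ A note on reading that “0.69 (95% CI, 0.59-0.80)”: a relative risk compares the death rate in the highest-exposure group to that in the lowest. The sketch below uses invented cohort counts chosen to reproduce the headline 0.69; the actual study counts would give the quoted interval — kh ]

```python
import math

def relative_risk(deaths_exp, n_exp, deaths_unexp, n_unexp):
    """Relative risk of the exposed vs unexposed group, with a 95% CI
    computed on the log scale (the standard large-sample formula)."""
    rr = (deaths_exp / n_exp) / (deaths_unexp / n_unexp)
    se_log = math.sqrt(1 / deaths_exp - 1 / n_exp
                       + 1 / deaths_unexp - 1 / n_unexp)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Invented counts: 138 deaths among 2,000 people in the highest beta
# carotene group vs 200 deaths among 2,000 in the lowest group.
rr, lo, hi = relative_risk(138, 2000, 200, 2000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# Prints: RR = 0.69 (95% CI 0.56-0.85)
```

A relative risk of 0.69 means a 31% lower death rate in the highest group — which is exactly the kind of strong “protective effect” that later randomized trials failed to find.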
Although I could really use those extra 12 years that are to be gained by eating hazelnuts, I have to admit that It Is Not True in the Real World.
The mass media — and all those health advocates and advocate-journalists of various stripes — have been agog with the news of a new study out of Europe. Oh, yes, a BIG study — 451,743 people — a big cohort study (exactly the type being discussed by Ioannidis above). The details of the study are just too … I was going to say “imaginary” but thought better of it. Instead, here they are in capsule form:
“Question: Is regular consumption of soft drinks associated with a greater risk of all-cause and cause-specific mortality?
Findings: In this population-based cohort study of 451 743 individuals from 10 countries in Europe, greater consumption of total, sugar-sweetened, and artificially sweetened soft drinks was associated with a higher risk of all-cause mortality. Consumption of artificially sweetened soft drinks was positively associated with deaths from circulatory diseases, and sugar-sweetened soft drinks were associated with deaths from digestive diseases.
Meaning: Results of this study appear to support ongoing public health measures to reduce the consumption of soft drinks.” — [ source ]
In this study, we see that Ioannidis is proved correct in all aspects of his criticism of Nutritional Epidemiology. The “question” (hypothesis) of the study is pre-determined by current advocacy against “soft drinks”, a large and varied general category of popular carbonated beverages. Sure enough, because currently practiced nutritional epidemiology using large cohort studies allows the finding of [almost] any association desired, they find that “greater consumption of total, sugar-sweetened, and artificially sweetened soft drinks was associated with a higher risk of all-cause mortality.” When they then shine their statistical packages on more general classes of cause of death, they still find “artificially sweetened soft drinks … positively associated with deaths from circulatory disease” and “sugar-sweetened soft drinks … associated with deaths from digestive diseases”.
[ As an interesting note, these associations, after being “adjusted” for a dozen or so possible confounders, are non-linear — that is, J-shaped, with low consumption appearing to improve survival. The abstract of this study is here — one needs to get a copy of the full study pdf and download the supplemental information to see the non-linear graphs — note that the first two comments, which appear under the abstract, agree with Ioannidis. ]
Thus, we find Ioannidis’ statement that “almost all nutritional variables will be associated with almost all outcomes” seems to be validated. Further, “Results of this study appear to support ongoing public health measures to reduce the consumption of soft drinks” indeed [as per Ioannidis] “reflect[s] almost exclusively the magnitude of the cumulative biases” and is simply in support of “expert-driven [already existing] guidelines shaped by advocates dictat[ing] what primary studies should report.”
Further, Ioannidis states: “Individuals consume thousands of chemicals in millions of possible daily combinations. … Disentangling the potential influence on health outcomes of a single dietary component from these other variables is challenging, if not impossible.”
Nutritional Epidemiology has been outed by Ioannidis and we now have an explanation of the “nutritional science whipsaw effect” — which we see as the always popular: “One week drinking coffee is good for you, and the next week it is lethal”. This effect is so prevalent that Ioannidis concludes “Nutritional research may have adversely affected the public perception of science.”
The important point of all this about nutritional epidemiology is not just that the findings published in our local newspapers, heralded in the TV news and echo-chambered ad nauseam on the ’Net are at best almost entirely misleading (to avoid the simpler word “wrong”) — it is the reason these results fail to inform us of anything true about nutrition — what we should eat or avoid eating:
The scientific and statistical methods used in today’s nutritional epidemiology are not capable of correctly informing us of the truths they are claiming — the causal relationships between dietary items and health outcomes.
The proponents of this type of nutritional epidemiology are fooling themselves, particularly by looking at end points (effects) that are not directly and causally biologically connected to the intervention (dietary food item) — far too often focusing on All Cause Mortality or vague large classes of disease (such as cardiovascular disease or cancer). When some dietary item is then statistically found to be beneficial or harmful, these effects are often justified by researchers with Kiplingesque “Just So…” stories to explain the finding.
For diet soda:
“Experimental evidence conducted in animals and humans has shown that artificial sweeteners disrupt the composition of gut microbes (that is, the gut microbiota) in a direction that could lead to obesity, glucose intolerance, diabetes and, ultimately, cardiovascular disease. Artificial sweeteners may also cause biological changes in the brain that influence satiety and weight gain.” [ source ]
And, finally: Climate Science?
How does a better understanding of the problems found in nutritional epidemiology offer us any insight into the field of Climate Science?
At the core of both fields is the issue of causality.
Causation indicates that one event is the result of the occurrence of the other event; i.e. there is a causal relationship between the two events. This is also referred to as cause and effect. [ source ]
As Ioannidis has pointed out in nutritional epidemiology: the specific scientific methods (large cohort studies based on food frequency surveys) and resultant statistical analyses are fundamentally incapable of ferreting out the individual effects on human health of individual, or classes of, dietary factors — they cannot discover causality. This is both a methodological problem and a result of the object of the study — human nutrition. The complexity of, and incredible variation in, human diets, the interplay between dietary intake and the myriad positive and negative health effects of those dietary components, their interaction with each other, and a near-infinite number of environmental, genetic, and societal factors make it very hard for nutritional epidemiology to discover all but the largest of effects (such as those seen in poisoning by strychnine or the development of clinical vitamin deficiencies).
Similarly, for climate science, the object of study, the Earth’s climate system, is not only exceptionally complex but also chaotic. First, we have to understand that, as we see in nutrition science, climate comprises hundreds of interacting components, each changing on time scales ranging from seconds to centuries, each an influencing and causal factor for the others — all correlated in ways we often (almost always) do not fully understand. And, as in nutrition science, almost all climate variables are correlated with one another; thus, if one variable is found to be correlated to some weather/climate outcome, many other variables will also yield significant associations in the huge present-time and historical data sets relating to Earth’s weather and climate.
Thus we find the situation, unacknowledged by most of the climate science field, that [paraphrasing Ioannidis] “Disentangling the potential influence on medium to long range climate outcomes of a single climatic factor, such as atmospheric GHG concentrations, from these myriad other variables is challenging, if not impossible” based simply on the complexity of the climate itself.
This is further complicated by the fact that the climate system itself is known to be chaotic and thus highly resistant to the prediction of future states.
Edward Lorenz’s work on the topic culminated in the publication of his 1963 paper “Deterministic Nonperiodic Flow” in the Journal of the Atmospheric Sciences, and with it, the foundation of chaos theory. He states in that paper:
“Two states differing by imperceptible amounts may eventually evolve into two considerably different states … If, then, there is any error whatever in observing the present state—and in any real system such errors seem inevitable—an acceptable prediction of an instantaneous state in the distant future may well be impossible….In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be nonexistent.”
That is to say, the physics of the climate system themselves are chaotic (in the special sense used in the field of study known as Chaos Theory). Further, the Earth climate system comprises two coupled chaotic systems — the atmosphere and the oceans. This is not controversial at all, but rather well-known and widely acknowledged by nearly everyone in the field:
“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future exact climate states is not possible.”
And because the complexity and chaotic nature of the dual system prevent the normal course of scientific inquiry — the discovery of predictable effects of known causes — IPCC-style climate science calls for:
“Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.”
[ quotes from: IPCC WG1 TAR ]
What does it mean that the climate system is chaotic? It means that small changes in one place or one climate component can cause small or large changes in another. Volcanic eruptions in Southeast Asia can change next year’s weather in Europe. Large wildfires in America’s Northwest can change weather in Australia. Sun spots, or the lack of them, may or may not be changing the climate now.
What does chaos have to do with numeric climate models? Climate models are numeric representations of little pieces of the climate, which all feed into one another. The models themselves are fantastically, possibly preternaturally, complex. Many, possibly most, of the mathematical formulas necessary to simulate the relationships and interactions between the many components of the climate system are nonlinear differential equations which do not lend themselves to solution, and thus must be simplified before being used in the climate model. These simplified formulas are mere approximations of the real relationships.
Nonlinear differential equations are often extremely sensitive to initial conditions… thus one gets the results seen in NCAR’s 40 Earths Project. NCAR claims that the results are “a staggering display of Earth climates that could have been along with a rich look at future climates that could potentially be.” What their study actually demonstrates is that Edward Lorenz was absolutely right — climate models are and will always be extremely sensitive to initial conditions and will produce hugely different results over mid- to long-term runs even when starting points differ by less than one-trillionth of a degree (in this case, in the global average surface temperature).
The solution of the IPCC and most climate modelers is to focus on “the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.” This sounds like very sound science — but is, in light of Chaos Theory, nonsensical, and offers no real world prediction or projection at all. Dr. Robert G. Brown, physicist at Duke University, explains why (at length) in Real Science Debates Are Not Rare. [For his discussion of the point about climate models, start reading at his sentence “At the moment, I’m reading Gleick’s lovely book on Chaos”.]
There is much, much more to this idea of the complexity and chaotic nature of the climate system and what that means for climate models. Some of this has recently been allowed to come to light in a book (in Japanese, with an introduction and appendix in English) by Dr. Mototaka Nakamura, himself a career-long climate modeler: “The Global Warming Hypothesis is an Unproven Hypothesis” (an eBook version is available for 99 cents). From the appendix in English:
“All climate simulation models have many details that become fatal flaws when they are used as climate forecasting tools, especially for mid- to long-term (several years and longer) climate variations and changes. These models completely lack some of critically important climate processes and feedbacks, and represent some other critically important climate processes and feedbacks in grossly distorted manners to the extent that makes these models totally useless for any meaningful climate prediction.”
Donahue and Caldwell (2018) explain what happens when the order of processing is changed in climate models — you get different results! They have a cute PowerPoint that illustrates the problems. Erica Thompson and Leonard Smith, at the London School of Economics’ Centre for the Analysis of Time Series have discussed The Hawkmoth Effect in terms of climate models. They too have a poster.
“What is the Hawkmoth Effect? The term “butterfly effect”, coined by Ed Lorenz, has been surprisingly successful as a device for communication of one aspect of nonlinear dynamics, namely, sensitive dependence on initial conditions (dynamical instability), and has even made its way into popular culture. The problem is easily solved using probabilistic forecasts. [ a point with which I disagree — kh ] A non-technical summary of the Hawkmoth Effect is that “you can be arbitrarily close to the correct equations, but still not be close to the correct solutions”. The less media-friendly hawkmoth does not get as much attention as its celebrated butterfly cousin. However, it is not yet accounted for by modern methods. Due to the Hawkmoth Effect, it is possible that even a good approximation to the equations of the climate system may not give output which accurately reflects the future climate. Climate decision-makers and climate model developers must take this into account.”
Thompson and Smith are willing to let “probabilistic forecasts” serve as a handling for The Butterfly Effect [it is not really one, see Dr. R. G. Brown above — kh] but there is no getting around The Hawkmoth Effect.
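[ The Hawkmoth Effect — perturbing the equations rather than the starting point — can be demonstrated on a toy system too. Below, the logistic map stands in for a “model”: two versions whose single parameter differs by one part in a billion are started from the identical initial condition — kh ]

```python
def trajectory(r, x0=0.4, steps=80):
    """Iterate the logistic map x -> r*x*(1-x), a toy chaotic 'model'."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Same initial condition; equations differing by one part in a billion
# in the single model parameter r (a structural change, not an
# initial-condition change).
exact = trajectory(3.9)
nearby = trajectory(3.9 + 1e-9)

gap = max(abs(a - b) for a, b in zip(exact, nearby))
print(f"largest divergence between the two 'models': {gap:.2f}")
# Being arbitrarily close to the "correct" equations did not keep the
# output close to the "correct" solution.
```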
So we see climate models facing a set of scientifically strong arguments against their efficacy:
- Lorenz and The Butterfly Effect — extremely strong sensitivity to initial conditions.
- The Processing Order problem — “There is no ‘correct’ process ordering… and process order has a big impact on model behavior”.
- The Hawkmoth Effect — “Structural instability of complex dynamical systems” — tiny differences in the equations used in models result in different model output (forecasts).
- Missing and/or mathematically misrepresented processes or feedbacks in climate models result in meaningless predictions.
- The complexity and internal correlation between the myriad components of the climate system itself inhibit the discovery of the causal roles of individual components of the climate system.
None of these five factors actually prevents us from improving our understanding of the climate of today or how it works. A great deal of very good science is taking place in an attempt to figure out how the climate works, what relationships exist between atmospheric and oceanic modes and their cycles, how clouds form and why, the relationship between the Sun and atmospheric phenomena, and many other important questions. The five factors just make that work harder.
Each of these five factors has direct bearing on the question of causality in climate science — what causes what and when. IPCC-style climate science is focused almost in its entirety on one single climate causal factor: greenhouse gas concentrations in the atmosphere. This single cause is then hard-coded into climate models to produce “projections” of possible future climate states. These predictions/projections are then presented as proof of the necessity of implementing the proposed social and political solutions that preceded the science by decades.
Like Nutritional Epidemiology, we see, in IPCC-style climate science, a system which thus “reflect[s] almost exclusively the magnitude of the cumulative biases” of the field and is simply in support of “expert-driven [IPCC reports and policy recommendations] guidelines shaped by advocates [including the IPCC itself among many others] dictat[ing] what primary studies should report.” Because climate science is such a young field, and so much is still unknown, the field has been driven by policy advocacy, funding and publication bias, and social pressure on climate scientists to conform, and models which are known to be unfit-for-purpose have been used to reinforce the “necessity” of the social/economic/political solutions proposed by the IPCC by predicting catastrophic futures, including the imminent demise of human civilization.
Also like Nutritional Epidemiology, the difficulties in discovering causality in climate science have led experts to make loud public policy pronouncements and predictions, not based on science but on preferred policy outcomes, which have, with the passage of time, repeatedly and consistently failed to come to pass. In nutrition, this is the whipsaw effect: butter is bad, eat margarine — oops! — margarine is bad, eat butter. In climate science, we have had:
- “James Hansen of NASA’s Goddard Institute of Space Studies beginning in 1988 predicted major droughts and up to six feet of sea level rise in the 1990s. One reporter recalled that in the late 1980s, he asked Hansen in his Manhattan office whether anything in the window would look different in 20 years. Hansen replied, “The West Side Highway [which runs along the Hudson River] will be under water. And there will be tape across the windows across the street because of high winds.”” [ source ]
- “Al Gore predicted in 2009 that the North Pole would be completely ice free in five years. A U.S. Navy scientist in 2013 concluded that the Arctic’s summer sea ice cover would all be melted by 2016.” [ source ]
- “ABC News ran a segment in 2008 promoting a movie called Earth 2100. Some predictions to scare us to buy the propaganda were gas reaching $9 per gallon, $12.99 cartons of milk, and New York City — engulfed by water in 2015.” [source ]
The failed climate predictions are so ubiquitous that they have become a standing joke among the general public, at least in the United States. The constant drumbeat of catastrophic climate change predictions has, again as with nutrition science, in all probability harmed the public perception of Science in general, and continues to do so in the present.
There are many working in the Sciences to try to bring about changes in the way science is done and the way it is reported. These efforts to bring about corrections are often fought by the purveyors of the science field’s status quo.
It is unfortunate that many of those fighting against needed changes are government agencies and professional science and medical associations that would have the most to gain from better science. Like the advocacy groups that have staked out positions on various topics in nutrition science, and used their trusted positions to influence policy makers to create public health guidelines in keeping with their advocacy planks, the IPCC and associated social and political advocacy groups have seized control of public climate policy advocacy and are demanding that governments set policy conforming with their advocated social and political goals. Not only is this confounding of science with social politics bad for science — it is bad for public policy.
Daniel Sarewitz, a professor of science and society at Arizona State University’s School for the Future of Innovation and Society and the co-director of the university’s Consortium for Science, Policy, and Outcomes, wrote in an article in The New Atlantis (Spring/Summer 2016) titled “Saving Science”:
“In the future, the most valuable science institutions will be closely linked to the people and places whose urgent problems need to be solved; they will cultivate strong lines of accountability to those for whom solutions are important; they will incentivize scientists to care about the problems more than the production of knowledge. They will link research agendas to the quest for improved solutions — often technological ones — rather than to understanding for its own sake. The science they produce will be of higher quality, because it will have to be. The current dominant paradigm will meanwhile continue to crumble under the weight of its own contradictions, but it will also continue to hog most of the resources and insist on its elevated social and political status.”
There have been some efforts to accomplish the ideals set out by Sarewitz in various fields, in addition to those of the CSPO. Ocean Acidification has had several efforts to correct the methods and reporting of OA research (pdf here and pdf here, reported by me here and here ) and Social Psychology has seen similar efforts (examples here and here and here; and in book form here ). Another paper, “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant”, “demonstrate[s] how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis” and offers some possible solutions. The Royal Statistical Society (and its American counterparts) has called for reform of the use of statistics in scientific research. Devang Mehta has called for reform of research publishing in Nature. These combined proposed methodological solutions can be applied to many fields.
Reading the popular science press, we see that, in real practice, many fields of science are still in the stage wherein the “current dominant paradigm …. continue[s] to hog most of the resources and insist on its elevated social and political status.”
Whatever your relationship is with Science — be it in research, education or science journalism — you can support good, careful, and rigorous science; you can tactfully call out poor science and bad science reporting; and you can lend your efforts and your voice to the task of reforming the Sciences, restoring their proper practices, and returning them to their proper place in society.
# # # # #
We can learn by comparing the problems in one scientific field to the problems in another — and hopefully see a way forward through the obstacles and impediments to the discovery of the underlying truths of the world around us.
Both Nutritional Epidemiology and Climate Science are filled with honest, hard-working thinkers and researchers. Still, the challenges presented by the need to publish, to get funding for research, to be accepted by their peers, and to achieve tenure and security in employment that will allow them to support themselves and their families can push them to produce results that, in the end, do not lead to real advancements in their field. We see this played out when those who retire from the academic field of battle only then become very honest and open about the problems with the biases being enforced in their research topic.
Share your experiences in the comments, if you can.
# # # # #