
James D. Agresti
Transcript
Models & Lab Studies
If you want to determine how well a particular medicine works, what’s the worst possible type of study you can do?
Is it (a) an observational study, (b) a laboratory study, or (c) a randomized controlled trial?
The correct answer is “(b) a lab study.”
Before you demand a replay review, lab studies do serve an important purpose. In this Just Facts Academy lesson, we’ll define lab studies, see how they’re useful and how they’re dangerous, and show you how to keep them between the uprights.
Let’s get started.
A laboratory study is a type of research conducted in an environment with controlled conditions, whether it involves a beaker, particle accelerator, cage full of mice, crash test dummy, or living dummies.[1] [2] [3] [4]
These studies share an important similarity with models. No, not that kind of model.
I’m talking about a scientific model, which is “a physical, conceptual, or mathematical representation” used to “explain and predict the behavior of real objects or systems.”[5]
Models often take the form of computer simulations, like those used to predict the consequences of climate change.
Lab studies and models provide something that everyone wants: CONTROL. This allows researchers to isolate and study interactions between different variables.[6]
Sounds ideal, doesn’t it? Especially since one major pitfall of observational studies is that it’s often impossible to control for all the variables that might impact an outcome.[7] [8] [9] [10] [11] [12]
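To make that pitfall concrete, here is a minimal sketch with entirely invented numbers: a hidden “health” trait drives both who takes a hypothetical medicine and how they fare, so a raw observational comparison overstates the drug’s benefit while a randomized comparison does not.

```python
# A minimal sketch with invented numbers (the +2 drug effect, the hidden
# "health" trait, and the selection rule are all hypothetical). It shows
# how an uncontrolled variable inflates an observational comparison.
import random

random.seed(0)

def outcome(treated, health):
    # The "true" world: the drug adds 2, underlying health adds up to 5.
    return 2 * treated + 5 * health + random.gauss(0, 1)

people = [random.random() for _ in range(10_000)]  # hidden health, 0 to 1

# Observational data: healthier people are likelier to take the medicine.
observational = [(1 if h + random.gauss(0, 0.2) > 0.5 else 0, h) for h in people]

# Randomized data: a coin flip decides treatment, severing the link to health.
randomized = [(random.randint(0, 1), h) for h in people]

def estimated_effect(data):
    treated = [outcome(t, h) for t, h in data if t == 1]
    control = [outcome(t, h) for t, h in data if t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"observational estimate: {estimated_effect(observational):+.2f}")  # about +4
print(f"randomized estimate:    {estimated_effect(randomized):+.2f}")     # near +2
```

The observational estimate credits the drug with roughly double its true benefit, not because anyone cheated, but because healthier people opted into treatment. That hidden variable is exactly what control, whether in a lab or a randomized trial, is designed to neutralize.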
So, controlling those variables would make lab studies and models superior, right?
Wrong. The control that makes them appealing is also their downfall. Why? Because reality is often much more complicated than the simplified conditions and parameters used for these studies.[13] [14] [15] [16] [17]
Don’t get me wrong. Lab studies and models are great for discovering or measuring fundamental laws of nature like gravity,[18] [19] [20] but their results are much less reliable when applied to complex biological or social systems like human bodies, our planet’s ecosystem, or nations’ economies. That’s because they can’t accurately replicate all of the relevant variables and their interactions.[21] [22]
The issue of climate change is a prime example. An academic paper aptly summarizes the situation by explaining that:
- Model-based “analyses of climate policy create a perception of knowledge and precision that is illusory and can fool policymakers into thinking that the forecasts the models generate have some kind of scientific legitimacy.”
- “The argument is sometimes made that we have no choice—that without a model we will end up relying on biased opinions, guesswork, or even worse. … This might be a valid argument if we were honest and up-front about the limitations of the model. But often we are not.”[23]
A scholarly work makes the same point about flood models:
[I]f the wrong equations are programmed because of inadequate understanding of the system, then what the computer will produce, if believed by the analyst, will constitute the opposite of science.[24]
Yet politicians, journalists, and scholars often peddle such junk science to massive audiences who don’t understand how uncertain models and lab studies can be. This is the case in fields as diverse as medicine, climate science, and economics.[25] [26] [27] [28] [29]
Consider, for instance, this head-turning headline from the BBC:
Climate Change Made US and Mexico Heatwave 35 Times More Likely.[30]
When an educated reader challenged the BBC by pointing out that this headline was merely an “output from computer models” and not a fact, the BBC corrected its article.
No, not that kind of correction—it simply placed quotation marks around the words “35 times more likely.”[31] [32]
Unfortunately, this is too little, too late.
An honest correction would convey what 22 professors and researchers wrote in the scientific journal Nature:
- “Mathematical models produce highly uncertain numbers….”
- “Modellers must not be permitted to project more certainty than their models deserve….”
- “Rather than using models to inform their understanding, political rivals often brandish them to support predetermined agendas.”[33]
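To see how “highly uncertain numbers” arise, consider this minimal sketch of a toy projection model. The compounding rule, the candidate growth rates, and the 50-year horizon are all invented for illustration; no real climate or economic model works this simply.

```python
# A toy projection model with invented numbers. The compounding rule, the
# candidate growth rates, and the 50-year horizon are all hypothetical;
# no real climate or economic model is reproduced here.
def project(initial, annual_rate, years):
    # Compound a starting value by one fixed assumed rate per year.
    value = initial
    for _ in range(years):
        value *= 1 + annual_rate
    return value

for rate in (0.015, 0.020, 0.025):  # three equally defensible assumptions
    print(f"assumed rate {rate:.1%} -> 50-year projection: {project(100, rate, 50):.0f}")
```

Three equally plausible choices for a single assumed rate leave the 50-year projections more than 60 percent apart. Real models stack dozens of interacting assumptions, so reporting one output as fact, as the headline above did, hides an even wider spread.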
To research like a genius, you must be aware that scientific models don’t always equal real life — just like the other kinds of models.
So how can you, mild-mannered citizen, use models and lab studies to expand your understanding instead of warping it?
- Realize that they can be a great place to start — but a horrible place to end — despite the hype often accorded to them by ill-informed journalists and shifty scholars.[34]
- Learn how to spot models and lab studies. Watch for tell-tale signs like predictions about the future,[35] artificial conditions,[36] and words like “simulation” and “in vitro,” a Latin phrase that literally means “in glass,” like a test tube or petri dish.[37] [38]
- Use lab studies to measure basic laws of nature, like those that govern motion and electromagnetism.[39] [40] [41]
- Lab studies are also helpful for discovering “whether something can happen, rather than whether it typically does happen” in the real world.[42] A prime example is figuring out how viruses could transmit from one person to another.[43] [44]
- Use models and computer simulations for engineering and physics. These tend to be highly reliable in situations where the applicable laws of nature are clear-cut and precise.[45] [46]
- Be aware that despite tight controls, laboratory studies may not be reproducible, and a study that can’t be replicated is largely useless. You would be flabbergasted at the number of peer-reviewed studies that cannot be replicated, even those that involve life-or-death matters like cancer research.[47] (Watch Just Facts Academy’s lesson on Analyzing Studies for details, and see the sketch after this list for one hypothetical way this happens.)
- Tread very lightly when authors aren’t crystal clear about their assumptions and limitations. This is a troubling indication that they might not be competent and/or forthright.[48] [49] [50] [51]
- Don’t lean on lab studies or models if more reliable studies are available. That means don’t rely on lab studies if observational studies are available, and don’t rely on observational studies if randomized controlled trials (RCTs) are available.[52]
- Do use lab studies in combination with observational studies and/or RCTs. Data from the lab combined with data from the real world are “likely to provide deeper insights than either in isolation.”[53]
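As flagged in the reproducibility item above, here is a minimal sketch of one hypothetical way replication failures arise: chance findings in small samples combined with selective reporting. The numbers are invented, and this is an illustration, not the explanation offered by the cancer-research audit cited above.

```python
# One hypothetical mechanism behind failed replications, with invented
# numbers: chance findings in small samples plus selective reporting.
# The "drug" in this sketch truly does nothing.
import random

random.seed(1)

def small_study(n=10):
    # Treated and control outcomes are drawn from the SAME distribution.
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    difference = sum(treated) / n - sum(control) / n
    return difference > 0.75  # a gap big enough to look "promising"

runs = 1000
hits = sum(small_study() for _ in range(runs))
print(f"{hits} of {runs} studies of a useless drug looked promising")  # roughly 40-60
```

If only the “promising” results get published and pursued, most of them will fail to replicate, because there was never a real effect to find.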
Though we can’t control how researchers, policymakers, and the media wield lab studies and models, we can rein them in so they don’t take us for a ride.
In sum, keep it real and keep it locked to Just Facts Academy so you can research like a genius.
Just Facts is a research and educational institute dedicated to publishing facts about public policies and teaching research skills.
Endnotes
[1] Article: “Laboratory.” Encyclopædia Britannica. Accessed October 11, 2025 at https://www.britannica.com/science/laboratory-science
laboratory, Place where scientific research and development is conducted and analyses performed, in contrast with the field or factory. Most laboratories are characterized by controlled uniformity of conditions (constant temperature, humidity, cleanliness). Modern laboratories use a vast number of instruments and procedures to study, systematize, or quantify the objects of their attention. Procedures often include sampling, pretreatment and treatment, measurement, calculation, and presentation of results; each may be carried out by techniques ranging from having an unaided person use crude tools to running an automated analysis system with computer controls, data storage, and elaborate readouts.
[2] Entry: “laboratory study.” National Cancer Institute Dictionary of Cancer Terms. Accessed October 11, 2025 at https://www.cancer.gov/publications/dictionaries/cancer-terms/def/laboratory-study
A laboratory study may use special equipment and cells or animals to find out if a drug, procedure, or treatment is likely to be useful in humans. It may also be a part of a clinical trial, such as when blood or other samples are collected. These may be used to measure the effect of a drug, procedure, or treatment on the body.
[3] Book: Research Design and Methods: A Process Approach (8th edition). By Kenneth S. Bordens and Bruce B. Abbott. McGraw-Hill, 2011.
Page 118:
A common complaint about research using white rats or college students and conducted under the artificial conditions of the laboratory is that it may tell us little about how white rats and college sophomores (let alone animals or people in general) behave under the conditions imposed on them in the much richer arena of the real world.
[4] Textbook: Psychology (7th edition). By Peter Gray and David F. Bjorklund. Worth Publishers, 2014.
Page 38:
A laboratory study is any research study in which the subjects are brought to a specially designated area that has been set up to facilitate the researcher’s collection of data or control over environmental conditions. Laboratory studies can be conducted in any location where a researcher has control over what experiences the subject has at that time.
[5] Article: “Scientific Modeling.” By Kara Rogers. Encyclopædia Britannica. Accessed October 11, 2025 at https://www.britannica.com/science/scientific-modeling
scientific modeling, the generation of a physical, conceptual, or mathematical representation of a real phenomenon that is difficult to observe directly. Scientific models are used to explain and predict the behaviour of real objects or systems and are used in a variety of scientific disciplines, ranging from physics and chemistry to ecology and the Earth sciences. Although modeling is a central component of modern science, scientific models at best are approximations of the objects and systems that they represent—they are not exact replicas. Thus, scientists constantly are working to improve and refine models.
[6] Book: Research Design and Methods: A Process Approach (8th edition). By Kenneth S. Bordens and Bruce B. Abbott. McGraw-Hill, 2011.
Page 120:
The two research settings open for psychological research are the laboratory and the field. For this discussion, the term laboratory is used in a broad sense. A laboratory is any research setting that is artificial relative to the setting in which the behavior naturally occurs. This definition is not limited to a special room with special equipment for research. A laboratory can be a formal lab, but it also can be a classroom, a room in the library, or a room in the student union building. In contrast, the field is the setting in which the behavior under study naturally occurs. …
If you choose to conduct your research in a laboratory setting, you gain important control over the variables that could affect your results. The degree of control depends on the nature of the laboratory setting. For example, if you are interested in animal learning, you can structure the setting to eliminate virtually all extraneous variables that could affect the course of learning. This is what Ivan Pavlov did in his investigations of classical conditioning. Pavlov exposed dogs to his experimental conditions while the dogs stood in a sound-shielded room. The shielded room permitted Pavlov to investigate the impact of the experimental stimuli free from any interfering sounds. Like Pavlov, you can control important variables within the laboratory that could affect the outcome of your research.
Complete control over extraneous variables may not be possible in all laboratory settings. For example, if you were administering your study to a large group of students in a psychology class, you could not control all the variables as well as you might wish (students may arrive late, or disruptions may occur in the hallway). For the most part, the laboratory affords more control over the research situation than does the field.
[7] Book: Introductory Econometrics: Using Monte Carlo Simulation with Microsoft Excel. By Humberto Barreto and Frank M. Howland. Cambridge University Press, 2006.
Page 491:
Omitted variable bias is a crucial topic because almost every study in econometrics is an observational study as opposed to a controlled experiment. Very often, economists would like to be able to interpret the comparisons they make as if they were the outcomes of controlled experiments. In a properly conducted controlled experiment, the only systematic difference between groups results from the treatment under investigation; all other variation stems from chance. In an observational study, because the participants self-select into groups, it is always possible that varying average outcomes between groups result from systematic difference between groups other than the treatment. We can attempt to control for these systematic differences by explicitly incorporating variables in a regression. Unfortunately, if not all of those differences have been controlled for in the analysis, we are vulnerable to the devastating effects of omitted variable bias.
[8] Book: Multiple Regression: A Primer. By Paul D. Allison. Pine Forge Press, 1998.
Chapter 1: “What Is Multiple Regression?” https://us.sagepub.com/sites/default/files/upm-binaries/2725_allis01.pdf
Page 20:
Multiple regression shares an additional problem with all methods of statistical control, a problem that is the major focus of those who claim that multiple regression will never be a good substitute for the randomized experiment. To statistically control for a variable, you have to be able to measure that variable so that you can explicitly build it into the data analysis, either by putting it in the regression equation or by using it to form homogeneous subgroups. Unfortunately, there’s no way that we can measure all the variables that might conceivably affect the dependent variable. No matter how many variables we include in a regression equation, someone can always come along and say, “Yes, but you neglected to control for variable X and I feel certain that your results would have been different if you had done so.”
[9] Book: Theory-Based Data Analysis for the Social Sciences (2nd edition). By Carol S. Aneshensel. SAGE Publications, 2013.
Page 90:
The numerous variables that are omitted from any model are routinely assumed to be uncorrelated with the error term, a requirement for obtaining unbiased parameter estimates from regression models. However, the possibility that unmeasured variables are correlated with variables that are in the model obviously cannot be eliminated on empirical grounds. Thus, omitted variable bias cannot be ruled out entirely as a counterargument for the empirical association between the focal independent and dependent variables in observational studies.
[10] Book: Encyclopedia of Education Economics and Finance. Edited by Dominic J. Brewer and Lawrence O. Picus. Sage Publications, 2014.
Page 498:
Omitted variable bias (OVB) occurs when an important independent variable is excluded from an estimation model, such as a linear regression, and its exclusion causes the estimated effects of the included independent variables to be biased. Bias will occur when the excluded variable is correlated with one or more of the included variables. An example of this occurs when investigating the returns to education. This typically involves regressing the log of wages on the number of years of completed schooling as well as on other demographic characteristics such as an individual’s race and gender. One important variable determining wages, however, is a person’s ability. In many such regressions, a measure of ability is not included in the regression (or the measure included only imperfectly controls for ability). Since ability is also likely to be correlated with the amount of schooling an individual receives, the estimated return to years of completed schooling will likely suffer from OVB.
[11] Paper: “Observational Research Rigour Alone Does Not Justify Causal Inference.” By Keisuke Ejima and others. European Journal of Clinical Investigation, October 6, 2016. https://onlinelibrary.wiley.com/doi/10.1111/eci.12681
The greatest challenge to drawing causal inferences in observational studies is the existence of potential confounding variables, not all of which can be specified, measured, or modeled. Randomization is the only method that can eliminate all potential confounders of the effect of treatment assignment per se, doing so by making the distribution of prerandomization factors identical for all treatment assignments at the population level [2].
[12] Commentary: “Observational Studies, Bad Science, and the Media.” By Steven E. Nissen, MD. American College of Cardiology, May 25, 2012. https://www.acc.org/Latest-in-Cardiology/Articles/2012/05/25/12/44/Observational-Studies
The limitations of observational studies are myriad, but the most common flaws are easily understood and explained. Since patients are not randomly assigned to a treatment group, there always exist differences in characteristics between the study groups. The best observational studies attempt to adjust for these “confounders,” but often consider only the most common demographic variables, such as age and gender. Statistical adjustment can never fully compensate for all of the differences in patient characteristics, leading to a common problem known as “residual confounding.”
[13] Paper: “The ‘Real-World Approach’ and Its Problems: A Critique of the Term Ecological Validity.” By Gijs A Holleman and others. Frontiers in Psychology, April 30, 2020. https://pmc.ncbi.nlm.nih.gov/articles/PMC7204431/
As Anderson et al. (1999, p. 3) put it: “A common truism has been that … laboratory studies are good at telling whether or not some manipulation of an independent variable causes changes in the dependent variable, but many scholars assume that these results do not generalize to the “real-world.” The general concern is that, due to the ‘artificiality’ and ‘simplicity’ of the laboratory, some (if not many) lab-based experiments do not adequately represent the ‘naturality’ and ‘complexity’ of psychological phenomena in everyday life (see Figure 1).
[14] Report: “Face Coverings in the Community and COVID-19: A Rapid Review.” Public Health England, June 26, 2020. https://www.justfacts.com/document/face_coverings_community_covid-19_public_health_england_june_2020.pdf
Page 6:
Part of the limitations of modelling studies is that they must make assumptions in cases where the evidence or data are lacking. For example, models used different parameters to define ‘effectiveness’ of masks, which ranged from an 8% (24) reduction in risk to >95% (29) reduction in risk. The nature of modelling studies also means that simulations are run in controlled environments that may not accurately reflect the behaviours that we observe in real life. Unless controlled for, parameters can be fixed that are usually variable.
Pages 7–8:
[M]odelling and laboratories studies provide only theoretical evidence…. We, therefore, cannot recommend the use of modelling studies alone as evidence to inform or change policy measures.
[15] Textbook: Social Science Research: Principles, Methods, and Practices (2nd edition). By Professor Anol Bhattacherjee, 2012. https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=1002&context=oa_textbooks
Page 39:
Experiments can be conducted in an artificial or laboratory setting such as at a university (laboratory experiments) or in field settings such as in an organization where the phenomenon of interest is actually occurring (field experiments). Laboratory experiments allow the researcher to isolate the variables of interest and control for extraneous variables, which may not be possible in field experiments. Hence, inferences drawn from laboratory experiments tend to be stronger in internal validity, but those from field experiments tend to be stronger in external validity. Experimental data is analyzed using quantitative statistical techniques. The primary strength of the experimental design is its strong internal validity due to its ability to isolate, control, and intensively examine a small number of variables, while its primary weakness is limited external generalizability since real life is often more complex (i.e., involve more extraneous variables) than contrived lab settings.
[16] Book: Research Design and Methods: A Process Approach (8th edition). By Kenneth S. Bordens and Bruce B. Abbott. McGraw-Hill, 2011.
Page 194:
Several problems with in vitro and computer simulation methods preclude them from being substitutes for psychological research on living organisms. In drug studies, for example, in vitro methods may be adequate in the early stages of testing. However, the only way to determine the drug’s effects on behavior is to test the drug on living, behaving animals. At present, the behavioral or psychological effects of these chemical agents cannot be predicted by the reactions of tissue samples or the results of computer simulations. Behavioral systems are simply too complex for that. Would you feel confident taking a new tranquilizer that had only been tested on tissues in a petri dish?
The effects of environmental variables and manipulations of the brain also cannot be studied using in vitro methods. It is necessary to have a living organism. For example, if you were interested in determining how a particular part of the brain affects aggression, you could not study this problem with an in vitro method. You would need an intact organism (such as a rat) in order to systematically manipulate the brain and observe behavioral changes.
[17] Paper: “Transmission of SARS-CoV-2 by Children.” By Joanna Merckx, Jeremy A. Labrecque, and Jay S. Kaufman. Deutsches Ärzteblatt International (The German Medical Association’s Official International Bilingual Science Journal), July 2020. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7376445/
Results from laboratories, animal models and controlled experiments on SARS-CoV-2 and other respiratory pathogens contribute to knowledge about transmission, but issues of generalization remain central (e15). For example, how does transmission between hamsters translate to transmission dynamics and intervention effects in a 4th grade classroom (15)? Experiments offer the advantage of controlled environments but never approximate real-world conditions. These studies therefore provide important mechanistic insights, but do not directly inform policy decisions. For example, environmental studies provided crucial information about duration of SARS-CoV-2 on different types of surfaces (16). To what extent transmission and infection occur from such media remains unknown, with epidemiology suggesting it may be relatively infrequent (e16).
[18] Article: “The Curious Case of the Gravitational Constant.” By Adam Mann. Proceedings of the National Academy of Sciences, September 6, 2016. https://www.pnas.org/doi/10.1073/pnas.1612597113
Using a newly designed device called a torsion balance, [British scientist Henry] Cavendish measured the almost-infinitesimal gravitational attraction between two spheres of lead. His experiment allowed physicists to calculate a value for the gravitational constant—often called Big G to differentiate it from little g, the acceleration due to gravity—for the first time since Isaac Newton wrote down his law of gravity approximately a century earlier.
[19] Book: Six Easy Pieces: Essentials of Physics Explained By Its Most Brilliant Teacher. Addison-Wesley, 1995. This book comprises six chapters taken from the book Lectures on Physics by Richard Feynman. Addison-Wesley, 1963.
Where do the laws that are to be tested come from? Experiment, itself, helps to produce these laws, in the sense that it gives us hints. But also needed is imagination to create from these hints the great generalizations—to guess at the wonderful, simple, but very strange patterns beneath them all, and then to experiment to check again whether we have made the right guess. This imagining process is so difficult that there is a division of labor in physics: there are theoretical physicists who imagine, deduce, and guess at new laws, but do not experiment; and then there are experimental physicists who experiment, imagine, deduce, and guess.
[20] Paper: “What Do Laboratory Experiments Tell Us About the Real World?” By John A. List (University of Chicago and National Bureau of Economic Research) and Steven D. Levitt (University of Chicago and the American Bar Foundation), September 20, 2005. https://pricetheory.uchicago.edu/levitt/Papers/LevittList2005.pdf
Page 1:
Nearly 400 years ago, Galileo performed the first recorded laboratory experiment, timing balls as they rolled down an inclined plane to test his theory of acceleration (Settle, 1961). …
A critical maintained assumption underlying laboratory experiments is that the insights gained in the lab can be extrapolated to the world beyond, a principle we denote as generalizability.1
For physical laws and processes (e.g. gravity, photosynthesis, mitosis), the evidence to date supports the idea that what happens in the lab is equally valid in the broader world. Shapley (1964, p. 43), for instance, noted that “as far as we can tell, the same physical laws prevail everywhere.” Likewise, Newton (1687; p. 398 (1966)) scribed that “the qualities….which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.”2
[21] Paper: “What Do Laboratory Experiments Tell Us About the Real World?” By John A. List (University of Chicago and National Bureau of Economic Research) and Steven D. Levitt (University of Chicago and the American Bar Foundation), September 20, 2005. https://pricetheory.uchicago.edu/levitt/Papers/LevittList2005.pdf
Pages 2–3:
The allure of the laboratory experimental method in economics is that, in principle, it provides ceteris paribus observations of motivated individual economic agents, which are otherwise exceptionally difficult to obtain using conventional econometric techniques. Lab experiments provide the investigator with a means to directly influence the set of prices, budget sets, and actions available to actors, and thus measure the impact of these factors on behavior within the context of the laboratory.
The basic strategy underlying laboratory experiments in the physical sciences and economics is similar, but the fact that humans are the object of study in the latter raises fundamental questions about the ability to extrapolate experimental findings beyond the lab that do not arise in the physical sciences.4
While few would question whether Uranium 239 would emit beta particles and turn into Neptunium in the presence or absence of scientists, human behavior may not be governed by immutable laws of nature. In particular, we emphasize four characteristics of laboratory economic experiments that raise potential problems in generalizing results from the lab to the outside world:5
[22] Paper: “Limitations of Controlled Experimental Systems as Models for Natural Systems: A Conceptual Assessment of Experimental Practices in Biogeochemistry and Soil Science.” By Daniel Haag and Gunda Matschonat. Science of the Total Environment, September 28, 2001. https://www.sciencedirect.com/science/article/abs/pii/S0048969700008780
Experimental systems in which phenomena are studied under controlled conditions allow scientists to infer causal relationships from observable effects. When investigating ecosystems, however, scientists face complex systems. The conventional approach is to divide the system into conceptual units and to prepare experimental systems accordingly. Experimental systems are used as models for ecosystems: initially, scientists assume an analogy between the experimental system and ecosystem, then encode the experimental system into a formal system by measuring variables, and decode statements from the formal system to the ecosystem. … Laboratory systems are idealized systems which contain a limited number of a priori defined variables, and which are shielded from environmental influences. In contrast, ecosystems are materially and conceptually open, non-stationary, historical systems, in which system-level properties can emerge, and in which variables are produced internally. We conclude that when conducting experiments, causal factors can be identified, but that causal knowledge derived from insufficiently closed systems is invalid. In ecosystems, innumerous factors interact, which may enhance, reduce or neutralize the effect of an experimentally determined factor.
[23] Paper: “The Use and Misuse of Models for Climate Policy.” By Robert S. Pindyck. Review of Environmental Economics and Policy, March 11, 2017. https://www.journals.uchicago.edu/doi/10.1093/reep/rew012
In a recent article (Pindyck 2013a), I argued that integrated assessment models (IAMs) “have crucial flaws that make them close to useless as tools for policy analysis” (page 860). In fact, I would argue that the problem goes beyond their “crucial flaws”: IAM-based analyses of climate policy create a perception of knowledge and precision that is illusory and can fool policymakers into thinking that the forecasts the models generate have some kind of scientific legitimacy. …
The argument is sometimes made that we have no choice—that without a model we will end up relying on biased opinions, guesswork, or even worse. Thus we must develop the best models possible and then use them to evaluate alternative policies. In other words, the argument is that working with even a highly imperfect model is better than having no model at all. This might be a valid argument if we were honest and up-front about the limitations of the model. But often we are not. …
Models sometimes convey the impression that we know much more than we really do. They create a veneer of scientific legitimacy that can be used to bolster the argument for a particular policy. This is particularly true for IAMs, which tend to be large and complicated and are not always well documented. IAMs are typically made up of many equations; these equations are hard to evaluate individually (especially given that they are often ad hoc and without any clear theoretical or empirical foundation) and even harder to understand in terms of their interactions as a complete system. In effect, the model is just a black box: we put in some assumptions about GHG emissions, climate sensitivity, discount rates, etc., and we get out some results about temperature change, reductions in GDP, etc. And although it is not clear exactly what is going on, since the black box is “scientific,” we are supposed to take those results seriously and use them for policy analysis.
[24] Textbook: Flood Geomorphology. By Victor R. Baker and others. Wiley, April 1998.
Page ix:
[T]rue science is concerned with understanding nature no matter what the methodology. In our view, if the wrong equations are programmed because of inadequate understanding of the system, then what the computer will produce, if believed by the analyst, will constitute the opposite of science.
[25] Report: “Face Coverings in the Community and COVID-19: A Rapid Review.” Public Health England, June 26, 2020. https://www.justfacts.com/document/face_coverings_community_covid-19_public_health_england_june_2020.pdf
Page 6:
Part of the limitations of modelling studies is that they must make assumptions in cases where the evidence or data are lacking. For example, models used different parameters to define ‘effectiveness’ of masks, which ranged from an 8% (24) reduction in risk to >95% (29) reduction in risk. The nature of modelling studies also means that simulations are run in controlled environments that may not accurately reflect the behaviours that we observe in real life. Unless controlled for, parameters can be fixed that are usually variable.
Pages 7–8:
Laboratories studies do not take into account real-life settings and only provide mechanistic evidence which should be considered with caution. …
[M]odelling and laboratories studies provide only theoretical evidence…. We, therefore, cannot recommend the use of modelling studies alone as evidence to inform or change policy measures.
[26] Paper: “Risk of Bias in Model-Based Economic Evaluations: The ECOBIAS Checklist.” By Charles Christian Adarkwah and others. Expert Review of Pharmacoeconomics & Outcomes Research, November 20, 2015. https://www.researchgate.net/profile/Charles-Adarkwah-2/publication/284274465_Risk_of_bias_in_model-based_economic_evaluations_the_ECOBIAS_checklist/links/56544ebb08aeafc2aabbb745/Risk-of-bias-in-model-based-economic-evaluations-the-ECOBIAS-checklist.pdf
Page 1:
Economic evaluations are becoming increasingly important in providing policymakers with information for reimbursement decisions. However, in many cases, there is a significant difference between theoretical study results and real-life observations. This can be due to confounding factors or many other variables, which could be significantly affected by bias. …
There are basically two analytical frameworks used to conduct economic evaluations: model-based and trial-based. In a model-based economic evaluation, data from a wide range of sources [e.g., randomized-controlled trials (RCTs)], meta-analyses, observational studies) are combined using a mathematical model to represent the complexity of a healthcare process.
Page 6:
This study identified several additional biases related to model-based economic evaluation and showed that the impact of these biases could be massive, changing the outcomes from being highly cost-effective to not being cost-effective at all.
[27] Paper: “Economic Evaluations in Fracture Research: An Introduction with Examples of Foot Fractures.” By Noortje Anna Clasina van den Boom and others. Injury, March 2022. https://www.sciencedirect.com/science/article/pii/S0020138322000146
The lack of reliable data in the field of economic evaluation fractures could be explained by the lack of reliable literature to base the models on. Since model based studies are the most common design in this field of research, this problem is significant.
[28] Paper: “Transmission of SARS-CoV-2 by Children.” By Joanna Merckx, Jeremy A. Labrecque, and Jay S. Kaufman. Deutsches Ärzteblatt International (The German Medical Association’s Official International Bilingual Science Journal), July 2020. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7376445/
Results from laboratories, animal models and controlled experiments on SARS-CoV-2 and other respiratory pathogens contribute to knowledge about transmission, but issues of generalization remain central (e15). For example, how does transmission between hamsters translate to transmission dynamics and intervention effects in a 4th grade classroom (15)? Experiments offer the advantage of controlled environments but never approximate real-world conditions. These studies therefore provide important mechanistic insights, but do not directly inform policy decisions. For example, environmental studies provided crucial information about duration of SARS-CoV-2 on different types of surfaces (16). To what extent transmission and infection occur from such media remains unknown, with epidemiology suggesting it may be relatively infrequent (e16).
[29] Report: “Non-Pharmaceutical Measures for Mitigating the Risk and Impact of Epidemic and Pandemic Influenza.” World Health Organization, 2019. Annex: “Report of Systematic Literature Reviews.” https://www.justfacts.com/document/mitigating_risk_impact_influenza_who_2019_annex.pdf
Page 2:
The quality of evidence was ranked for each study as high, moderate, low or very low, based on its risk of bias, consistency, directness, precision of the results and publication bias. Hence, we set the highest priority on randomized controlled trials (RCTs), then on observational studies and finally on simulation studies. If RCTs were reported, as a general principle we did not review observational studies or simulation studies, and if observational studies were reported, as a general principle we did not review simulation studies.
[30] Article: “Climate Change Made US and Mexico Heatwave 35 Times More Likely.” By Greg Brosnan. BBC, June 20, 2024. https://www.bbc.com/news/articles/czvvqdg8zxno
[31] Article: “BBC & Weather Attribution Models.” By Paul Homewood. Not a Lot of People Know That, April 12, 2025. https://wattsupwiththat.com/2025/04/13/bbc-weather-attribution-models/
You may recall this BBC report from last summer:
Climate Change Made US and Mexico Heatwave 35 Times More Likely
Human-induced climate change made recent extreme heat in the US south-west, Mexico and Central America around 35 times more likely, scientists say.
The World Weather Attribution (WWA) group studied excess heat between May and early June, when the US heatwave was concentrated in south-west states including California, Nevada and Arizona.
I complained at the time that the WWA claims were presented as “factual,” rather than output from computer models. Furthermore the actual historical data showed the WWA claims to be false. (See my post here).
I have just received this response from the BBC:
As I understand your email’s complaint it is that the headline presents the findings of the scientists’ study as a fact.
Having reviewed the article, we believe the headline should have included quotation marks around the scientists claim.
The headline has been changed to read: Climate change made US and Mexico heatwave ‘35 times more likely’
We have also added a note at the bottom of the article to acknowledge the change.
[32] Article: “Climate Change Made US and Mexico Heatwave 35 Times More Likely.” By Greg Brosnan. BBC, June 20, 2024. https://www.bbc.com/news/articles/czvvqdg8zxno
Correction April 8: The headline of this article was edited to include quotation marks to reflect that it is a claim from scientists that climate change made the US and Mexico heatwave 35 times more likely.
[33] Commentary: “Five Ways to Ensure That Models Serve Society: A Manifesto.” By Andrea Saltelli and others. Nature, June 24, 2020. https://www.nature.com/articles/d41586-020-01812-9
Now, computer modelling is in the limelight, with politicians presenting their policies as dictated by ‘science’2. Yet there is no substantial aspect of this pandemic for which any researcher can currently provide precise, reliable numbers. Known unknowns include the prevalence and fatality and reproduction rates of the virus in populations. There are few estimates of the number of asymptomatic infections, and they are highly variable. We know even less about the seasonality of infections and how immunity works, not to mention the impact of social-distancing interventions in diverse, complex societies.
Mathematical models produce highly uncertain numbers that predict future infections, hospitalizations and deaths under various scenarios. Rather than using models to inform their understanding, political rivals often brandish them to support predetermined agendas. To make sure predictions do not become adjuncts to a political cause, modellers, decision makers and citizens need to establish new social norms. Modellers must not be permitted to project more certainty than their models deserve; and politicians must not be allowed to offload accountability to models of their choosing2,3.
[34] Report: “Face Coverings in the Community and COVID-19: A Rapid Review.” Public Health England, June 26, 2020. https://www.justfacts.com/document/face_coverings_community_covid-19_public_health_england_june_2020.pdf
Page 6:
Part of the limitations of modelling studies is that they must make assumptions in cases where the evidence or data are lacking. For example, models used different parameters to define ‘effectiveness’ of masks, which ranged from an 8% (24) reduction in risk to >95% (29) reduction in risk. The nature of modelling studies also means that simulations are run in controlled environments that may not accurately reflect the behaviours that we observe in real life. Unless controlled for, parameters can be fixed that are usually variable.
Pages 7–8:
[M]odelling and laboratories studies provide only theoretical evidence…. We, therefore, cannot recommend the use of modelling studies alone as evidence to inform or change policy measures.
[35] Commentary: “Five Ways to Ensure That Models Serve Society: A Manifesto.” By Andrea Saltelli and others. Nature, June 24, 2020. https://www.nature.com/articles/d41586-020-01812-9
Now, computer modelling is in the limelight, with politicians presenting their policies as dictated by ‘science’2. Yet there is no substantial aspect of this pandemic for which any researcher can currently provide precise, reliable numbers. Known unknowns include the prevalence and fatality and reproduction rates of the virus in populations. There are few estimates of the number of asymptomatic infections, and they are highly variable. We know even less about the seasonality of infections and how immunity works, not to mention the impact of social-distancing interventions in diverse, complex societies.
Mathematical models produce highly uncertain numbers that predict future infections, hospitalizations and deaths under various scenarios.
[36] Paper: “Transmission of SARS-CoV-2 by Children.” By Joanna Merckx, Jeremy A. Labrecque, and Jay S. Kaufman. Deutsches Ärzteblatt International (The German Medical Association’s Official International Bilingual Science Journal), July 2020. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7376445/
Results from laboratories, animal models and controlled experiments on SARS-CoV-2 and other respiratory pathogens contribute to knowledge about transmission, but issues of generalization remain central (e15). For example, how does transmission between hamsters translate to transmission dynamics and intervention effects in a 4th grade classroom (15)? Experiments offer the advantage of controlled environments but never approximate real-world conditions. These studies therefore provide important mechanistic insights, but do not directly inform policy decisions. For example, environmental studies provided crucial information about duration of SARS-CoV-2 on different types of surfaces (16). To what extent transmission and infection occur from such media remains unknown, with epidemiology suggesting it may be relatively infrequent (e16).
[37] Report: “Face Coverings in the Community and COVID-19: A Rapid Review.” Public Health England, June 26, 2020. https://www.justfacts.com/document/face_coverings_community_covid-19_public_health_england_june_2020.pdf
Page 6:
Part of the limitations of modelling studies is that they must make assumptions in cases where the evidence or data are lacking. For example, models used different parameters to define ‘effectiveness’ of masks, which ranged from an 8% (24) reduction in risk to >95% (29) reduction in risk. The nature of modelling studies also means that simulations are run in controlled environments that may not accurately reflect the behaviours that we observe in real life. Unless controlled for, parameters can be fixed that are usually variable.
Pages 7–8:
Laboratories studies do not take into account real-life settings and only provide mechanistic evidence which should be considered with caution. …
[M]odelling and laboratories studies provide only theoretical evidence…. We, therefore, cannot recommend the use of modelling studies alone as evidence to inform or change policy measures.
[38] Book: Research Design and Methods: A Process Approach (8th edition). By Kenneth S. Bordens and Bruce B. Abbott. McGraw-Hill, 2011.
Page 194:
Animal rights activists point out that viable alternatives to using living animals in research (known as in vivo methods) exist, two of which are in vitro methods and computer simulations. These methods are more applicable to biological and medical research than to behavioral research. In vitro (which means “in glass”) methods substitute isolated living tissue cultures for whole, living animals. Experiments using this method have been performed to test the toxicity and mutagenicity of various chemicals and drugs on living tissue. Computer simulations also have been suggested as an alternative to using living organisms in research. In a computer simulation study, a mathematical model of the process to be simulated is programmed into the computer. Parameters and data concerning variables fed into a computer then indicate what patterns of behavior would develop according to the model.
[39] Article: “The Curious Case of the Gravitational Constant.” By Adam Mann. Proceedings of the National Academy of Sciences, September 6, 2016. https://www.pnas.org/doi/10.1073/pnas.1612597113
Using a newly designed device called a torsion balance, [British scientist Henry] Cavendish measured the almost-infinitesimal gravitational attraction between two spheres of lead. His experiment allowed physicists to calculate a value for the gravitational constant—often called Big G to differentiate it from little g, the acceleration due to gravity—for the first time since Isaac Newton wrote down his law of gravity approximately a century earlier.
[40] Book: Six Easy Pieces: Essentials of Physics Explained By Its Most Brilliant Teacher. Addison-Wesley, 1995. This book comprises six chapters taken from the book Lectures on Physics by Richard Feynman. Addison-Wesley, 1963.
Where do the laws that are to be tested come from? Experiment, itself, helps to produce these laws, in the sense that it gives us hints. But also needed is imagination to create from these hints the great generalizations—to guess at the wonderful, simple, but very strange patterns beneath them all, and then to experiment to check again whether we have made the right guess. This imagining process is so difficult that there is a division of labor in physics: there are theoretical physicists who imagine, deduce, and guess at new laws, but do not experiment; and then there are experimental physicists who experiment, imagine, deduce, and guess.
[41] Paper: “What Do Laboratory Experiments Tell Us About the Real World?” By John A. List (University of Chicago and National Bureau of Economic Research) and Steven D. Levitt (University of Chicago and the American Bar Foundation), September 20, 2005. https://pricetheory.uchicago.edu/levitt/Papers/LevittList2005.pdf
Page 1:
Nearly 400 years ago, Galileo performed the first recorded laboratory experiment, timing balls as they rolled down an inclined plane to test his theory of acceleration (Settle, 1961). …
A critical maintained assumption underlying laboratory experiments is that the insights gained in the lab can be extrapolated to the world beyond, a principle we denote as generalizability.1
For physical laws and processes (e.g. gravity, photosynthesis, mitosis), the evidence to date supports the idea that what happens in the lab is equally valid in the broader world. Shapley (1964, p. 43), for instance, noted that “as far as we can tell, the same physical laws prevail everywhere.” Likewise, Newton (1687; p. 398 (1966)) scribed that “the qualities….which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.”2
[42] Book: Research Design and Methods: A Process Approach (8th edition). By Kenneth S. Bordens and Bruce B. Abbott. McGraw-Hill, 2011.
Page 118:
A common complaint about research using white rats or college students and conducted under the artificial conditions of the laboratory is that it may tell us little about how white rats and college sophomores (let alone animals or people in general) behave under the conditions imposed on them in the much richer arena of the real world.
The idea seems to be that all studies should be conducted in such a way that the findings can be generalized immediately to real-world situations and to larger populations. However, as Mook (1983) notes, it is a fallacy to assume “that the purpose of collecting data in the laboratory is to predict real-life behavior in the real world” (p. 381). Mook points out that much of the research conducted in the laboratory is designed to determine one of the following:
1. Whether something can happen, rather than whether it typically does happen
2. Whether something we specify ought to happen (according to some hypothesis) under specific conditions in the lab does happen there under those conditions
3. What happens under conditions not encountered in the real world
In each of these cases, the objective is to gain insight into the underlying mechanisms of behavior rather than to discover relationships that apply under normal conditions in the real world. It is this understanding that generalizes to everyday life, not the specific findings themselves.
[43] Report: “Face Coverings in the Community and COVID-19: A Rapid Review.” Public Health England, June 26, 2020. https://www.justfacts.com/document/face_coverings_community_covid-19_public_health_england_june_2020.pdf
Page 7: “Laboratories studies do not take into account real-life settings and only provide mechanistic evidence which should be considered with caution.”
[44] Paper: “Transmission of SARS-CoV-2 by Children.” By Joanna Merckx, Jeremy A. Labrecque, and Jay S. Kaufman. Deutsches Ärzteblatt International (The German Medical Association’s Official International Bilingual Science Journal), July 2020. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7376445/
Results from laboratories, animal models and controlled experiments on SARS-CoV-2 and other respiratory pathogens contribute to knowledge about transmission, but issues of generalization remain central (e15). For example, how does transmission between hamsters translate to transmission dynamics and intervention effects in a 4th grade classroom (15)? Experiments offer the advantage of controlled environments but never approximate real-world conditions. These studies therefore provide important mechanistic insights, but do not directly inform policy decisions. For example, environmental studies provided crucial information about duration of SARS-CoV-2 on different types of surfaces (16). To what extent transmission and infection occur from such media remains unknown, with epidemiology suggesting it may be relatively infrequent (e16).
[45] Paper: “Verification, Validation, and Predictive Capability in Computational Engineering and Physics.” By William L Oberkampf, Timothy G Trucano, and Charles Hirsch. Prepared for Sandia National Laboratories, February 2003. https://www.osti.gov/servlets/purl/918370
Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.
This paper presents our viewpoint of the state of the art in V&V in computational physics. (In this paper we refer to all fields of computational engineering and physics, e.g., computational fluid dynamics, computational solid mechanics, structural dynamics, shock wave physics, computational chemistry, etc., as computational physics.) …
During the last three or four decades, computer simulations of physical processes have been increasingly used in scientific research and in the analysis and design of engineered systems. The systems of interest have been existing or proposed systems that operate, for example, at design conditions, off-design conditions, and failure-mode conditions in accident scenarios. The systems of interest have also been natural systems, for example, computer simulations for environmental impact, as in the analysis of surface-water quality and the risk assessment of underground nuclear-waste repositories. These kinds of predictions are beneficial in the development of public policy, the preparation of safety procedures, and the determination of legal liability. Thus, because of the impact that modeling and simulation predictions can have, the credibility of the computational results is of great concern to engineering designers and managers, public officials, and those who are affected by the decisions that are based on these predictions. …
Because of the infeasibility and impracticality of conducting true validation experiments on most complex or large scale systems, the recommended method is to use a building-block approach. … Each comparison of computational results with experimental data allows an inference of validation concerning tiers both above and below the tier where the comparison is made. However, the quality of the inference depends greatly on the complexity of the tiers above and below the comparison tier. For simple physics, the inference may be very strong, e.g., laminar, single phase, Newtonian, nonreacting flow, and rigid-body structural dynamics. However, for complex physics, the inference is commonly very weak, e.g., turbulent reacting flow and fracture dynamics. This directly reflects the quality of our scientific knowledge about the experiments and calculations that are being compared for more complex tiers.
[46] Paper: “Quantum Mechanical Modeling: A Tool for the Understanding of Enzyme Reactions.” By Gábor Náray-Szabó, Julianna Oláh, and Balázs Krámos. Biomolecules, September 2013. https://pmc.ncbi.nlm.nih.gov/articles/PMC4030948/
Most enzyme reactions involve formation and cleavage of covalent bonds, while electrostatic effects, as well as dynamics of the active site and surrounding protein regions, may also be crucial. Accordingly, special computational methods are needed to provide an adequate description, which combine quantum mechanics for the reactive region with molecular mechanics and molecular dynamics describing the environment and dynamic effects, respectively. … As a result of the spectacular progress in the last two decades, most enzyme reactions can be quite precisely treated by various computational methods.
[47] Paper: “Drug Development: Raise Standards for Preclinical Cancer Research.” By C. Glenn Begley and Lee M. Ellis. Nature, March 28, 2012. https://www.nature.com/articles/483531a
The scientific community assumes that the claims in a preclinical study can be taken at face value — that although there might be some errors in detail, the main message of the paper can be relied on and the data will, for the most part, stand the test of time. Unfortunately, this is not always the case. …
Over the past decade, before pursuing a particular line of research, scientists (including C.G.B.) in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed ‘landmark’ studies…. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. …
To address these concerns, when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors’ direction, occasionally even in the laboratory of the original investigator. ….
Some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis. More troubling, some of the research has triggered a series of clinical studies — suggesting that many patients had subjected themselves to a trial of a regimen or agent that probably wouldn’t work.
[48] Book: The Handbook of Social Research Ethics. Edited by Donna M. Mertens and Pauline E. Ginsberg. Sage, 2009. Chapter 24: “Use and Misuse of Quantitative Methods: Data Collection, Calculation, and Presentation.” By Bruce L. Brown and Dawson Hedges. Pages 373–386.
Kuhn’s (1962) well-known sociological analysis of the actual human process of science reminds us that science, is, after all, a human endeavor. We are not angels, and there is much of politics and even intrigue, the managing of appearances, and the protection of turf in scholarship. …
Basic and often unexamined assumptions affect directly the phenomenon studied, the interpretation of results … and, ultimately, the presentation of the findings. When an investigator reports data from the perspective of materialistic, dualistic, or any other set assumptions, the presented findings will contain echoes of the assumptions, often significantly constraining the presented data and blinding the reader to alternative interpretations…. In this regard, “ideologies and dogmas are the jailers” … and, as such, a fully ethical presentation of scientific data requires an understanding on the part of the readers and the author alike of the assumptions that gird the research and findings. …
Because data from the social sciences are a public affair having the potential to influence theory, public policy, and clinical treatment, an awareness of assumptions becomes an ethical issue in that philosophical sloppiness may lead to a faulty interpretation of research findings. …
Science is only as good as the collection, presentation, and interpretation of its data. The philosopher of science Karl Popper argues that scientific theories must be testable and precise enough to be capable of falsification (Popper, 1959). To be so, science, including social science, must be essentially a public endeavor, in which all findings should be published and exposed to scrutiny by the entire scientific community. Consistent with this view, any errors, scientific or otherwise, in the collection, analysis, and presentation of data potentially hinder the self-correcting nature of science, reducing science to a biased game of ideological and corporate hide-and-seek.
[49] Report: “Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget.” By the Committee to Review the OMB Risk Assessment Bulletin, Board on Environmental Studies and Toxicology, Division on Earth and Life Studies, National Research Council. National Academies Press, 2007.
Page 137:
5. Standards Related to Critical Assumptions
Risk assessments should explain the basis of each critical assumption and those assumptions which affect the key findings of the risk assessment. If the assumption is supported by, or conflicts with, empirical data, that information should be discussed. This should include discussion of the range of scientific opinions regarding the likelihood of plausible alternate assumptions and the direction and magnitude of any resulting changes that might arise in the assessment due to changes in key assumptions. Whenever possible, a quantitative evaluation of reasonable alternative assumptions should be provided. If an assessment combines multiple assumptions, the basis and rationale for combining the assumptions should be clearly explained.
[50] Webpage: “Standards for High-Quality Research and Analysis.” Rand Corporation. Accessed May 10, 2007 at <rand.org>
Assumptions should be explicit and justified.
Assumptions can mask uncertainties that affect the validity of findings and the expedience of recommendations. Major assumptions underlying a study must be explicitly identified and defended. A high-quality study usually enhances the robustness of its results by varying assumptions in order to analyze alternative scenarios.
[51] Book: Business Research Methodology. By T. N. Srivastava and Shailaja Rego. McGraw Hill Education India, 2010.
Criteria, Characteristics and Challenges for Good/Ideal Research …
(vi) All the assumptions made in the research design and analysis should be stated. These have impact on defining scope and limitations of the study.
[52] Report: “Non-Pharmaceutical Measures for Mitigating the Risk and Impact of Epidemic and Pandemic Influenza.” World Health Organization, 2019. Annex: “Report of Systematic Literature Reviews.” https://www.justfacts.com/document/mitigating_risk_impact_influenza_who_2019_annex.pdf
Page 2:
The quality of evidence was ranked for each study as high, moderate, low or very low, based on its risk of bias, consistency, directness, precision of the results and publication bias. Hence, we set the highest priority on randomized controlled trials (RCTs), then on observational studies and finally on simulation studies. If RCTs were reported, as a general principle we did not review observational studies or simulation studies, and if observational studies were reported, as a general principle we did not review simulation studies.
[53] Paper: “What Do Laboratory Experiments Tell Us About the Real World?” By John A. List (University of Chicago and National Bureau of Economic Research) and Steven D. Levitt (University of Chicago and the American Bar Foundation), September 20, 2005. https://pricetheory.uchicago.edu/levitt/Papers/LevittList2005.pdf
Based on theory and empirical evidence, we argue that lab experiments are a useful tool for generating qualitative insights, but are not well-suited for obtaining deep structural parameter estimates. We conclude that the sharp dichotomy sometimes drawn between lab experiments and data generated in natural settings is a false one. Each approach has strengths and weaknesses, and a combination of the two is likely to provide deeper insights than either in isolation.
I’m not quite certain about this transition from medical study to climate study.
Medical studies generally compare a control group against one or more test groups that receive various amounts of whatever is being studied.
Climate studies don’t do that, as we don’t have a decent control planet to compare against, and even if we did, many tests would take decades to execute. So we have to make do with small-scale or non-controlled studies and mathematical models.
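To make the contrast concrete, here is a minimal sketch of the control-versus-treatment comparison an RCT performs and that climate research cannot. All numbers are invented for illustration:

```python
# Minimal sketch of an RCT-style comparison (all data simulated).
import random
import statistics

random.seed(42)

# Control group gets no effect; treatment group gets a +2.0 shift.
control = [random.gauss(10.0, 3.0) for _ in range(100)]
treatment = [random.gauss(12.0, 3.0) for _ in range(100)]

effect = statistics.mean(treatment) - statistics.mean(control)
print(f"Observed treatment effect: {effect:.2f}")
# For climate, there is no second "control Earth" to fill the first list,
# which is the commenter's point.
```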
BTW, I’m a fan of mouse model studies and a longtime donor to the Jackson Laboratory, https://www.jax.org/
The point is that a climate RCT is impossible, which leaves observation and “lab” modelling as the only ways to do climate research. But the limitations of the models are rarely, if ever, acknowledged and are ignored by policymakers, ideologues, and grant seekers, all of whom pretend that computer output is data and a reliable predictor of the future. The unwarranted faith in unreliable predictions based on fallible assumptions results in squandered resources, damaged economies, and even needless environmental damage, none of which is remotely scientific. The accompanying fearmongering is wreaking social damage as well.
This leaves observation as the most reliable kind of climate study. It made sense for climatology to be regarded as a branch of physical geography, as it was before climatism arose.
GCMs (Global Climate Models) are nothing more than a jobs program for academic scientists, statisticians, and programmers. Starting off to create a “global” model is a joke. It will take centuries to do an adequate job of programming, let alone devising proper input measurements.
Why not start small? Say, choose a square mile, put CRN-type stations every 300 feet around the perimeter and stations inside every 10,000 square feet. Measure temperature, dry bulb, wet bulb, insolation, soil temps, cloud coverage, UV, etc., every six seconds, so that one can begin to determine a reasonable functional relationship between the variables on a “control area”. Do this at various sites around the globe to START obtaining some reasonable data to begin with. With all the money being devoted to supercomputers, salaries, conferences, solar panels, wind turbines, and distribution systems, surely some could be directed to actually obtaining scientific data that is fit for the purpose.
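For scale, here is the back-of-envelope arithmetic that proposal implies, taking the “300 feet” and “10,000 square feet” figures literally (my own rough sizing, not the commenter’s):

```python
# Rough sizing of the proposed one-square-mile "control area".
MILE_FT = 5280
perimeter_stations = (4 * MILE_FT) // 300     # 21,120 ft / 300 ft = 70
interior_stations = (MILE_FT ** 2) // 10_000  # 27,878,400 sq ft / 10,000 = 2,787
samples_per_day = 24 * 60 * 60 // 6           # one reading per 6 s = 14,400/day

stations = perimeter_stations + interior_stations
print(f"Stations per site: {stations}")
print(f"Readings/day for 8 channels: {stations * samples_per_day * 8:,}")
```

That is roughly 2,850 stations and about a third of a billion readings per day, per site, before any quality control.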
That would provide scientifically valid data. But what could it tell you about climate, let alone change in climate? The climate change “problem” is an entirely manufactured one, created by some who saw a way to generate fear that weakens opposition to their growing power. Embarrassingly, most of us fell for it.
It needs to be 3-D! Surface studies will tell you little about convection except that it’s happening (when the morning temperature rise slows down or airspeed picks up, that generally means the temperature inversion has ended). It will give you some idea about cloud base, but there’s more to clouds than that.
Ric, studying a deterministic chaotic system may confirm your belief that it is a deterministic chaotic system. In other words, its future states are unpredictable by definition.
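A quick illustration of that sensitivity, using the textbook logistic map rather than any climate model (illustrative only): two starting values differing by one part in a billion soon diverge completely.

```python
# Sensitive dependence on initial conditions: logistic map x -> r*x*(1-x).
r = 3.99                               # parameter in the chaotic regime
x1, x2 = 0.400000000, 0.400000001      # initial states differ by 1e-9

for step in range(1, 51):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
    if step % 10 == 0:
        print(f"step {step:2d}: x1={x1:.6f}  x2={x2:.6f}  gap={abs(x1 - x2):.6f}")
# The rule is fully deterministic, yet the trajectories soon bear
# no resemblance to each other; long-range prediction is hopeless.
```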
Experiment will show you that adding CO2 to air does not make thermometers hotter. Climate is the statistics of weather observations.
The conclusion to be drawn is that “climate scientists” are either fools or frauds. In either case, their opinions have no value to anyone who is not ignorant or gullible.
My opinion is of similar value.
Let’s choose a one-square mile area in the middle of DFW.
The article talks about 3 different types of studies.
It then criticized the overuse of (b) lab studies. You are talking about (c), a randomized control trial.
Not the same thing.
Models have no control group, so what are you talking about? The computer models may have a randomization factor, and the modellers may call running the program an experiment, but it’s math, not an RCT.
Excellent article. It lays out very well what a scientific study should have in terms of uncertainty.
I first became a sceptic when I saw temperature and CO2 being conflated via a time series of temperature only. The conclusion goes like this: if CO2 is rising and temperature is also rising, then we can infer that they are obviously directly connected. Now substitute postal rates for CO2. Does that make sense?
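The postal-rate substitution is easy to demonstrate: any two series that both trend upward will correlate strongly, causal link or not. A minimal sketch with invented numbers (requires Python 3.10+ for statistics.correlation):

```python
# Two unrelated upward-trending series still correlate strongly.
import random
import statistics

random.seed(1)
years = range(1960, 2020)
co2_like = [315 + 0.5 * (y - 1960) + random.gauss(0, 1) for y in years]
postage_like = [4 + 0.8 * (y - 1960) + random.gauss(0, 2) for y in years]

r = statistics.correlation(co2_like, postage_like)
print(f"Pearson r between the two trending series: {r:.3f}")
# The high r reflects the shared trend, not any causal connection.
```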
When doing budgets in my job, I didn’t want to hear that revenues were projected to increase or that expenses were going to go down. I wanted to know exactly which revenues, and why. I wanted to know exactly which expenses, and why.
I don’t believe in inferences of physical values with a resolution that was not measured. That is one reason why replication is becoming impossible. When one infers unknowable results from measurements that do not have sufficient resolution, that is just guessing. Do we just “guess” at the value of gravity? Or, do we continue to revise it with better and higher resolution measurement devices?
Turns out it depends where, when and how you measure it… Today.
Tomorrow the speed of light might be different.
We ARE starting to postulate alternatives that completely avoid all the many, many dark and sinister holes in gravitational theory.
One day I’ll tell you how it ties in with denying the soul.
As defined today, the speed of light will not change. It may very well be refined further and further as better measurements are taken.
How should confidence in modeling and simulation be critically assessed?
You can model aerodynamics with a wind tunnel and end up with cars and aeroplanes that all look alike to a degree. But other areas are deeply sus and also prone to biases.
“[I]t is surely true that at least at times, disastrous decisions have been made through reliance on models that proved to be incorrect.” – “What’s Wrong with Economic Models?”
2001: Expert opinion was that Ferguson’s modelling was ‘severely flawed’.
In 2002 they predicted up to 50,000 ‘mad cow disease’ deaths in the UK from eating beef infected with Bovine Spongiform Encephalopathy (BSE). Only 170 people died.
In 2005 we had the bird flu outbreak. Ferguson predicted that up to 200 million people could be killed globally. He arrived at this figure by scaling up from the 1918 Spanish flu outbreak (the scaling arithmetic is sketched after this list). The actual worldwide death toll was only 282 people between 2003 and 2009.
In 2009/10 we had the swine flu (H1N1) ‘pandemic’. Ferguson predicted that 65,000 people would die in the UK. In reality, 457 people died in the UK.
• This was the team who were entrusted with the modelling for COVID-19.
Ferguson’s modelling has come in for numerous criticisms relating to bugs in the software, obsolete computer code, flawed assumptions, and a far too narrow selection of possible outcomes. When MP Steve Baker saw the analysis of Ferguson’s model, he tweeted: “As a software engineer, I am appalled.” (https://twitter.com/SteveBakerHW/status/1258165810629087232)
https://hert.org.uk/wp-content/uploads/2024/10/2.-Mathematical-modelling.pdf
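For what it’s worth, the “scaling up from 1918” arithmetic mentioned above is easy to reconstruct with approximate figures (roughly 50 million Spanish-flu deaths, world populations of about 1.8 billion in 1918 and 6.5 billion in 2005; all inputs approximate):

```python
# Rough reconstruction of a 1918-to-2005 population scale-up.
deaths_1918 = 50e6            # commonly cited estimate; ranges run higher
world_pop_1918 = 1.8e9
world_pop_2005 = 6.5e9

scaled = deaths_1918 * (world_pop_2005 / world_pop_1918)
print(f"Scaled-up death estimate: {scaled / 1e6:.0f} million")
# ~181 million; with higher 1918 estimates the figure exceeds 200 million,
# the order of magnitude quoted above.
```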
How should confidence in modeling and simulation be critically assessed? By what it’s being applied to, and whether any ‘guesses’, ‘assumptions’, ‘parameters’ or fiddle factors have to go into it.
Ferguson defined himself as utterly incompetent. Is he still publishing?
A professor of mathematical biology who specialises in the patterns of spread of infectious disease in humans and animals. He is the director of the Jameel Institute and the School of Public Health at Imperial College London. – Wikipedia
Repeatability. That is the final and true test, Repeatability.
Climastrology cannot replicate nuffin’
And some people, or models, can be repeatably wrong.
Not “whether” – how many.
Useless model.
Violates GAAP (Generally Accepted Accounting Principles) & the Laws of Thermodynamics.
Here’s Dr. Trenberth’s original heat budget chart:
http://www.atmo.arizona.edu/students/courselinks/spring07/atmo336s3/lectures/sec3/fig1-2.gif
It balances 342 W/m² in and 342 out. Some years later, around 2012, Trenberth changed that to “Net Absorbed 0.9 W/m²”.
The model didn’t do that; Trenberth did. It’s pretty easy to assume he did it because that’s the number he wanted.
The author, James D. Agresti, didn’t mention dishonesty as a problem with models.
Maybe I missed it.
Let’s see if I can get Trenberth’s 1st chart to show up
Where is the chart for Canada in the winter time? Is that chart for atmosphere with no wind?
Here’s the contact information for Dr. Trenberth
Email: trenbert@ucar.edu
Ask him.
I have.
He refuses to engage with anyone who denies GHE.
BTW
Can you refute my points?
Be the first.
Similar w notes.
This model (and others) is based on Fourier’s notion that the Earth is evenly warmed in an averaged pail of cosmic poo.
Area of the intercepting disc: π r²; area of the sphere: 4 π r².
1,368 W/m² over the disc ÷ 4 = 342 W/m² averaged over the sphere.
Even Pierrehumbert says this model is nfg.
Lit hemisphere is heated, sphere emits.
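For readers unfamiliar with the factor of four being mocked here, the standard averaging step goes like this (a textbook identity, not an endorsement of its physical adequacy):

```latex
% Earth intercepts sunlight as a disc but radiates from a full sphere,
% so the mean incoming flux is the solar constant divided by four:
\[
  \bar{F} = \frac{S_0 \, \pi r^2}{4 \pi r^2} = \frac{S_0}{4}
          = \frac{1368\ \mathrm{W/m^2}}{4} \approx 342\ \mathrm{W/m^2}
\]
```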
“When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.”
Lord Kelvin
The US, Brazil, India, Russia, Mexico and Peru account for half of the global C-19 deaths and a third of the cases. WTF kind of GLOBAL pandemic is that? The US leads the ENTIRE globe with 16% of the deaths and 15.6% of cases. And WTF is wrong with US medical care??
Since 1/20 there have been over 11.5 MILLION deaths from ALL causes in the US, 8.9% attributed to C-19. There were 12.6% unnatural deaths, i.e. suicide, drug O.D., murder, accident. And cancer (18.0%), heart disease (20.8%) and assorted C-19 comorbidity deaths (10.2%) were EACH a greater share than C-19 alone. Ordinary flu/pneumonia/respiratory each were under 1.5%.
You would not know that from the lying, race-baiting, fact-free, rabble-rousing, anti-democratic, fake-news MSM that rebranded the ordinary flu/pneumonia/respiratory deaths as a C-19 pandemic so they could stampede the electorate into deposing Trump.
Over 80% of C-19 CASES are among those UNDER 65.
75% of C-19 DEATHS are among those OVER 65 at 15% of the population.
Japan has the highest percentage of 65+ & Covid was a non-event. What do they know/do?
26% of C-19 DEATHS are among those OVER 85 at 2% of the population.
33% of C-19 DEATHS have been in hospice, nursing homes, residence, DoA.
Per CDC data.
What kind of widespread pandemic is that?
C-19 is just Mother Nature and the Grim Reaper culling the herd of the too old, too sick, too many, too crammed together as Medicare/Medicaid cash cows in contagious badly run (BLUE) eldercare warehouses.
More like “big lie” scam-demic foisted on the public by the lying, science illiterate, race baiting, fake news scumbag press and NWO politicians.
That vaccines prevent or reduce infection & illness has been known since 1796 and demonstrated for over 200 years.
mRNA attacks the virus directly when the patient is already ill and does nothing for the immune system.
Very nice and important information.
Oh, I lost the will to live one quarter of the way through it.
You got THAT far??
That’s supposed to tell you what a “lab study” is?
“May”, “is likely”, “may also”, “may be” – or may not be, of course.
Just more speculation and wishful thinking, to convince the ignorant and gullible to believe that the opinions of the “researchers” are useful and “scientific”.
A “lab study” is a wonderful tool for, say, drug manufacturers to assert their drug is amazingly good. Of course, the 25 studies which show that their drug was no better than a placebo are shuffled off to one side. Another problem with “lab studies” is that they are generally phrased in statistical terms, with outcomes such as “75% showed significant improvement…”, which tells you precisely zip about the reasons for the 25% who didn’t!
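That “shuffled off to one side” problem is easy to quantify: run enough trials of a useless drug and chance alone hands you publishable “successes”. A minimal simulation under the usual p < 0.05 convention (all data simulated; the significance test is a crude two-standard-error proxy):

```python
# With a truly useless drug, roughly 5% of trials still look "significant".
import random

random.seed(7)
TRIALS, N = 26, 200        # 26 trials, 200 patients per arm

false_positives = 0
for _ in range(TRIALS):
    drug = sum(random.random() < 0.5 for _ in range(N))      # responders
    placebo = sum(random.random() < 0.5 for _ in range(N))
    # Crude proxy for p < 0.05: arms differ by more than ~2 standard errors.
    if abs(drug - placebo) > 2 * (2 * N * 0.25) ** 0.5:
        false_positives += 1

print(f"'Significant' results from a useless drug: {false_positives}/{TRIALS}")
# Report only these, and the other trials vanish from view.
```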
Very, very occasionally, a gifted researcher will notice something unexpected during a study and pursue the reason for the apparent anomaly, finding something nobody realised before. But not often. Studies are generally arranged to support a pre-ordained conclusion.
Experiments are one thing. Lab studies are what you use when you can’t be bothered to perform rigorous experiments. Models are fine – if you know what you’re doing.
Nullius in Verba (“take nobody’s word for it”) – the motto of the Royal Society.
There is a lot to take in here, but I agree with the premise that a lot of what is thought to be science today is junk produced by people with a biased agenda that they want us all to follow. Two of the worst areas are climate and nutrition. The plant-based diet is promoted in nutrition today using so-called scientific methods, and they even want us to follow this diet to save the planet, whatever that means. I followed a low-carb diet in order to reverse my type 2 diabetes, and I have no intention of giving up meat, whatever they want. A lot of research is paid for by those who have a vested interest in certain things being true, and this is the case for both climate and nutrition. I think science is generally poor today compared to what it used to be, and how we reacted to the recent pandemic is an example of this. The consequence is that I no longer have any trust in what scientists are telling us.