Guest Essay by Kip Hansen — 22 June 2022
Why do we have so many wildly varying answers to so many of the important science questions of our day? Not only varying, but often directly contradictory. In the health and human diet field, we have findings that meat/salt/butter/coffee/vitamin supplements are good for human health and longevity (today…) and simultaneously or serially, dangerous and harmful to human health and longevity (tomorrow or yesterday). The contradictory findings are often produced through analyses using the exact same data sets. We are all so well aware of this in health that some refer to it as a type of “whiplash effect”.
[Note: This essay is almost 3000 words – not a short news brief or passing comment. It discusses an important issue that crosses science fields. – kh ]
In climate science, we find directly opposing findings on the amount of ice in Antarctica (here and here), both from NASA, and on the rate at which the world's oceans are rising, or barely rising, or not rising at all. Studies are being pumped out which show that the Earth's coral reefs are (pick one) dying and mostly dead, having trouble regionally, thriving regionally, or generally doing just fine overall. Pick almost any scientific topic of interest to the general public today and the scientific literature will reveal that there are answers to the questions people really want to know – plenty of them – but they disagree or directly contradict one another.
One solution to this problem that has been suggested is the Many-Analysts Approach. What is this?
“We argue that the current mode of scientific publication — which settles for a single analysis — entrenches ‘model myopia’, a limited consideration of statistical assumptions. That leads to overconfidence and poor predictions.
To gauge the robustness of their conclusions, researchers should subject the data to multiple analyses; ideally, these would be carried out by one or more independent teams. We understand that this is a big shift in how science is done, that appropriate infrastructure and incentives are not yet in place, and that many researchers will recoil at the idea as being burdensome and impractical. Nonetheless, we argue that the benefits of broader, more-diverse approaches to statistical inference could be so consequential that it is imperative to consider how they might be made routine.” [ “One statistical analysis must not rule them all” — Wagenmakers et al. Nature 605, 423-425 (2022), source or .pdf ]
Here’s an illustration of the problem used in the Nature article above:

This chart shows that nine different teams analyzed the UK data on Covid spread in 2020:
“This paper contains estimates of the reproduction number (R) and growth rate for the UK, 4 nations and NHS England (NHSE) regions.
Different modelling groups use different data sources to estimate these values using mathematical models that simulate the spread of infections. Some may even use all these sources of information to adjust their models to better reflect the real-world situation. There is uncertainty in all these data sources, which is why estimates can vary between different models, and why we do not rely on one model; evidence from several models is considered, discussed, combined, and the growth rate and R are then presented as ranges.” … “This paper references a reasonable worst-case planning scenario (RWCS).”
Nine teams, all with access to the same data sets, produced nine very different results, ranging from "maybe the pandemic is receding" (the R range dips below 1) to "this is going to be really bad" (R ranges from 1.5 to 1.75). How do policy makers use such results to formulate a pandemic response? The range of results is so wide that it merely restates the question itself: "Is this going to be OK, or is it going to be bad?" One group was quite sure it was going to be bad (and they seem to have been right). At the time, with these results, the question remained unanswered.
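One mechanical reason nine teams can get nine answers from the same case counts: R is not observed directly but inferred through a model, and even the simplest such model needs an assumed generation time. A minimal Python sketch, with a growth rate and candidate generation times invented for illustration (not actual UK figures):

```python
import math

# The SAME observed epidemic growth rate r implies different reproduction
# numbers R depending on the generation time Tg a model assumes.
# Uses the simple fixed-generation-interval relation R = exp(r * Tg).
r = 0.05  # assumed daily exponential growth rate of cases (illustrative)
estimates = {tg: math.exp(r * tg) for tg in (4.0, 5.5, 7.0)}
for tg, R in estimates.items():
    print(f"Tg = {tg} days -> R = {R:.2f}")  # R spans roughly 1.22 to 1.42
```

Same data, three defensible assumptions, three different R values; multiply that by every other modelling choice and the chart's spread is unsurprising.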
Wagenmakers et al. then say this:
“Flattering conclusion
This and other ‘multi-analyst’ projects show that independent statisticians hardly ever use the same procedure. Yet, in fields from ecology to psychology and from medicine to materials science, a single analysis is considered sufficient evidence to publish a finding and make a strong claim. “ … “Over the past ten years, the concept of P-hacking has made researchers aware of how the ability to use many valid statistical procedures can tempt scientists to select the one that leads to the most flattering conclusion.”
But researchers are not only tempted to select the procedures that lead to the "most flattering" conclusion; they are also tempted toward the conclusion that best agrees with the prevailing bias of their research field. [ ref: Ioannidis ].
Wagenmakers et al. seem to think that this is just about uncertainty: “The dozen or so formal multi-analyst projects completed so far (see Supplementary information) show that levels of uncertainty are much higher than that suggested by any single team.”
Let’s see where this goes in another study, “A Many-Analysts Approach to the Relation Between Religiosity and Well-being”, which was co-authored by Wagenmakers:
“Summary: In the current project, 120 analysis teams were given a large cross-cultural dataset (N = 10,535, 24 countries) in order to investigate two research questions: (1) “Do religious people self-report higher well-being?” and (2) “Does the relation between religiosity and self-reported well-being depend on perceived cultural norms of religion?”. In a two-stage procedure, the teams first proposed an analysis and then executed their planned analysis on the data.
Perhaps surprisingly in light of previous many-analysts projects, results were fairly consistent across teams. For research question 1 on the relation between religiosity and self-reported well-being, all but three teams reported a positive effect size with confidence/credible intervals excluding zero. For research question 2, the results were somewhat more variable: 95% of the teams reported a positive effect size for the moderating influence of cultural norms of religion on the association between religiosity and self-reported well-being, with 65% of the confidence/credible intervals excluding zero."
The 120 analysis teams were given the same data set and asked to answer two questions. While Wagenmakers calls the results "fairly consistent", what the results show is that they are just not as contradictory as the Covid results. On the first question, 117 teams found a "positive effect size" whose CI excluded zero. All of these teams agreed at least on the sign of the effect, though not on its size. Three teams found an effect that was negative or whose CI included zero. The second question fared less well: while 95% of the teams found a positive effect, only 65% had CIs excluding zero.
Consider such results for the effect of some new drug: the first question looks pretty good, despite great variation in the size of the positive effect, but on the second question roughly a third of the analysis teams reported CIs that included zero, meaning their results could not be distinguished from no effect at all. With such results, we might be "pretty sure" that the new drug wasn't killing people, but not at all sure it was good enough to be approved. I would call for more testing.
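The distinction between a CI that excludes zero and one that includes it can be sketched in a few lines of Python. The two invented samples below have the same mean effect; only their variability differs:

```python
import statistics as st

def ci95(xs):
    """Approximate 95% confidence interval for the mean (normal approx.)."""
    m = st.mean(xs)
    se = st.stdev(xs) / len(xs) ** 0.5
    return m - 1.96 * se, m + 1.96 * se

# Two hypothetical sets of per-subject effect measurements, both with
# mean effect 1.0 (all numbers invented for illustration).
tight = [0.8, 1.1, 0.9, 1.2, 1.0, 0.9, 1.1, 1.0]
noisy = [3.0, -2.0, 4.0, -1.5, 2.5, -0.5, 3.5, -1.0]

lo, hi = ci95(tight)    # interval excludes zero: effect distinguishable from null
lo2, hi2 = ci95(noisy)  # interval includes zero: "no effect" cannot be ruled out
```

Both samples report the same positive effect size; only the first supports a claim that the effect is real.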
But wait …. can’t we just average the results of the 120 teams and get a reliable answer?
No, averaging the results is a very bad idea. Why? Because we do not understand, at least at this point, why the analyses arrived at such different results. Some of them must be "wrong" and some of them may be "right", particularly among results that contradict one another. In 2020, it was simply incorrect that Covid was receding in the UK. Should the incorrect answers be averaged into the maybe-correct answers? If four drug analyses say "this will harm people" and six analyses say "this will cure people", do we give it a 60/40 and approve it?
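A toy calculation makes the point; the effect values below are invented stand-ins for the drug example:

```python
# Four analyses find harm (effect -1), six find benefit (effect +1).
# All numbers are invented for illustration.
results = [-1.0] * 4 + [+1.0] * 6

avg = sum(results) / len(results)      # 0.2: a "mild benefit" that NO
                                       # analysis actually reported
spread = max(results) - min(results)   # 2.0: the disagreement the average hides
print(avg, spread)
```

The average lands on an answer none of the ten analyses produced, while throwing away the only robust finding, namely that the analyses fundamentally disagree.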
Let’s look at a sports example. Since soccer is the new baseball, we can look at this study: “Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results”. (Note: Wagenmakers is one of a dizzying list of co-authors). Here’s the shortest form:
“Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship.”
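A hedged sketch of how this happens: one common analytic choice, whether to pool data or stratify it, can move an odds ratio on the very same counts. The 2x2 tables below are invented, not the paper's data:

```python
# Odds ratio for a 2x2 table: rows = dark/light skin tone,
# columns = red card yes/no. Counts below are invented for illustration.
def odds_ratio(a, b, c, d):
    """(dark_red, dark_no_red, light_red, light_no_red) -> odds ratio."""
    return (a * d) / (b * c)

league_a = (10, 190, 20, 780)   # stratum 1 (hypothetical league)
league_b = (30, 370, 15, 185)   # stratum 2 (hypothetical league)
pooled = tuple(x + y for x, y in zip(league_a, league_b))

print("league A OR:", odds_ratio(*league_a))  # ~2.05
print("league B OR:", odds_ratio(*league_b))  # 1.00 (no effect)
print("pooled   OR:", odds_ratio(*pooled))    # ~1.97
```

Whether a team pools, stratifies, or adjusts for covariates is exactly the kind of good-faith choice the paper shows can swing effect sizes from 0.89 to 2.93.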



If you want to understand this whole Many-Analysts Approach, read the soccer paper linked just above. It concludes:
“Implications for the Scientific Endeavor: It is easy to understand that effects can vary across independent tests of the same research hypothesis when different sources of data are used. Variation in measures and samples, as well as random error in assessment, naturally produce variation in results. Here, we have demonstrated that as a result of researchers’ choices and assumptions during analysis, variation in estimated effect sizes can emerge even when analyses use the same data.
The main contribution of this article is in directly demonstrating the extent to which good-faith, yet subjective, analytic choices can have an impact on research results. This problem is related to, but distinct from, the problems associated with p-hacking (Simonsohn, Nelson, & Simmons, 2014), the garden of forking paths (Gelman & Loken, 2014), and reanalyses of original data used in published reports.“
It sounds like Many-Analysts isn’t the answer – many analysts produce many analyses with many, even contradictory, results. Is this helpful? A little, as it helps us to realize that all the statistical approaches in the world do not guarantee a correct answer. They each produce, if applied correctly, only a scientifically defensible answer. Each new analysis is not “Finally the Correct Answer” – it is just yet another analysis with yet another answer.
Many-analyses/many-analysts is closely related to the many-models approach. The following images show how many-models produce many-results:



[ Note: The caption is just plain wrong about what the images mean….see here. ]



Ninety different models, projecting both the past and the future, all using the same basic data inputs, produce results so varied as to be useless. Projecting their own present (2013), the 5-year running means of global temperature show a spread of 0.8°C, with all but two of the projections running higher than observations. This unreality widens to 1°C nine years into CMIP5's future, in 2022.
And CMIP6? Using data to 2014 or so (anyone know the exact date?) they produce this:



Here we are interested not in the differences between observed and modeled projections, but in the spread of the different analyses: many show results that are literally off the top of the chart (and far beyond any physical possibility) by 2020. The "Model Mean" (red-bordered yellow squares) is nonsensical, as it includes those impossible results. Even some of the hindcasts (projections of known data in the past) are impossible and known to be more than wrong: in 1993 and 1994 one model projects temperatures below −0.5°C, while another, in 1975-1977, hindcasts temperatures a full degree too high.
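The arithmetic problem with such a "Model Mean" is easy to demonstrate with invented numbers:

```python
import statistics as st

# Hypothetical ensemble of temperature anomalies (degrees C, invented).
# The last two members are "off the chart" and physically implausible.
anomalies = [0.20, 0.30, 0.35, 0.40, 0.45, 0.50, 1.40, 1.60]

mean_all = st.mean(anomalies)            # dragged upward by the outliers
mean_plausible = st.mean(anomalies[:-2]) # mean of the plausible members only
print(mean_all, mean_plausible)
```

Averaging in members known to be impossible does not dilute their error; it bakes a share of it into the headline number.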
A 2011 paper compared different analyses of possible sea level rise in five Nunavut communities (in Arctic Canada). It presented this chart for policymakers:



For each community, the spread of the possible SLR given is between 70 and 100 cm (29 to 39 inches), and for all but one locality the range includes zero. Only for Iqaluit is even the sign of the change (up or down) settled within the 95% confidence interval. The combined analyses are "pretty sure" sea level will go up in Iqaluit. But for the others? How does Whale Cove set policies to prepare for anything from a 29-inch drop in sea level to an 8-inch rise? For Whale Cove, the study is useless.
How can multiple analyses like these add to our knowledge base? How can policymakers use such data to make reasonable, evidence-based decisions?
Answer: They can’t.
The most important take-home from this look at the Many-Analysts Approach is:
“Here, we have demonstrated that as a result of researchers’ choices and assumptions during analysis, variation in estimated effect sizes can emerge even when analyses use the same data.
The main contribution of this article is in directly demonstrating the extent to which good-faith, yet subjective, analytic choices can have an impact on research results.” [ source ]
Let me interpret that for you, from a pragmatist viewpoint:
[Definition of PRAGMATIST: “someone who deals with problems in a sensible way that suits the conditions that really exist, rather than following fixed theories, ideas, or rules” source ]
The Many-Analysts Approach shows that research results, both quantitative and qualitative, are primarily dependent on the analytical methods and statistical approaches used by analysts. Results are much less dependent on the data being analyzed and sometimes appear independent of the data itself.
If that is true, if results are in many cases independent of the data even when researchers are professional, unbiased, and working in good faith, then what of the entire scientific enterprise? Is all of the quantified science, the type of science looked at here, just a waste of time, useless for making decisions or setting policy?
And if your answer is Yes, what is the remedy? Recall, Many-Analysts is proposed as a remedy to the situation in which: “in fields from ecology to psychology and from medicine to materials science, a single analysis is considered sufficient evidence to publish a finding and make a strong claim.” The situation in which each new research paper is considered the “latest findings” and touted as the “new truth”.
Does the Many-Analysts Approach work as a remedy? My answer is no – but it does expose the unfortunate, for science, underlying reality that in far too many cases, the findings of analyses do not depend on the data but on the methods of analysis.
“So, Mr. Smarty-pants, what do you propose?”
Wagenmakers and his colleagues propose the Many-Analysts Approach, which simply doesn’t appear to work to give us useful results.
Tongue-in-cheek, I propose the "Locked Room Approach", alternately labelled the "Apollo 13 Method". If you recall the story of Apollo 13 (or the movie), an intractable problem was solved by 'locking' the smartest engineers in a room with a mock-up of the hardware; with the situation demanding an immediate solution, they had to resolve their differences in approach and opinion to find a real-world answer.
What science generally does now is the operational opposite – we spread analytical teams out over multiple research centers (or lumped into research teams at a “Center for…”) and have them compete for kudos in prestigious journals, earning them fame and money (grants, increased salaries, promotions based on publication scores). This leads to pride-driven science, in which my/our result is defended against all comers and contrary results are often denigrated and attacked. Science Wars ensue – volleys of claims and counter-claims are launched in the journals – my team against your team – we are right and you are wrong. Occasionally we see papers that synopsize all competing claims in a review paper or attempt a meta-analysis, but nothing is resolved.
That is not science – that is foolishness.
There are important issues to be resolved by science. Many of these issues have plenty of data but the quantitative answers we get from many analysts vary widely or are contradictory.
When the need is great, then the remedy must be robust enough to overcome the pride and infighting.
Look at any of the examples in this essay. How many of them could be resolved by “locking” representatives from each of the major currently competing research teams in a virtual room and charging them with resolving the differences in their analyses in an attempt to find not a consensus, but the underlying reality to the best of their ability? I suspect that many of these attempts, if done in good faith, would result in a finding of “We don’t know.” Such a finding would produce a list of further research that must be done to resolve the issue and clarify uncertainties along with one or more approaches that could be tried. The resultant work would not be competitive but rather cooperative.
The Locked Room Approach is meant to bring about truly cooperative research: groups peer-review each other's research designs before the time and money are spent; groups agree in advance upon the questions needing answers; they agree upon the data itself, asking whether it is sufficient and adequate or whether more data collection is needed; and they agree which groups will perform which parts of the necessary research.
There exist, in many fields, national and international organizations like the AGU, the National Academies, CERN, the European Research Council and the NIH that ought to be doing this work – organizing cooperative focused-on-problems research. There is some of this being done, mostly in medical fields, but far more effort is wasted on piecemeal competitive research.
In many science fields today, we need answers to questions about how things are and how they might be in the future. Yet researchers, after many years of hard work and untold research dollars expended, can’t even agree on the past or on the present for which good and adequate data already exists.
We have lots of smart, honest and dedicated researchers but we are allowing them to waste time, money and effort competing instead of cooperating.
Lock ‘em in a room and make ‘em sort it out.
# # # # #
Author’s Comment:
If only it were that easy. If only it could really be accomplished. But we must do something different or we are doomed to continue to get answers that contradict or vary so widely as to be utterly useless. Not just in CliSci, but in medicine, the social ‘sciences’, biology, psychology, and on and on.
Science that does not produce new understanding or new knowledge, that does not produce answers society can use to solve its problems, or that does not correctly inform policy makers is USELESS, and worse.
Dr. Judith Curry has proposed cooperative efforts of this kind in the past, such as listing outstanding questions and working together to find the answers. Some efforts are being made, with Cochrane Reviews, to find out what we can know from divergent results. It is not all hopeless, but hope must motivate action.
Mainstream Climate Science, those researchers that make endless proclamations of doom to the Mainstream Media, are lost on a sea of prideful negligence.
Thanks for reading.
# # # # #
Very informative. Thanks
Robinson ==> Thank you. I write about things I find interesting and am glad you found it so as well.
Same here. This article really brings to light (for those who are not climate scientists) the potential issues with using a simple average of models to get a "better" or "more accurate" estimation of what the future holds.
Too bad the scientific method is not honored by politicians. The real issue is that politics must not get involved in establishing what is and what is not science.
The model of “for the people” has fully transitioned to “for the politicians and their keepers.”
Hoping that the supreme court decisions will lead back to “for the people” governance, as much as clarifying the specifics of the individual cases.
Thank you Clarence Thomas.
The scientific method requires data.
There are no data for the future climate.
Predictions of the future climate rely on unproven theories
and speculation, with a horrible “batting average” so far.
Richard ==> Yes, there are no data about the future. Never….until it is the past.
Still plenty of confusion over the past climate and exactly what every climate variable did in the past. Even with good data since at least 1979, for temperature, and since 1958, for CO2.
For the predictions of environment doom, 100% of which have been wrong since the 1960s, maybe it’s long past time to start ignoring predictions?
As a child, my parents taught me that predictions are usually wrong — so don’t believe them. That may be the most valuable fact they ever taught me. That’s why I’ve only made one climate prediction, way back in 1997: “The climate will get warmer,
unless it gets colder.”
Richard ==> Yes, things will stay the same until they change….
“The real issue is that politics must not get involved in establishing what is and what is not science.” I’d add “or engineering”
Treating studies, or models, as black boxes is not science. Rather, every single step must be subject to evaluation and analysis – and is either right or wrong. If that should be "too complicated" because there are too many uncertainties, then you don't have science.
Yes, it's called a sensitivity analysis, and it should be done for every variable parameter. It's very hard, very time consuming, and a thankless task, but it has the great benefit of reducing the number of variable parameters.
That can’t be emphasised strongly enough.
It’s how Lorenz came up with “sensitivity to initial conditions”, more commonly known as Chaos Theory.
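A minimal sketch of the one-at-a-time sensitivity analysis described in the comments above; the "model" and its coefficients are invented stand-ins for a real simulation:

```python
# One-at-a-time (OAT) sensitivity analysis on a toy three-parameter model.
# The linear model below is purely illustrative; a real simulation would
# replace model().
def model(a, b, c):
    return 2.0 * a + 0.1 * b + 5.0 * c

base = {"a": 1.0, "b": 1.0, "c": 1.0}

def sensitivity(param, delta=0.1):
    """Output change from bumping one parameter by 10%, others held fixed."""
    bumped = dict(base)
    bumped[param] *= 1 + delta
    return model(**bumped) - model(**base)

ranked = sorted(base, key=sensitivity, reverse=True)
print(ranked)  # 'c' dominates; 'b' barely matters
```

Here parameter 'b' moves the output so little that it could be frozen at its base value, which is exactly the "reducing the number of variable parameters" benefit the comment describes.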
E. Schaffer ==> Every step is not necessarily "either right or wrong". But you are absolutely right that every step must be subject to evaluation and analysis by trained and knowledgeable people, and not just people in the field of study. Evaluation should include statisticians, data experts, engineering experts and others, depending on what the study encompasses.
And there are almost always “too many uncertainties” that have to be considered and dealt with in a transparent manner.
It is not a theoretical consideration, it is about what keeps “climate science” alive. If you take on all the little details and assumptions you will find mistake after mistake. Still one of my favorites is the surface emissivity = 1 assumption, when the hemispheric spectral emissivity of water is only 0.91. This “detail” alone shrinks the GHE by 35W/m2, making a huge difference in terms of the attribution to GHGs, and climate sensitivity too.
It is a pivotal mistake incorporated in ALL models. So why would you look at different model outcomes, and compare and average them, if you know they all share the same faulty assumption? It is a joke.
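For readers who want to check the arithmetic behind the commenter's 35 W/m² figure (the arithmetic only, not the attribution argument built on it), the Stefan-Boltzmann law at a typical mean surface temperature gives:

```python
# Arithmetic check of the ~35 W/m2 figure in the comment above:
# difference in emitted flux between emissivity 1.00 and 0.91 at ~288 K.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0          # K, rough global mean surface temperature

flux_black = SIGMA * T**4        # emissivity 1.00 -> ~390 W/m2
flux_water = 0.91 * flux_black   # emissivity 0.91 -> ~355 W/m2
print(round(flux_black - flux_water, 1))  # ~35.1 W/m2
```

So the 35 W/m² number does follow from those two emissivity assumptions; whether 0.91 is the right effective value for the real surface is a separate question.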
Kip: Yes, a lot of the problem comes down to lack of statistical practice discipline. Of course the selection of differing analytical methods produces differing results, but that is largely because many methods are not appropriate for the specific task or data. As Ross McKitrick and Steve McIntyre have pointed out in many cases, analysts frequently seem to not know or simply ignore rules that must be met to apply a specific analysis method correctly. And worse yet, some analysts just make up their own methods to get the results they want without assuring their method is valid.
Rick C ==> Yes, I agree but there is also the simple and shocking finding that serious, honest, well-intentioned, knowledgeable, well-trained analysis groups get widely variant answers to the same question with the same data. Not just a little different, we expect that, but often 180 degrees different — little and big effects, plus and minus effects.
Most “climate scientists” are government bureaucrats, working on government grants and / or paid by universities. They are paid to make scary climate predictions. So they make scary climate predictions. Creating fear is a leftist strategy for ramping up government powers. It works. Covid fears worked even better than climate fears. Government bureaucracies and universities are staffed mainly with leftists. They get the scary climate predictions they pay for.
Leftist Politics + Science = Leftist Politics
What I found remarkable is that the only model that did track temperatures fairly accurately, the Russian model, made different assumptions as to the effects of GHGs.
The Russian people are already oppressed by their government which doesn’t need the excuse of climate change to implement centralized control and thus their scientists can be more objective.
Tom ==> The Russian model has consistently produced the most realistic output, but the other groups fail to cooperatively investigate why it does so much better. A huge failure of the modelling world.
Hansen
Do you really think accurate predictions are a goal?
They are not.
I'm surprised the Russian INM model was not affected by Ukraine War anti-Russia sanctions, so the IPCC could completely ignore it.
Richard => Have no idea about the personal goals of modellers — but models that turn out to be accurate when tested against reality are surely better than models that are obviously just plain WRONG. (Ref: Gavin Schmidt)
Any model could add a fudge factor to arbitrarily cut the model's projected warming rate in half. The result would appear to be the most accurate model. But no one does that. And the least inaccurate Russian INM model gets no individual attention, except here.
The models over predict the warming rate to scare people.
Their past inaccuracy is almost absent from the mass media. Modelers apparently have no incentive for accurate predictions. They predict what they are paid to predict.
Just like the scientists working for cigarette companies were paid to claim cigarettes were safe.
And the scary predictions are used to make us do what they WANT us to do
re: “The models over predict the warming rate to scare people.”
Another blind parroting of a totally false “skeptical” echo chamber talking point. The models have been quite accurate.
Study Confirms Climate Models are Getting Future Warming Projections Right
https://climate.nasa.gov/news/2943/study-confirms-climate-models-are-getting-future-warming-projections-right/
Climate models reliably project future conditions
National Academies of Science, Engineering, and Medicine
https://www.nationalacademies.org/based-on-science/climate-models-reliably-project-future-conditions
Evaluating the Performance of Past Climate Model Projections, Geophysical Research Letters, Oct 2019
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL085378
We find that climate models published over the past five decades were skillful in predicting subsequent GMST changes, with most models examined showing warming consistent with observations, particularly when mismatches between model-projected and observationally estimated forcings were taken into account.
Note that the Russian INM model never gets special attention.
It gets binned with dozens of others in CMIP.
Can you imagine meteorologists ignoring their apparently most accurate forecasting model, and instead using an average of models? I can't imagine that, but it happens in modern "climate science".
In addition, the CMIP6 models are predicting even faster global warming than prior models, which already over-predicted global warming. The models have not become more accurate in the past 40 years: they appear to be less accurate than ever!
The reason for that is simple.
Accurate climate predictions are not a goal.
Scary climate predictions are the goal.
Climate models predict what they are programmed to predict.
They could be made to appear more accurate in five minutes.
Just divide the predicted global warming rate in half with a fudge factor. Think about why that has never happened in the past 40 years.
Because models are not intended for accurate predictions.
They never were.
Climate models are merely propaganda tools to support the predictions of global warming doom that first surfaced in late 1950s science papers. One well known scientist of that time was Roger Revelle. Al Gore’s favorite scientist.
Richard ==> It is true that climate models were not intended for "accurate" predictions. Edward Lorenz, the father of all climate modelling, told them that was impossible in 1961.
Perhaps because the Russians know there is no climate emergency, and are happy to let most of the rest of the world destroy their own way of life.
A bad choice of illustration: very clear that any climate skeptic can’t see an elephant in the room…
Oh the irony…
ROTFLMFAO !!
Griffo & people who’ve had one too many see the same elephant!
You mean Lukes, I guess. 🙂
A typical characteristic of the sitting-on-the-fence kind of types, who can never ever manage to call a spade a spade, even the best of them, no matter what.
Even when the wolf is staring them straight in the face from just a foot or so away, they still remain skeptics, because some footprint there in the snow is not quite clear enough to certainly be that of a wolf.
🙂
cheers
An excellent choice of illustration: very clear that Griff can’t see an elephant in the room…
Griff typifies another problem: the bulk of society, being science-illiterate and innumerate, simply cannot follow even simple scientific arguments about competing analyses, and is therefore left viewing science as arguments from authority. They merely pick the authorities they agree with, or have them filtered by their news source of choice, and voila! They go about their business imagining they are in possession of the truth when studies like this show clearly they are not.
Griff’s clueless own goal comment here demonstrates this perfectly!
With the current levels of uncertainty comparing the models, perhaps it is an elephant, or maybe a mouse.
… or nothing at all
I would have gone with: we can't tell if the elephant needs a new puffer jacket or a bikini.
Heck, at least one model has it being a blue whale.
Yea, the Grifter is here.
We all know the Grifter is really Charles Rotten’s alter ego.
“Grifter” comments pop up when the rate of comments slows down.
Charles gets everyone excited, and the comments increase.
Charles misses the excitement when he was the lead singer
of the Sex Pistols, then known as Johnny Rotten,
because Charles Rotten sounded too formal for rock music.
Moderator Bait
Yep, with 4 variables I can get the elephant to wiggle its trunk.
True.
https://skepticalscience.com/scafetta-widget-problems.html
“A bad choice of illustration: very clear that any climate skeptic can’t see an elephant in the room…”
What’s very clear to me is that there are no “climate skeptics” in the room. And quite possibly none on the planet.
Great article, thanks for putting it together. Very thought provoking. I've had a bit of experience of science over the past 15 years and I agree with most of this. But what I think is lacking is an appreciation for what science is, in the Karl Popper sense of a scientific statement: the falsification of a risky hypothesis, or not. The problem with climate science is that, generally speaking, it is not science. If the hypothesis is that atmospheric CO2 controls global (or any sub-global) temperature, then this has been falsified. After that it's all politics.
I invented a quote of my own after a particularly unhelpful review: there are an almost unlimited number of ways in which my hypothesis can be falsified, but your opinion isn’t one of them.
So climate science needs to formulate a set of falsifiable hypotheses before it can produce scientific statements.
Jay ==> I really like this: "There are an almost unlimited number of ways in which my hypothesis can be falsified, but your opinion isn't one of them."
By the way, Wm Briggs has some interesting things to say about Popper and Falsification.
Kip Hansen
If I may.
“The null has been falsified”.
Do you know what the above means in consideration of “null” “hypothesis”.
Yes, as put, it is a bit bizarre, and it opens the way to a lot of polluting angles that can be argued to diminish the meaning of the
"null hypothesis".
(as many these days are not much persuaded by the scientific method)
cheers
whiten ==> Don’t quite get what you are saying here….are you referring to the Briggs piece?
Yes indeed Kip, the one you linked to.
Just trying to see or understand your take of it in consideration and regard of “null” “hypothesis”.
You must have an understanding point regarding it.
Again, if you may.
cheers
Whiten ==> There are whole books written on Popper’s Falsifiability and the Null Hypothesis concept. This essay, as you know, is not about Popper or the Null Hypothesis approach.
But, you have sparked my interest and I will add it to my (nearly unending) list of essays to write.
Thanks.
Ok, Kip, but it was your link, which is supposed to contribute to better understanding… else why would you have linked to that one?
And my only intention in addressing it is that, from my point of view, it could have helped directly with this one article of yours here.
To be honest your article is a very good one, and valuable, but it still addresses a given condition from an ideal point of relation.
A kind of naivete, so to speak.
And, by the way, your link is to a Briggs article, not Popper’s… as I understand.
Now as I started this with you, let me give my understanding of:
“The null has been falsified”.
It means that the null condition of a hypothesis is falsified, therefore the given hypothesis has gained extra value, moving from 50/50 up to even 90-95% validity… depending on the clarity and the determinative strength of that condition for the hypothesis… but still it means a significant gain in validity.
For example, for the Climate Change hypothesis the null has not been falsified, as still there is not a clear observation of an ongoing Tropical Hot Spot… so that hypothesis’s validity is 50/50, no better than a coin throw.
On the other hand, if an ongoing Tropical Hot Spot were clearly observed, then we could state:
“The null has been falsified”, in regard to the Climate Change hypothesis.
The Tropical Hot Spot is a very determinative null condition for the CC hypothesis.
The same can be applied in the medication arena.
If for whatever reason a conclusion about the efficacy of a medication from a given study cannot be replicated, even by the same team, then:
“The null has not been falsified”… in regard to the efficacy of that given medication…
therefore the efficacy of the given medication is 50/50 valid, no better than the throw of a coin…
regardless of clear, significant, valid and replicable evidence of its safety.
Ok, hope you do not mind this long reply. 🙂
cheers
whiten ==> Not at all — have at it — you are commenting with your honest opinions and trying to contribute something. A good thing.
Kip Hansen
Thank you Kip.
Appreciated.
🙂
cheers
Kip –> Thanks, yes I had a quick look at that. Interesting, and I guess one has to be careful of absolutes, and believing p values are unconditional. In the end we all have to deal with opinion, ours and others’, in some way or other, a ‘rational belief’ in Keynes and Popper’s language. At least Popper tried to lay down a consistent philosophical approach, a reproducible and logical system. It’s a good place to start.
Sure it can be shown to be inappropriate in some cases, or used inappropriately in others, but when somebody like Popper has laid down such an authoritative text, there’s always somebody willing to publish “why Popper was wrong” like “why Einstein was wrong”, and I’m sure both great men would have been humble in the face of such criticism, before they would have eviscerated such usually puny arguments – had they still been alive – but sadly they aren’t still here to explain. Also, I am a great fan of Fisher, and have used his original stuff directly in my work – it works – and it has proven extremely effective in an uncountable number of science and engineering applications – again, it’s a bit rich to slag the great man off (as Wm Briggs does 🙂) without due respect for his position in historical context, and without due respect for the sheer weight of useful applications of his work. Calling the main proposition of Popper and Fisher ‘silly’ in the first paragraph really seals it for me with Briggs’s piece. Popper might have been many things but he wasn’t silly. I could have a crack at explaining why Briggs is wrong, but here’s probably not the place.
We should also mention Bayes, who was discussed by Popper, but well before the real emergence of Bayesian Stats (Gelman Style!), and although I think many have tried, none have really put Bayesian Stats into a comprehensive philosophical system so well as Popper did for hypothesis testing. But I’m no expert and I might be wrong.
Anyhow, it’s all rather funny when you think about climate science, co2, and the 97% – which is so far away from any of this logic stuff it doesn’t need reference to any of it.
Jay ==> Briggs is a statisticians’ statistician. He wrote the book, so to speak (at least one of them.)
I wrote a piece for him some time ago: “The Kind-Hearted Magician“….and he followed up with “Solution To The Kind-Hearted Magician — REVEALED!” in which he exposes my fraud…..
Kip ==> Thanks yes I read those. That is such a tricky problem, but a great illustration. I don’t care what his creds are – if he calls Popper ‘silly’ he is being purposely provocative. He started that other piece with his own bit of distraction and obfuscation, new wording, ‘truthify’ etc.; your problem shows how tricky language can be. But I think I’m too wise to duel with the dueling master. I’ve done a few national infrastructure public hearings for billion dollar projects, defending my science of fish, and I’ll take Popper in to bat with me any day over some upstart provocateur… 🙂
Jay ==> “purposely provocative” is a pretty good description for Briggs — however, he is being serious and honest at the same time. He gives “Popper through the lens of statistics”.
Well, it looks more like he was a true provocateur and has mischaracterised the approach.
The Briggs proposition in the article is that falsifying the null hypothesis automatically makes the alternative correct.
The technique of formulating and attempting to falsify a null hypothesis is far more along the lines of successive approximation, rather akin to “binary chops” in IT, or the technique we were taught in high school for finding roots.
Once a null hypothesis has been falsified, a new null hypothesis is formulated to further narrow down the field.
Cocky ==> I am planning on doing a deep dive on the Briggs/Popper issue in the next couple of weeks. Stay tuned.
I look forward to that. Your posts are always interesting.
Marilyn vos Savant and Wm Briggs are both wrong, and have been fooled by the verbal misdirection in the way the problem is posed.
There are a number of ways of demonstrating this, of greater or lesser complexity. Drawing a full decision tree is the most comprehensive, but it is very easy to fool oneself and miss a branch (damhik,ijk,ok)
The shortest way to demonstrate is to point out that Sam (or Monty in the original) has knowledge, so will always reveal a joker (or goat, or…). This is not probabilistic – he will always do it. This means that starting with 3 cards (or doors) was just misdirection – the probability of picking the winner was always 0.5 because the host would always remove one of the adverse outcomes. The first stage with 3 cards (or doors) was just showmanship. The showmanship can be extended to any number of revelation steps; it makes no difference to the real end game.
As an aside, I would never take a bet where I thought my chance of winning was 1/3 but the payout was 1/2.
As another aside, the above analysis assumes a fair game, where there really is an ace
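The sit-versus-switch question can also be checked empirically. Here is a minimal Monte Carlo sketch (an illustration, not anyone's posted code), assuming the standard rules described in the thread: one ace among three cards, the host knowingly reveals a joker from the unpicked cards, and the game is fair.

```python
import random

def play(switch, rng):
    """One round of the three-card game: the host knowingly flips a joker."""
    prize = rng.randrange(3)   # where the ace is
    pick = rng.randrange(3)    # player's first choice
    # Host reveals a card that is neither the pick nor the ace
    # (choosing at random when both unpicked cards are jokers).
    revealed = rng.choice([i for i in range(3) if i not in (pick, prize)])
    if switch:
        pick = next(i for i in range(3) if i not in (pick, revealed))
    return pick == prize

trials = 100_000
rng = random.Random(42)
sit_rate = sum(play(False, rng) for _ in range(trials)) / trials
rng = random.Random(42)        # same deals for a fair comparison
switch_rate = sum(play(True, rng) for _ in range(trials)) / trials
print(f"always sit: {sit_rate:.3f}   always switch: {switch_rate:.3f}")
```

Under these rules the rates land near 1/3 for sitting and 2/3 for switching; under the "always 0.5" reading one would expect both near 0.5, so the simulation is a direct check of that claim.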
Cocky ==> (And true to your handle….) Probability does not exist outside of the proposition posed by a human.
I like your logical approach…..but….
If one accepts that the proposition is exactly as posed by Briggs/vos Savant — three cards, one Ace (car as prize), two jokers (goats), and the procedure and rules precisely as used on the Monty Hall show — then Briggs/vos Savant are correct. But ONLY because it is their proposition, thus the probability is exactly as REQUIRED by that proposition.
However, if one allows the Kind-Hearted Magician to CHANGE the proposition, then the probability changes because the proposition changed.
The misdirection of having two identical booby prizes is particularly subtle, and plays on our STEM backgrounds which lead us to look for symmetry and to factorise/simplify.
It becomes more apparent if the booby prizes are distinct.
Everything prior to the final 2 cards is just misdirection. All that matters is the end game.
I first read about the Monty Hall problem in Analog many years ago, and used its analysis as my train commute entertainment for a couple of weeks. The vos Savant / Briggs result did seem to be confirmed, but a deeper dive showed that the misdirection had led me to miss branches of the decision tree.
No, the probability just is. But we’re getting a bit Zen now.
The more complete, but slower, approach is to draw out a full decision tree – no probabilities, every possible move.
To avoid the misdirection, each of the jokers needs to be distinct. Let’s call them Jack and Heath.
It grows big quickly, so for manageability, do one tree for Ace, Jack, and Heath.
The first move for each card is for P1 to select it.
The second move is for P2 to turn over either J or H.
For the J or H trees, P2 must turn over the other joker, so there is only a single branch.
However, for the A tree, there are now 2 branches, 1 each for J and H.
The third move is for P1 to sit or switch.
The end result is that A has twice as many leaf nodes as either J or H, so adding up the leaf nodes gives the same counts for sit and switch.
The misdirection of using 2 visually/verbally identical adverse outcomes along with our tendency towards symmetry and simplification makes it very easy to miss the fact that there are 2 branches for move 2 on the A tree.
For bonus marks, one can add probabilities to each move. Assume mandatory moves have a probability of 1, and where an alternative is available that all moves are equiprobable.
It doesn’t change the outcome, but the numbers are tidier.
Let’s hope this comes out the way it looks while drafting it:
The end result is that the Dupe (player 1) ends up with the ace 1/6 + 1/12 + 1/12 + 1/6 of the time, sit or switch.
This assumes a fair game, and equiprobable outcomes where a choice is available. In the one case where P2 (Sam or Monty) has a choice, it doesn’t make any difference to the final sum of A likelihoods, but it does to the (irrelevant here) J and H outcomes.
My STEM training led me to draw a symmetric decision chart 🙂
The format didn’t come through, so edit and hope it will this time. Maybe format it as code
Cocky ==> You could write an essay just for grins and send it to me — or even just a few paragraphs in MS Word format….my first name at i4.net
Thanks. I might do that.
It’s just as well that the formatting didn’t work, because it dawned on me that I’d once again fallen into the same trap of working forward from card selection and assigning probabilities to moves.
This problem needs to be worked back from outcomes (holding each card) and finding all possible paths to reach that outcome.
There are 8 possible paths, 4 of which lead to the ace, and 2 each leading to Jack and Heath.
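The path count can be cross-checked by brute force. This sketch (an illustration, not the spreadsheet mentioned elsewhere in the thread) enumerates the full tree with two distinct jokers, named Jack and Heath as above, and also attaches the branch probabilities suggested earlier: mandatory moves carry probability 1, free choices are equiprobable.

```python
from fractions import Fraction

CARDS = ["Ace", "Jack", "Heath"]   # two distinct jokers, as suggested

paths = []
for pick in CARDS:                                    # move 1: player picks
    host_options = [c for c in CARDS if c != "Ace" and c != pick]
    for reveal in host_options:                       # move 2: host flips a joker
        # 1/3 for the pick, split equally over the host's available reveals
        p = Fraction(1, 3) / len(host_options)
        for action in ("sit", "switch"):              # move 3: sit or switch
            final = pick if action == "sit" else \
                next(c for c in CARDS if c not in (pick, reveal))
            paths.append((action, final, p))

print("leaf nodes:", len(paths))                                   # 8
print("leaves holding the Ace:",
      sum(1 for a, f, p in paths if f == "Ace"))                   # 4
for action in ("sit", "switch"):
    chance = sum(p for a, f, p in paths if a == action and f == "Ace")
    print(f"P(Ace | always {action}) = {chance}")
```

Counting leaves does give 8 paths with 4 holding the ace, but the leaves are not equally likely: the two reveal branches under an ace pick each carry half the weight of the single forced branch under a joker pick, so the weighted totals come out 1/3 for sitting and 2/3 for switching.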
Cocky ==> That idea is easily done in something as simple as MS Excel…..
Somewhere in my essay files I have an Excel spreadsheet full of all that stuff….I may reprise it in the Briggs/Popper essay.
Cocky ==> you and Briggs can have that go-round…..
Someone once accused me of becoming a statistician and I made them apologize….
Not my intention at all. It’s a bit of Australian inverse snobbery, and I started using this nom de plume on Jo Nova’s blog.
The term was originally used by the well to do looking down their noses at cereal farmers “scratching about in the dirt like cockies”.
The white cockatoo is a native Australian bird which eats grass seed amongst many other things.
I see it as an overreliance on modeling in the office and not enough field work to build a good database to build credible models around.
Tommy ==> Oddly, what those studying the Many-Analysts Approach have found is that results are more method dependent than data dependent……even analysis groups using the exact same data sets get wildly varying or even contradictory results.
Kip—as you are likely no doubt aware, contrary to the climate science common misconception, averaging does not reduce measurement uncertainty, except in limited special cases.
Great article.
Monte ==> Yes, oh so aware and battling for a greater understanding of that point for years.
Thank you for the compliment, I found this topic both interesting and enlightening.
I would support Kip in his “Apollo 13 lock em up” suggestion, especially if good scientists were in the mix. In the science of Geology there traditionally were two types of geologists, those with their head-down (focused on generating data) and heads-up (immersed in advancing grand theories, commonly associated with such vigorous arm-waving that it appeared they might actually fly). Now we enter the Science-For-Money Era, where data can be tortured as needed and the heads-up scientists can get large amounts of funding by formulating the politically correct finding. The only change that appears possible (not likely) to me is legal penalties for knowingly, or recklessly, advancing ideas harmful to the well-being of humanity, for monetary gain.
Ron ==> Scientists working on the same question need to get together to find the reality underlying their object of research — to find why they all find such different results from the same data (or the same real world!)
Perhaps if one research team arrives at a completely different result than another team, the two teams should get together and figure out why the results were so different. That simple act by itself would result in a level of understanding about the problem greater than the two studies combined.
ifhan ==> Yes — that is exactly the type of activity I recommend. They ought to be curious as to why their results disagree so widely…..
Good intentions to get them together, but, since one side has a vested interest in the outcome, there won’t be any agreements.
Like climate models, just average the results. No need to fix any mistakes, the average MUST be right.
I just wrote something destined for some consultants involved in “Saving the Bay” from its “bad health” mostly damaged by last year’s freeze that they apparently ignored or were unaware of. Among such real facts, I pointed out that the health community supported having second or more opinions, but also said that science is done neither by consensus nor committee. I also wrote that more homework is required now, but still necessary.
H.D. ==> I’d love to see your piece on the bay. If you wish, you can email it to me at my first name at i4.net.
There’s a crowd called Seabin in Australia claiming in TV ads the most fanciful outcomes of its little pot nets in Sydney Harbour … and also showing marine fish entrapped in plastic bags … except that the bagged fish look ‘deceptively’ exactly the same as an aquarium fish collector’s work in progress !
The problem is that everything, including science, is now political science aka politics aka BeeEss, where “winning isn’t everything, it’s the only thing”!
Ike’s second warning: “…that public policy could itself become the captive of a scientific-technological elite.”
https://wattsupwiththat.com/2009/02/21/ikes-second-warning-hint-it-is-not-the-military-industrial-complex/
(interesting comments from 13 yrs ago, before Climategate)
Government experts are just a tool to achieve political goals.
https://mises.org/wire/why-progressives-love-government-experts
Climategate showed how this was being done in climate science. The Wuhan flu epidemic showed the same thing done in medicine. I really feel sorry for people in those fields that have been hijacked the most.
This clown show isn’t funny any more, if it ever were. Now Biden is saying that if he could only just spend more money (a trillion or so), then he can tamp down on inflation.
LGB now has proof inflation’s whipped: the DNC has slashed the price of a photo-op with Giggles from $15,000 to $5,000!
https://twitter.com/RussianMeddler/status/1539246539792588814?ref_src=twsrc%5Etfw
That’s still at least $25,000 too much!
But even at the reduced price no one is signing up for a photo op with cackles.
It would help if governments got out of science and medicine. That’s not why we created our government.
Kip,
I honestly don’t know how you do it— but thanks for doing it.
You’ve cast light on a very interesting human problem. The phenomenon is a primary reason that my uncle distrusted committees and passed that distrust on to his relations.
John ==> You’re welcome. Committees can be useful if the people involved are honest, intending to cooperate, and have the right purpose. Too often they are composed of people intent on being right and seeing that decisions are theirs and theirs alone — ten people, ten different right answers.
Effective IQ of a committee: Sum of the squares divided by the Square of the sum. (your formula may vary using exponential or factorial divisors).
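Taking the quip literally (a toy reading, with made-up IQ numbers), the ratio reduces to about 1/n for n members of similar IQ, so every member added makes the committee “dumber”, which is presumably the point:

```python
def committee_iq(iqs):
    # The quip taken literally: sum of the squares over the square of the sum.
    return sum(q * q for q in iqs) / sum(iqs) ** 2

# For n members of equal IQ this reduces algebraically to 1/n:
for n in (1, 3, 10):
    print(n, committee_iq([100] * n))   # 1.0, then ~0.333, then 0.1
```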
And don’t forget Bikeshedding, or Parkinson’s law of triviality, where a committee will spend a great deal of time on unimportant details while its original objective goes unattended. This can severely limit the usefulness of the committee approach.
So true. The years I spent in committees where the trivia got hammered to death whilst much of the important stuff hardly got a look in!
One of the underlying beliefs of ‘Progressivism’ in all its forms (e.g., fascism), is that a relative handful of state-supported ‘experts’ can derive and forcefully implement top-down solutions to economic / societal problems that are superior to the bottom-up and voluntary solutions that would normally be arrived at by individuals.
The ultimate result of this belief is the alignment of the desire of the experts for more funding with the desire of the government to expand its role in the economy and society. This is why all government funded science, whose implications might favor significant government interventions in markets and society, e.g., public health and climate, must also provide for a review of the ‘science’ by a ‘red team’ of independent experts representing those who may be harmed by such interventions.
Dude you are my favorite commentator. You grasp the big problems.
Frank ==> I am a supporter of the Red Team Blue Team approach — but it would not be necessary if all involved would get together in a cooperative effort to sort out differences in findings.
Kip, you’re right, of course, but I, for one, wouldn’t hold my breath waiting for the likes of Michael Mann to seek common ground with, say, Steve McIntyre, simply because there is absolutely no incentive for the former to do so. And likewise, I don’t see any incentive for most administrations to appoint red teams to critique ‘politically correct’ science, either.
I am very optimistic, however, that something along the lines of Jeffersonian / Madisonian ‘nullification’ could evolve where states and/or local jurisdictions could opt out of enforcing Federal policy interventions based on ‘consensus’ science when there is clear evidence that the dissenting views of competent parties have either been ignored (e.g., climate change) or suppressed (e.g., the Covid-19 pandemic).
At the very least, this would protect some portion of the populace from draconian regulations based on bad science, and at best, might also incentivize the Federal government to set up competent red teams early on to eliminate any taint of ‘self-dealing’ in policy making pursuant to government-funded science.
Frank ==> W must be sure the the US does not become the EU.
‘W(e) must be sure the the US does not become the EU.’
Kip, I agree. The loss of sovereignty by the individual states that comprise the EU to the EU bureaucracy means the latter will eventually govern unopposed. For this reason it is very important that the individual states of the US maintain their sovereignty in all powers that are not expressly granted to the Federal government by the Constitution.
The loss of sovereignty by the individual states
We’re pretty much there at this point. Only a handful of states even try to maintain it.
State sovereignty exists under the Constitution. Ironically, nullification, which is how states can refuse to do the Federal government’s bidding, has most recently been exercised by progressives to prevent immigration law enforcement in so-called ‘sanctuary’ cities and states. IMHO, if the Biden administration continues on its current trajectory, we’ll see a number of states step up to protect the rights of their citizens.
State sovereignty exists under the Constitution.
Yes, but not as a practical matter if the states don’t behave as sovereign entities. Plus, the feds have a great way of undermining it by tying money to policy. “Well it’s their choice, we’re not telling them what to do”
‘Yes, but not as a practical matter if the states don’t behave as sovereign entities.’
I think they do in a lot of ways, but you’re correct that they generally go with the federal flow as long as there’s no real upside to doing otherwise. But I sincerely believe that if the feds try to implement regulations that are clearly detrimental to the people of the states, e.g., outlawing the use of fossil fuels, the states will push back via nullification.
The simple answer is that causation is different than correlation. Medical “science” is the worst offender at understanding this concept. They really have very little idea how the human body works beyond mechanics.
Lance ==> That is true, but it is not the simple answer. Nailing down CAUSE is not as easy as we would wish — think “What causes cancer?” We know how cancer acts, we have some ways to kill cancer cells, but we have almost NO IDEA what causes cancer or the various cancers.
But your statement that we “really have very little idea how the human body works beyond mechanics” is certainly true and very few people understand just how true that is.
Gee Kip, the state of California says MANY things “have been shown to cause cancer”.
Are you questioning California?
sarc/off
Drake ==> California is absolutely whacked out….
The State of California is known to the State of California to cause cancer, birth defects, or other reproductive harm.
Cures don’t maximize profits. Getting people on the most pills to manage symptoms rather than address the underlying cause is how that is achieved. Big Pharma and their government shills have been suppressing cures for at least 80 years.
Too true.
It is a long read, and thanks for the links to the papers. The Wagonmakers papers especially.
OK S. ==> Truthfully, the researching of this concept led directly to my understanding of just how much of today’s science is pride-driven.
Kip, very good post. I actually used the ‘lock them in a room to sort it out’ approach several times during my consulting career at BCG. Them being disagreeing/confused senior management. Our role was to present indisputable facts and ask probing questions. They all eventually ‘sorted it out’ themselves without us having to provide the answer we had already come up with (just in case).
Rud ==> I think many of us professionals (field does not matter really) have occasionally stumbled on the “Locked Room Approach” out of sheer necessity. I used it at IBM with complex technical (internet related) problems, to surprisingly good result….
Kip have you seen this Covid data in google too?
https://stevekirsch.substack.com/p/if-vaccines-are-safe-how-will-they
Derg ==> I had not seen it — interesting. How does he (you) explain the July 2009 uptick? Is this whole thing just the Social Media effect where phrases (word combinations) are linked to Google searches?
I don’t know (don’t do SM – no FB, no Tweety, no TikTik, etc).
I just think it is very interesting. Could be Russian bots making all these Google queries 😉
Folks: To me, using the second figure which includes CMIP-6 “results”, people are NOT comparing their model results to available observations. I think one of the examples had “90-model solutions”, only one of which – the “Russian model” – managed to produce results that were close to observations. Seems like output from 89 of them should be ignored and money diverted somewhere else…
Joseph ==> The problems with climate models are legion….CMIP5 and 6 are used here as examples of “many-analysts” arriving at wildly varying results from the “same” data sets.
Your comment is why averaging such results is less than useful.
“Seems like output from 89 of them should be ignored and money diverted somewhere else…”
You are not thinking like a leftist. The Russian model is the ONLY one they want deleted. They want scary predictions. So to them, the Russian model is a failure. Anyone who claims the models are intended for accurate predictions is a real comedian!
It has been shown that the conclusions of any study are almost exclusively a measurement of research bias.
The scientific method demands that all data and possibilities are considered before making conclusions. Even then, good scientists wonder what they might have missed. In climate science empirical measurements are ignored in favor of hypothetical speculation which is treated as unquestionable fact.
Gyan1 ==> CliSci has some serious problems — but not all of it. There are good honest really-smart people in the field, who keep their heads down and do fabulous basic research (what causes clouds? How do clouds affect the weather and climate? etc).
Check out Judith Curry’s blog for her lists of things that caught her eye this week/month.
There are a lot of peer reviewed papers which are ignored by the climate establishment. Judith lists a lot and Notrickszone features papers that they aren’t considering. Preserving the false narratives requires sins of omission.
“Why do we have so many wildly varying answers to so many of the important science questions of our day? Not only varying, but often directly contradictory. “
Because they are not political.
fretslider ==> In some fields, internal politics dictates results — prevailing bias of the field. That is true and follows Ioannidis.
The other answer is that results are almost entirely dependent on the methods and approaches and not so dependent on the data.
Can the UN etc make use of them, Kip?
fretslider ==> The IPCC depends almost entirely on models that are mostly independent of the data — and depend instead on statistical approaches, bias, and desired results — achieved through tuning and throwing out those results that fly up off the graph or dive down out the bottom.
I still wonder what causes all the seemingly random noise in any one output curve in the spaghetti graph—they all have it, but none of it is correlated.
Carlo ==> Non-linear dynamics……see my Chaos Series.
This one?
https://wattsupwiththat.com/2020/07/25/chaos-and-weather/
my note is NOT about MULTI analyses but about two opposing studies. The National Post is running its annual feature on Junk Science. An article titled “Is climate change making judges meaner?” appeared in the June 23 edition of section FP at p. 14. According to this piece, 2 researchers published a paper in 2019, using temperature data and asylum claim decisions to show that judges were negatively affected by hot outdoor temperatures despite air conditioned courtrooms. HOWEVER, a third researcher is about to publish, in the same journal, an article which shows many data and coding errors, as well as inclusion of a quarter of cases that were withdrawn or abandoned. His conclusion: after correcting for numerous errors, there is no evidence of outside temperatures affecting judging.
Janice ==> Thank you for that — Link to the NP article here. If you see the follow-up study, will you email me a link to my first name at i4.net please?
While the article was interesting, it missed the two main points of why climate science is becoming a “laughingstock”:
(1) Politics + Science = Politics, and
(2) The inability of scientists to admit “we don’t know”.
“We don’t know” is the correct answer to many science questions. But you rarely hear “we don’t know”.
The most common failure is the always-wrong wild guess predictions of the future climate. It would seem easy to predict the future climate: it’s going to get warmer or colder (choose one), and the temperature change will be harmless or dangerous (choose one). That adds up to four possibilities with this simple example. With random guessing you’d expect 25% right.
In reality we have had a large majority of predictions (the consensus) of dangerous global warming. But there has been no dangerous global warming. The warming since 1940, when CO2 emissions began accelerating, has been mild and harmless. The so-called “expert” scientists, as a group, have obviously failed to predict the future climate. That result is similar to US economists, who as a group have never predicted a US recession. When it comes to predicting the future, the many-analysts approach has repeatedly failed.
Of course science is more than predictions, but with modern climate “science” the main subject seems to be the politically popular “predictions of climate doom.” In my opinion, always-wrong wild guess predictions of a coming climate crisis are not science at all. Climate astrology would be a better description.
My own climate prediction in 1997, stated one hour after I began what has been 25 years of climate and energy reading, was 100% correct, and I hope to win a Nobel Prize, or at least a Nobel Prize participation trophy some day:
“The climate will get warmer, unless it gets colder.”
You hit the nail on the head. When a particular talking head is regurgitating alarmist talking points, i simply ask questions, eventually the smarter ones will start saying “i don’t know” at which point i congratulate them for taking the first step toward reason and science.
Instead most just keep weaving the BS deeper and deeper (CO2 now causes cooling as well as warming or the AMOC doesn’t exist or this is the hottest point in the holocene).
Once you start lying, only lies can support it.
Richard ==> This essay is not primarily about CliSci.
I know that, and I said it was a good essay. But this is a climate science website, and if one applied the many-analysts approach to predictions, which seem to dominate climate science, all you’d get is averaged wild guesses of a coming climate crisis. The right answer is still “we don’t know” the future climate. Climate predictions are made by people who have not demonstrated an ability to predict the future climate.
“Complex climate models, as predictive tools for many variables and scales, cannot be meaningfully calibrated because they are simulating a never before experienced state of the system; the problem is one of extrapolation. It is therefore inappropriate to apply any of the currently available generic techniques which utilise observations to calibrate or weight models to produce forecast probabilities for the real world. To do so is misleading to the users of climate science in wider society.”
‘Confidence, uncertainty and decision-support relevance in climate predictions’, Stainforth et al., Phil. Trans. R. Soc. A 365, 2145-2161 (14 June 2007)
One of the other authors was Myles Allen who is now an IPCC lead author!
It seems that so much contradiction arises mainly because some papers rely on models and some on observations.
For corals, the models all say they are dead but when you go look at them they seem fine. Because they are fine.
The issue remains: people publish papers that are entirely model based, instead of using models to try to simulate something and then going to look at it and test the model.
Publishing the model output as science without confirmation of observation.
Pat ==> All research is model based….one’s research approach is to build at least a mental model that will test one’s hypothesis. As we see in the Many-Analysts papers, the decisions made on what to include, what methods to use, what confounders to “adjust for” and how to “adjust for” them, and what statistical methods to apply — those are the main factors that affect the results. The data — not so much.
The step of going back with one’s results and seeing how they look when held up next to the reality is important, but not always (maybe seldom) a possibility in many fields.
“The step of going back with one’s results and seeing how they look when held up next to the reality is important, but not always (maybe seldom) a possibility in many fields.”
If this is *really* true then the study conclusions should state so EXPLICITLY. It should be stated in no uncertain terms that the model is a subjective guess and that no reality check was possible. If the model has *any* base in reality then it *can* be compared to reality in some form or fashion.
Tim ==> That is what is really required in scientific papers of all sorts. There should be a section called “Limitations” in which these points are explicitly stated. I am writing up an example from a wonky study which is partially redeemed by a good Limitations section.
This clip is from my early-1970s college text Environmental Geoscience, about CO2 and global warming:
Mike ==> Can you give us the book title, author, etc? Thanks,
Environmental Geoscience, copyright 1973, John Wiley, published by their division Hamilton Publishing Co. Authors: Arthur Strahler and Alan Strahler. There are a number of pages dedicated to rising CO2. It also has a section on water vapor.
https://www.amazon.com/Environmental-Geoscience-Interaction-Between-Natural/dp/0471831638
Mike ==> Thank you. Also available to read free online via the Internet Archive at: https://archive.org/details/environmentalgeo00stra/page/146/mode/2up (this is the page with the CO2 quote; the whole book is available).
You can log-in with your Google identity.
I still have the book.
Mike ==> OCD a bit? I saved uni books as well, but many have been “loaned and owned.”
LOL half of my study
A picture of my study
Mike ==> Thanks for sharing — if I hadn’t “run off to sea” when I was 21, mine would look like that. Not too many books fit in a sea bag.
I was in the US Army from 19 to 22, 1966-1969.
Kip
Thanks for your article – I almost kept up with you.
A parallel read could be “A Mathematician reads the Newspaper” by John Allen Paulos especially as bed time reading! Plenty of cheap copies on Abebooks or in your local charity shops.
I am particularly a fan of interdisciplinary approaches and lateral thinking. When I was working, my company (a construction company) circulated the “Journal of Interdisciplinary Sciences” amongst the engineers and organized lateral-thinking meetings. I remember at one weekend gathering we were posed the question of how to construct the Channel Tunnel by considering how to make a hole through an orange. It is surprising how informative, forthcoming, and transferable the ideas were. Yes, we were involved in the construction and it all worked – thanks to that orange?!
re: “we have findings that meat/salt/butter/coffee/vitamin supplements are good for human health and longevity (today…) and simultaneously or serially, dangerous and harmful to human health and longevity (tomorrow or yesterday).”
No we don’t. Typical Kipian misrepresentation.
Why does Kip post a graph of temperatures since 1983 which has the actual surface observational data ending in 2013? Oh, yeah, that’s right … there’s been a huge temperature spike since then that has brought observed temperatures back up to around the mean level of the models (black line in the graph).
Can’t display the data over the past decade or so; it doesn’t support the “skeptical” agenda.
That’s ANOTHER lie, MGC. The 2016 El Nino spike brought the actual temperature close to the median of the model predictions but actuals are now back near the bottom.
Your lies aren’t going to convince anyone, MGC. Your history of being disproved has given you the reputation of someone to be distrusted.
re: “actuals are now back near the bottom”
Attached is the same graph with the HadCRUT4 data extended up to the present.
Your claim is nowhere near correct. As usual.
Don’t you ever get tired of being wrong, Meab?
Leave it up to you, MCG, to post a chart that PROVES that you are wrong. Do you even look at the stuff you post?
The actuals ARE near the bottom of the range of predictions. LOOK! Open your eyes.
Jesus.
The last several years have zig-zagged above and below the mean trend line. And the latest point is less than half the distance between the mean and the very bottom.
“Look! Open your eyes.”
In 6 of the last 8 years actuals have been below the median prediction. Actuals are now near the bottom of the range, as I said. You lied and were stupid enough to post a plot that proved that you lied.
I see a collection of predictions that are almost all way above actuals nearly all the time – because that’s what it is. You see a tiny number of years that don’t fit the overall pattern and think you can support your dishonest alarmist position by calling attention to just those few exceptions.
When a thinking person sees a pile of manure they think “this is shit and I better not step in it.” When a maggot sees a pile of manure they think “lunch”. You’re like the maggot.
re: “Actuals are now near the bottom of the range”
And the Meabian falsehoods just keep on coming.
Splitting the range into four quadrants, two above the mean, two below, actuals are in the 3rd quadrant. They’d have to be in the 4th quadrant to be “near the bottom of the range”.
The Las Vegas sports-book approach might work better for climate research. Have the research groups publish their predictions for 2025. Agree on an actual measure (like UAH) to be used to score which prediction comes closest. Let the betting begin. The winning prediction (closest to UAH in 2025) becomes the favorite for 2030. The odds will soon favor the (few) reasonable models and discount the alarmist (RCP8.5) models.
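That scoring rule is simple enough to sketch in a few lines. The sketch below is illustrative only: every group name and forecast number in it is invented, and the "observed" value is a stand-in for whatever agreed measure (such as UAH) the bettors settle on.

```python
# Hypothetical sketch of the "sports book" scoring idea: each group
# publishes a forecast, and whichever lands closest to the agreed
# observational measure wins. All names and values are invented.

forecasts = {            # predicted 2025 temperature anomaly, deg C
    "Group A": 0.9,
    "Group B": 0.4,
    "Group C": 1.4,
}

def closest_forecast(forecasts, observed):
    """Return the group whose forecast is nearest the observed value."""
    return min(forecasts, key=lambda g: abs(forecasts[g] - observed))

# Score against a hypothetical observed value; the winner would become
# the favorite for the next round of betting.
print(closest_forecast(forecasts, 0.5))  # Group B (|0.4 - 0.5| is smallest)
```

The same `min(..., key=...)` pattern extends to any other scoring rule, such as squared error, without changing the structure.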
And then we need to punish people like Mann who delete the unfavorable data to come up with an answer that matches his prediction.
Another blind parroting of anti-science nonsense. Mann’s results have been corroborated over and over and over and over and over and over and over again, by a variety of researchers, from a variety of scientific disciplines, from all over the world, using a variety of different techniques.
Pretending otherwise is just plain old lying.
ACK! More irony.
“A Disgrace To The Profession”
Speaking of “another blind parroting of anti-science nonsense” …
Meab ==> And for medical research?
A similar approach is already used in medical research. It’s called the free market. Biopharmaceuticals that have a track record of success attract investment. Those that kill patients tend to go bankrupt.
The reason that the free market doesn’t work for climate research is because governments keep funding climate researchers no matter how badly they have performed.
Several such bets between scientists and “skeptics” have been performed over the years. Not surprisingly, the “skeptics” lost pretty much every time.
Here’s one example:
This scientist keeps winning money from people who bet against climate change: “It’s like taking candy from a baby.”
https://mashable.com/article/climate-change-science-bet
Really? Another trotting out of that same tired old tropospheric hot spot cherry pick from John Christy?
The cherry picked region that Christy analyzed constitutes only a tiny 5% of the volume of the troposphere. It is cherry picked because it just happens to be the one region with the largest negative divergence between observations and models (less warming than models projected).
There are also other regions of the earth that are warming much faster than models projected. But so called “skeptics” never make mention of those regions now, do they?
No, of course not. Doesn’t support the anti-science agenda.
My region was cold. Gosh the winter was awful with the 4th coldest April ever.
Fourth coldest Spring (March to June) ever recorded going back 130 years for the Puget Sound in Washington State.
Weather is highly variable but the predicted equatorial tropospheric hot spot still hasn’t happened.
MGC doesn’t understand that if GCM models get this wrong, then they’re missing critical atmospheric physics. It doesn’t surprise me that he doesn’t understand as he’s the idiot who claims that Global Warming will simultaneously cause fresh water to become a scarce resource while also causing the world to be inundated with rain.
Yet another round of mindless cherry picking.
The “equatorial tropospheric hot spot” constitutes a mere 5% of the total volume of the troposphere. Folks who pretend that this little 5% discrepancy somehow “invalidates” the greenhouse effect are completely missing the forest for the trees.
Oh, and climate change actually is causing water to become scarce in some places (like the U.S. Southwest) even while rainfall increases in many other locations.
What’s been happening in places like the U.S. Southwest is exactly what was predicted decades ago by those so-called “failed” climate models. But folks like Meab just stick their heads in the sand babble “Nuh Uh because I say so” over and over and over again.
Such tragic anti-science nonsense.
“Oh, and climate change actually is causing water to become scarce in some places (like the U.S. Southwest) even while rainfall increases in many other locations.”
The US Southwest has been semi-arid desert and/or arid desert FOR MILLENNIA! That’s why the climate there is classified as *desert*!
Water has *always* been scarce in the US Southwest. That’s why plant life like cactus and scrub mesquite evolved there!
The more things change the more they stay the same. History didn’t begin when you were born!
Tim ==> And if the present turns out to be part of a new Mega-Drought, it won’t be the first one — only the first one to take place when humans have built megalopolises in that desert.
Same tired old “skeptical” excuses, blindly echoed over and over and over again. Just pretend away the fact that this is exactly what the so-called “failed” models predicted decades ago.
So the GCMs were able to predict regional precipitation totals?
You are a liar.
These general regional trends were predicted, yes, decades ago. Sorry that you are unable to handle the truth.
“Same tired old “skeptical” excuses, blindly echoed over and over and over again. Just pretend away the fact that this is exactly what the so-called “failed” models predicted decades ago.”
In other words you have no actual rebuttal to offer as to what the history of the SW US has been over millennia.
So the climate models predicted deserts would remain deserts? ROFL!!
Where is the regional climate model for the SW US that actually predicted the SW US remaining the desert that it has always been? Give us a link!
Gorman bleats: “So the climate models predicted deserts would remain deserts?”
No, models predicted that an already arid region would become even more arid. Duh.
Here is one example:
Model Projections of an Imminent Transition to a More Arid Climate in Southwestern North America
Seager et al Science 2007
“Here we show that there is a broad consensus among climate models that this region will dry in the 21st century and that the transition to a more arid climate should already be under way.”
And that’s exactly what’s happened.
Isn’t it amazing how so-called “skeptics” like Gorman like to pretend to themselves that they are “well informed” about climate change, yet they constantly reveal that they haven’t the first clue.
Perhaps such folks should look into reputable sources of climate information for a change.
How funny, LOL. Tell us what the rainfall of “arid” and “more arid” actually is!
Somehow I suspect there is little difference. Probably within the uncertainty limits.
“No, models predicted that an already arid region would become even more arid. Duh.”
How does an arid desert become “more arid”?
““Here we show that there is a broad consensus among climate models that this region will dry in the 21st century and that the transition to a more arid climate should already be under way.””
In other words the Southwest is going to return to what it has always been! Exactly what I said!
“Isn’t it amazing how so-called “skeptics” like Gorman like to pretend to themselves that they are “well informed” about climate change, yet they constantly reveal that they haven’t the first clue.”
History didn’t begin when you were born. There *is* a reason why the Southwest has never been a highly populated area, not for thousands of years. Places like Las Vegas, etc in the Southwest were made temporarily habitable by man-made infrastructure and by natural variation of moisture. But the actual climate never really changed!
re: “How does an arid desert become “more arid”?”
This has gotta be one of the all-time most tragically ignorant Gormanian comments ever.
So typical.
In other words you actually have nothing to offer in rebuttal. Why am I not surprised?
The posted reference (Seager et al Science 2007) contains all the information required to explain, quantitatively, how “arid” becomes “more arid” and to completely refute your abysmally ignorant comments.
But of course you’ve blindly ignored that reference. “Why am I not surprised?” You Gormans shamefully continue to wallow in your truly tragic cesspool of willful “skeptical” ignorance.
I didn’t ignore the reference. Here is the abstract (the paper itself is paywalled and I’m not going to pay for something that is based only on models).
“How anthropogenic climate change will affect hydroclimate in the arid regions of southwestern North America has implications for the allocation of water resources and the course of regional development. Here we show that there is a broad consensus among climate models that this region will dry in the 21st century and that the transition to a more arid climate should already be under way. If these models are correct, the levels of aridity of the recent multiyear drought or the Dust Bowl and the 1950s droughts will become the new climatology of the American Southwest within a time frame of years to decades.”
Here is a graph from the article:
Their model runs from 1900 to 2080. Hardly representative of the history of the desert SW in the US.
This *is* nothing more than the desert SW returning to its historical norms. That may be an inconvenient truth for you to accept but it is the truth nonetheless!
re: “This *is* nothing more than the desert SW returning to its historical norms”
Regardless whether this is a “return to historical norms” or not, the change is being driven, as the paper clearly states, by anthropogenic influences.
A statement you so conveniently ignored.
So typical.
“Regardless whether this is a “return to historical norms” or not,”
In other words, you are a faithful believer and no evidence will change your belief.
Your reference is a MODEL! It’s a model formulated to show anthropogenic influences, whether they exist or not!
If it is nothing more than a return to historic norms, then what, EXACTLY, set the historic norms? It certainly wasn’t anthropogenic. In order to show that it is anthropogenic today, the model would have to show *what* caused the historical climate and how that “what” isn’t the operative force today. And the model doesn’t do that! It doesn’t even attempt to quantify the historic climate of the US SW! The model is just like you: history began when you were born!
re: “It’s a model formulated to show anthropogenic influences, whether they exist or not!”
And the delusional zero evidence Gormanian conspiracy theories just go on and on and on and on …
So tragically sad.
Ad hominem after ad hominem attack. Haven’t you learned by now that this kind of argument earns you no trust at best and a failing grade at worst?
You are terrible at being a troll. Most good trolls limit their exposure and continually regurgitate the same reference in post after post. You do none of this, only ad hominem attacks that people get tired of seeing. Good luck with that tactic!
Sorry that you are unable to handle the truth about your claims about climate models, Gorman.
You’ve tried to falsely pretend that the scientists have created models that will provide the result they want. “a model formulated to show anthropogenic influences, whether they exist or not”.
But you have zero evidence to back any of this claim. None. Nada. Zilch. Zippo. Squadoosh.
It is therefore not “ad hominem” to state that your claims are “delusional zero evidence conspiracy theories”.
That is an entirely valid description.
Zero evidence?
The present discussion concerns modeled climate projections for the earth’s surface in the Southwestern U.S.
But J Gorman wants to ridiculously pretend that a graph of atmospheric temperatures in the tropics, at altitudes several kilometers above the surface, no less, is somehow “relevant” to this discussion.
Unbelievably ludicrous.
I have to admit that I’m actually ashamed to even be “discussing” climate science at all with someone like J Gorman, who constantly proves, over and over and over and over and over and over and over again, that he hasn’t the first clue what he is talking about.
Why don’t you include pertinent information from your references? You don’t engender trust by cherry picking.
From the abstract you referenced.
“If these models are correct, the levels of aridity of the recent multiyear drought or the Dust Bowl and the 1950s droughts will become the new climatology of the American Southwest within a time frame of years to decades.”
The “recent drought, Dust Bowl drought, 1950s drought.” Sounds very much like nothing new is going to happen, doesn’t it?
No wonder the news article had to hype the article with a fake headline. Too bad you didn’t read the whole thing with understanding.
And here’s another Gorman so conveniently ignoring the fact that the paper states that these changes occurring are due to anthropogenic influence .
Also being ignored is the original issue which began this thread, that being that what “skeptics” like to pretend are “failed” climate models have … in reality … accurately predicted what is now occurring in the U.S. Southwest.
Yep, it’s a seemingly never ending parade of one “skeptical” fail after another after another after another after another with these Gormans!
Keep on spinning and maybe you can convert someone to your faith.
The abstract of the study said:
“If these models are correct, the levels of aridity of the recent multiyear drought or the Dust Bowl and the 1950s droughts will become the new climatology of the American Southwest within a time frame of years to decades.”
Like it or not, these “levels of aridity” have occurred before, and at best, the models indicate that they will occur again. You can’t spin it any other way.
Gormania keeps on ignoring the fact that the key point that began this entire thread was the false “skeptical” claim of so-called “failed” models.
Climate models accurately predicted, long ago, what is now occurring in the U.S. Southwest. And those predictions were accurate because the models did not ignore anthropogenic influences, as so-called “skeptics” would blindly do.
The evidence here demonstrates that these “failed model” claims are just flat out wrong. Just like practically everything else that so-called “skeptics” claim.
“Climate models accurately predicted, long ago”
ROFL!!
If I create a model showing that prairie grasses in the semi-arid Great Plains will grow 8′-10′ deep root systems because of “more arid” conditions is that predicting something unknown? Or is that predicting a return to historical norms?
How did those prairie grasses evolve to have the capability of such deep root systems? Has that evolution occurred just since the 1920’s? Or did it evolve thousands of years ago when conditions were similar to today?
Predicting a return to historical norms isn’t earth shattering! History didn’t start the day you were born!
And the mindless Gormanian hand waving sadly continues.
re: “Predicting a return to historical norms isn’t earth shattering”
It is not a return to “historical norms”; it is a return to historical lows.
Predicting almost exactly when this happens, and showing that it would not have happened if anthropogenic influences were ignored, says something.
But so-called “skeptics” like Gorman just blindly ignore what this is saying to us, because they can’t handle the truth.
Historical is the operative word. Nothing that hasn’t happened before, as you say. Funny how people think “lows” in the past aren’t precedent-setting. Do you really think the forecasted lack of rainfall is unprecedented? I suppose a lack of CO2 caused that!
More sadly and tragically mindless Gormanian handwaving nonsense, representing a shameful inability to accept the demonstrated fact that the models have made accurate projections of Southwest U.S. climate.
Those accurate projections are predicated, by the way, on not following the so-called “skeptics” who would ignorantly and disingenuously pretend away obviously important anthropogenic influences.
Yea – like that has never happened before. Only after we start driving SUVs does this happen.
Looks like you’re guilty of cherry-picking too!
Speaking of cherry picking, maggot: climate models are all over the map, predicting droughts here and droughts there. As many predictions haven’t come true as have, but you choose to lie and claim that climate models predicted the drought in the SW. They didn’t all agree.
It’s well known that climate models have exhibited no skill in predicting local long-term departures from normal rainfall (where normal means the local historical distribution). Claim they do and you’re lying again.
Listen up, maggot, read Cadillac Desert written in 1979 (well before the “Climate Crisis” scam) by Marc Reisner. It details the history of drought in the US SW. Want a contemporaneous account of a larger scale drought than the current drought that occurred before the “Climate crisis” scam? Read “The Grapes of Wrath” written by John Steinbeck in 1939.
re: “As many predictions haven’t come true as have”
Another made up out of thin air “skeptical” fairy tale. Typical.
The tropical hotspot is such a fundamental core concept of the global warming hypothesis as put forward that without it, the whole hypothesis falls apart. Most people, like MGC, have never bothered to go back and read the hypothesis that was presented, so they have little or no idea what they are discussing. It was not about ‘a bit of warming here and there’; it was about a fundamental shift in temperatures and weather patterns that couldn’t be explained by any other mechanism known up until that point. The fact that it couldn’t be supported by observations of the required hotspot seems to have escaped most people, who persist in the ignorance that any amount of warming must be catastrophic anthropogenic global warming!
re: “The tropical hotspot is such a fundamental core concept to the global warming hypothesis as put forward that without it, the whole hypothesis falls apart”
Another well worn falsehood blindly parroted over and over and over again within the pseudo-scientific “skeptical” echo chamber.
This “hotspot” is not something that would occur “only” by greenhouse gas warming. Other warming mechanisms could create the “hotspot” as well. But “skeptics” falsely pretend otherwise.
“Most people, like Richard, have never bothered to go back and read the hypothesis that was presented so have little or no idea what they are discussing.”
So why are politicians focusing on CO2, then?
none of these people are using “data” in the pure sense of the word … what is the error band of the model 2+2 … trick question … there is no error band … the answer is 4 … forever … the 2 is real data … what these fools have is a measurement of 2 which may or may not actually be 2 (all measurements have error bands) … by treating the 2 as data when it is a “guess” not data, they are starting with a flawed process and simply hiding the flaws by calling it data … so in the climate world 2+2 should really be 2(+/-.5) + 2(+/-.5) which gives an answer of somewhere between 3 and 5 …
just because you can try to “measure” something doesn’t mean you have meaningful “data” in those measurements …
sure, it’s better than nothing … but not by much …
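The 2(+/-0.5) + 2(+/-0.5) arithmetic above is easy to check mechanically. A minimal sketch, with one caveat: the worst-case sum shown in the comment is the pessimistic convention, while for independent random errors the usual rule adds uncertainties in quadrature, giving a tighter bound.

```python
import math

def add_worst_case(a, ua, b, ub):
    """Sum two measurements, adding uncertainties linearly (worst case)."""
    return a + b, ua + ub

def add_quadrature(a, ua, b, ub):
    """Sum two measurements with independent errors added in quadrature."""
    return a + b, math.hypot(ua, ub)

# 2(+/-0.5) + 2(+/-0.5): worst case gives 4 +/- 1, i.e. between 3 and 5
val, u = add_worst_case(2.0, 0.5, 2.0, 0.5)
print(val - u, val + u)        # 3.0 5.0

# Independent random errors give the tighter 4 +/- 0.71
val, u = add_quadrature(2.0, 0.5, 2.0, 0.5)
print(round(u, 2))             # 0.71
```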
Dark ==> You are somewhat right about the tendency of modern science (all fields) to use “point data”, ignoring that much of the real data are actually ranges.
This occurs because mathematicians have been utilized to do “statistical analysis” of numbers. How many statistics classes in math departments ever address measurement errors and uncertainty? Let’s use a climate science number: 0.01%!
I’ll do a short screed on climate science. When was the last study you saw that actually dealt with causative measurements? 99.99% of studies have to do with time series of temperature; in other words, trying to forecast temperature trends or find what else correlates with them. The last study I read that dealt with causative measurements was Dr. Happer’s paper. It is replicable by anyone who wishes to do so.
Which brings us to your subject: analysis of data. Statistical analysis will seldom allow a mathematical derivation of causative phenomena that lets one accurately PREDICT a physical response. All statistical analysis includes an error/uncertainty, as shown by the variance and standard deviation descriptors. Here is a website that provides a pretty good definition:
Descriptive Statistics – Examples, Types, Definition, Formulas (cuemath.com)
It is interesting to see the difference between descriptive and inferential descriptors and their uses.
A whole lot of inferential statistical analysis is being done in climate and other sciences. As one who spent years analyzing usage data for telephone central office equipment, people requirements, and budgets, I can tell you bosses love predictions. Those making them, by inferring what might happen based upon history, not so much. Inferences are not predictions, and it is no wonder your study of different groups’ inferences shows what it does.
Measured data should *always* be specified as State Value +/- Uncertainty.
The rule in so much of science today is to assume that if you average enough data points then the uncertainty always cancels and can be ignored. It doesn’t matter what the data distribution looks like or whether there is any systematic uncertainty – uncertainty always cancels. It just makes it so much simpler to just ignore the propagation of uncertainty from the data elements into the final result.
I have five different probability/statistics textbooks I have collected since 2000. Not a single one has a single example of handling uncertainty in measured data. Not one! They all assume that all data is 100% accurate and so are the means calculated from the data.
They even go into how to handle samples from a population by calculating the means of the samples and then using those means to estimate the mean of the population. They *ALL* call the standard deviation of the sample means the “uncertainty” of that estimated mean. They call it the “standard error” of the mean, when it has nothing to do with the accuracy of the mean, only with how precisely it has been calculated. If the standard deviation of the sample means were zero, the resultant mean would be considered 100% accurate, even when the actual measurement data can have an uncertainty interval wider than the measurement itself!
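The distinction being drawn here can be shown numerically. A small illustrative sketch (simulated readings and arbitrary numbers, not real data): the standard error of the mean shrinks as the sample grows, while a fixed systematic uncertainty attached to every reading does not shrink at all.

```python
import random
import statistics

random.seed(1)
SYSTEMATIC_U = 0.5        # e.g. every thermometer reading carries +/-0.5

for n in (10, 100, 10_000):
    # Simulated readings of a "true" value of 20.0 with random scatter
    readings = [20.0 + random.gauss(0.0, 1.0) for _ in range(n)]
    sem = statistics.stdev(readings) / n ** 0.5
    # The SEM falls roughly as 1/sqrt(n): it describes the precision of
    # the computed mean, not the accuracy of the underlying readings.
    print(n, round(sem, 4))

# No amount of averaging touches the systematic +/-0.5 on each reading;
# it carries through to the mean unchanged.
print(SYSTEMATIC_U)
```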
Tim ==> If you read here regularly, you’ll know that I have beaten that drum for a decade…..
I know. And I appreciate it even if most do not!
Tim ==> And I do appreciate your support…..
Without uncertainty, all you have is a number. It’s not a measurement.
Michael Mann and Anthony Watts locked in a room… would be interesting.
Opus ==> Unfortunately, they could only talk about their opinions and convictions. If I remember correctly, Anthony did have lunch with Mann once — and reported he was quite cordial. (I could be wrong… anyone here with a good long memory?)
As part of my work, analysis was important, but the analysis had to be subjected to rigorous cross-examination. When we were subjected to Six Sigma, my statistics partner used the line paraphrased from Mark Twain: “there are liars, damned liars, and then statisticians.” Always revert back to the science. Models built only on coincident events send you looking for the scientific connection; they do not actually work for predicting anything.
Some time ago astronomers were confused about the temperature of Jupiter; they thought it should be hundreds of degrees colder than it is because of its distance from the Sun. They finally concluded that Jupiter’s aurorae warm the planet. Well, Earth also frequently experiences aurorae, some visible, some not, and I have not seen anywhere that the effect of such aurorae has been factored into any of the climate models. Perhaps this is another one-trillionth-of-one-degree factor, among many, that makes these models, and the policies motivated by them, unfit for purpose. In order to model something successfully you have to understand it, and they don’t!
This is really not a problem – science, not to mention politics and religion, has always worked on the basis of debate and eventual resolution, one way or the other, even if the apparent “loser” refuses to concede defeat. Reality always intrudes eventually, sometimes it happens quickly, say over a few years time, and other times occurs over decades.
Nobody can call a time out on debate of any kind, or establish any kind of ground rules – which people will naturally ignore.
Look, there are still massive debates going on TODAY about the Civil War that ended 157 years ago, what caused it, what the issues were, how it should or should not have been prosecuted by either side, the rightness and wrongness of the two opposing causes, and how we should think about the same issues today. Society is still arguing over the Confederate battle flag, which Civil War veterans are appropriate to put up or maintain memorials to, how to name our ships and military bases, etc.
But no matter what one believes as a matter of opinion, the fact is that the Union still won the war and the Confederacy lost the war, and that outcome has affected everyone in the nation ever since. Yet the debate still goes on.
Duane ==> Hmmmm….the causes of the Civil War are not subject to scientific consideration.
And science is not done by debate, though results can be debated to some advantage for the overall endeavor.
But you see, a debate is about convincing the audience “who” is right. It does not determine which scientific results are correct.
Those who would debate would be better engaged in working cooperatively to determine why their results are different.
Statistical analysis will never provide a definitive answer as to who is “right” and who is “wrong”. That is not what statistics does. Basically, think of a bell or normal curve: there is never one correct answer, just a distribution of possible answers.
Science, on the other hand, requires a mathematical description of a physical phenomenon, one that can give one and only one prediction. Even quantum physics can only give probabilities. If climate science were being truthful, the involved scientists would give similar probability answers and declare that they will never KNOW for sure where, what, and when certain things WILL happen. Look at model outputs: do they look like an electron cloud, where you can never be sure where an electron, i.e., a temperature, might be? That gives Dr. Frank’s predictions of uncertainty much more traction.
“Statistical analysis will never provide a definitive answer to who is “right” and who is “wrong”.
I guess we’re just stuck with what is “probably” right. But when that probability is so close to unity that the difference cannot be calculated on my engineering laptop, it’s time to get off our collective butts.
In other words it isn’t possible to roll your number on a crap table, right? Or pick the right number and color on a roulette wheel, right?
Temperature measurements are *not* close to zero uncertainty. They are independent random variables whose uncertainties *ADD* when combined.
Multiple measurements of different things are not likely to result in a distribution where all uncertainty cancels. I know that’s hard for you and your CAGW compatriots to believe but it *is* the truth.
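The combination rule the commenter is gesturing at can be sketched numerically. This is a minimal illustration (hypothetical readings and uncertainty values) of the standard quadrature rule for independent measurements, with the caveat that the shrinking uncertainty of the mean applies only to purely random, independent errors:

```python
import math

def combined_uncertainty(uncertainties):
    # For a sum of independent measurements, standard uncertainties
    # combine in quadrature: u_total = sqrt(u1^2 + u2^2 + ...)
    return math.sqrt(sum(u * u for u in uncertainties))

# Ten independent temperature readings, each with 0.5 degC uncertainty
u_sum = combined_uncertainty([0.5] * 10)
print(round(u_sum, 3))   # uncertainty of the SUM of readings: ~1.581

# The uncertainty of the MEAN shrinks only if the errors are
# purely random and independent: u_mean = u_sum / n
u_mean = u_sum / 10
print(round(u_mean, 3))  # ~0.158 -- but a shared systematic bias
                         # (e.g. a miscalibrated sensor) never averages away
```

So the combined uncertainty of the sum does grow with each added measurement; whether the uncertainty of an *average* shrinks depends entirely on whether the errors are independent and random, which is the point in dispute.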
Hey everybody … this guy has an “engineering laptop”.
Not only is he smart … he has the tools to show how smart & special he is.
He uses the words ‘unity’ & ‘butts’ in the same sentence.
Don’t let this gem get away.
How are those climate models working out?
We are number 80 now aren’t we?
Settled science!
My oldest child, a scientist, pointed out to me some years ago that research on glamorous or sensational subjects often receives the most funding compared to mundane research, yet delivers far fewer benefits, if any. This is especially noticeable in the EU, where governments decide what to fund and prove they are no good at choosing winners and losers. They would have more success simply flipping a coin.
Michael ==> What to fund, what to research, is a definite and serious form of bias.
A minor geographical quibble “in 5 Nunavut communities (in Alaska).”. Nunavut is in Canada last time I looked.
Kevin ==> Ooooh….Good Catch! Of course it is, the paper is even from some Canadian Government agency…..
Your Apollo 13 example has one distinction: they were Engineers, not Scientists. This is an important distinction because Engineers have to solve real-world problems; Scientists have to publish papers in order to get their next grant.
The problem is not model myopia but modellers’ hubris.
More specifically, the idea that passing peer review makes something correct. It should just mean that the editor thinks it’s worthy of throwing into the discussion, with no glaringly obvious faults. It’s not a reason for the media to write “scientists think”.
If your results differ significantly from others’, your conclusion, and not just your discussion, should include the results of any equally valid analysis. Not “We found” but “another equally valid approach suggests that science has not helped much in furthering understanding in this case”.
Excellent post.
In an age that asks “by what authority?”, the only authority to be found is experts: scientists! So anyone seeking to leverage authority will leverage experts/scientists to propel the agenda (usually more power and money). Can’t do a pandemic or a global reset without the authority of scientists.
This is what happened to the media’s authority: it got leveraged. People listen because they don’t ever stop talking, but who really trusts the media, and by what authority do they speak?
All that is left are scientists, and with them science, and it is being sucked into the same authority vacuum that the media has been spit out of.
So yeah, if science wants to avoid the path our media has taken, it needs to build a very powerful shield around itself.
Good Luck
Incentives matter. Roman engineers and builders were obliged to sleep under viaducts they’d built while legions marched over their new creations. No doubt their sleep was sometimes uneasy, but it’s likely that they built the best viaducts they possibly could: indeed, some are still standing.
Something similar might encourage researchers to be very careful before publishing predictions.
Re locking them in a room, which is something I generally agree with. In the movie the engineers had a very obvious and focussed objective – save the astronauts. Unfortunately, what with all the grants and careers at stake, getting the scientists to just agree on the objective would be impossible. The chance for agreeing a focussed objective passed 35 years ago. Most climate scientists now are really climate change scientists whose starting point is the assumption that CO2 drives everything.
4 Eyes ==> Yeah, maybe for CliSci but other fields are not so lost.
Kip,
Spot on, well illustrated and timely.
I would like a $ for each article I have written over the last 20 years, to stress the need to properly follow uncertainty and error procedures such as those from the Paris-based International Bureau of Weights and Measures.
The big, but repairable problem in climate studies is that very few authors have adequate understanding of uncertainty. If it was treated properly, I would guess that more than half of past papers would have been rejected before publication. That is one of the prime purposes of uncertainty analysis, but some authors have not got that far in understanding proper science.
Geoff S
Geoff ==> The Many-Analysts approach is incorrectly thought to be about uncertainty, but it is really about “results depend on methods, not the data”.
One big problem is that scientists, computer programmers, and statisticians today are taught *nothing* about uncertainty and how to propagate it. It’s been that way since before 2000. Engineers are taught a little bit but typically have no dedicated instruction on the principles.
My Immunologist PhD son was in college around 2010, taking his undergrad work in microbiology. He was actually told by his advisor not to worry about taking math and statistics classes because he could always find a math major to do the statistics on the data he collected. (Luckily he didn’t listen to that bad advice.) So what you wind up with in so much of medical science today is the blind leading the blind: scientists who know nothing about statistical analysis and statisticians who know nothing about physical reality, leading to studies with questionable results that cannot be replicated.
Climate science today is in the same boat. When you hear from CAGW advocates that uncertainty always cancels out if your sample is large enough, it’s coming from scientists who know nothing about statistics (and the real world, apparently) and statisticians who know nothing about physical science. The blind leading the blind!
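The "large samples cancel all uncertainty" claim can be probed with a toy simulation. This is a hypothetical sketch (invented true value, bias, and noise level) of the standard distinction between random and systematic error:

```python
import random

random.seed(0)

TRUE_VALUE = 20.0   # hypothetical true temperature, degC
BIAS = 0.3          # hypothetical systematic calibration error, degC
NOISE_SD = 1.0      # random measurement noise, degC

def measure(n):
    # Every reading shares the same bias; only the noise is random
    return [TRUE_VALUE + BIAS + random.gauss(0.0, NOISE_SD) for _ in range(n)]

for n in (10, 1000, 100000):
    readings = measure(n)
    avg = sum(readings) / n
    print(n, round(avg, 3))
# As n grows, the average converges -- but to TRUE_VALUE + BIAS,
# not TRUE_VALUE: the random part cancels, the systematic part does not.
```

Averaging more readings does shrink the random scatter, so the claim is half true; what it can never do is remove a bias shared by every reading, which is the part of the uncertainty that matters most in the instrument-calibration arguments above.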
Your son was taught how to properly propagate uncertainty. If I’m wrong, then please link us to the text(s) he was assigned.