Guest Essay by Kip Hansen — 22 June 2022
Why do we have so many wildly varying answers to so many of the important science questions of our day? Not only varying, but often directly contradictory. In the health and human diet field, we have findings that meat/salt/butter/coffee/vitamin supplements are good for human health and longevity (today…) and simultaneously or serially, dangerous and harmful to human health and longevity (tomorrow or yesterday). The contradictory findings are often produced through analyses using the exact same data sets. We are all so well aware of this in health that some refer to it as a type of “whiplash effect”.
[Note: This essay is almost 3000 words – not a short news brief or passing comment. It discusses an important issue that crosses science fields. – kh ]
In climate science, we find directly opposing findings on the amount of ice in Antarctica (here and here), both from NASA, or on the rate at which the world’s oceans are rising, or not/barely rising. Studies are being pumped out which show that the Earth’s coral reefs are (pick one) dying and mostly dead, in trouble regionally, thriving regionally, or generally doing just fine overall. Pick almost any scientific topic of interest to the general public today and the scientific literature will reveal that there are answers to the questions people really want to know – plenty of them – but they disagree or directly contradict one another.
One solution to this problem that has been suggested is the Many-Analysts Approach. What is this?
“We argue that the current mode of scientific publication — which settles for a single analysis — entrenches ‘model myopia’, a limited consideration of statistical assumptions. That leads to overconfidence and poor predictions.
To gauge the robustness of their conclusions, researchers should subject the data to multiple analyses; ideally, these would be carried out by one or more independent teams. We understand that this is a big shift in how science is done, that appropriate infrastructure and incentives are not yet in place, and that many researchers will recoil at the idea as being burdensome and impractical. Nonetheless, we argue that the benefits of broader, more-diverse approaches to statistical inference could be so consequential that it is imperative to consider how they might be made routine.” [ “One statistical analysis must not rule them all” — Wagenmakers et al. Nature 605, 423-425 (2022), source or .pdf ]
Here’s an illustration of the problem used in the Nature article above:

This chart shows the results of nine different teams that analyzed the UK data on Covid spread in 2020:
“This paper contains estimates of the reproduction number (R) and growth rate for the UK, 4 nations and NHS England (NHSE) regions.
Different modelling groups use different data sources to estimate these values using mathematical models that simulate the spread of infections. Some may even use all these sources of information to adjust their models to better reflect the real-world situation. There is uncertainty in all these data sources, which is why estimates can vary between different models, and why we do not rely on one model; evidence from several models is considered, discussed, combined, and the growth rate and R are then presented as ranges.” … “This paper references a reasonable worst-case planning scenario (RWCS).”
Nine teams, all with access to the same data sets, produced nine very different results, ranging from “maybe the pandemic is receding” (the R range includes values below 1) to “this is going to be really bad” (R between 1.5 and 1.75). How do policy makers use such results to formulate a pandemic response? The range of results is so wide that it merely restates the question itself: “Is this going to be OK or is it going to be bad?” One group was quite sure it was going to be bad (and they seem to have been right). At that time, with these results, the question remained unanswered.
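As a rough sketch of why this is so unhelpful, consider how several teams’ R ranges combine into one “consensus” range, as the quoted UK guidance describes. The team names and numbers below are invented for illustration; they are not the actual published estimates.

```python
# Hypothetical sketch: combining R ranges from several modelling teams.
# Team names and ranges are invented; they are not the actual SPI-M estimates.

team_r_ranges = {
    "Team A": (0.7, 1.0),    # "maybe the pandemic is receding"
    "Team B": (0.9, 1.2),
    "Team C": (1.1, 1.4),
    "Team D": (1.5, 1.75),   # "this is going to be really bad"
}

# One simple way a combined range is formed: the envelope of all team ranges.
low = min(lo for lo, hi in team_r_ranges.values())
high = max(hi for lo, hi in team_r_ranges.values())
print(f"Combined R range: {low:.2f} to {high:.2f}")

# If the envelope straddles R = 1, it cannot tell a policymaker whether the
# epidemic is growing or shrinking -- which was the original question.
if low < 1.0 < high:
    print("The combined range includes 1: growth vs. decline is undecided.")
```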
Wagenmakers et al. then say this:
“Flattering conclusion
“This and other ‘multi-analyst’ projects show that independent statisticians hardly ever use the same procedure. Yet, in fields from ecology to psychology and from medicine to materials science, a single analysis is considered sufficient evidence to publish a finding and make a strong claim.” … “Over the past ten years, the concept of P-hacking has made researchers aware of how the ability to use many valid statistical procedures can tempt scientists to select the one that leads to the most flattering conclusion.”
Researchers are not only tempted to select the procedures that lead to the “most flattering” conclusion, but also those whose conclusions best agree with the prevailing bias of their research field. [ ref: Ioannidis ].
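A minimal simulation (illustrative only, not from Wagenmakers et al.) shows why “pick the most flattering procedure” matters: run several individually defensible analyses on pure noise and report whichever gives the smallest p-value, and the nominal 5% false-positive rate inflates noticeably.

```python
# Sketch: selecting the most "flattering" of several valid analyses
# inflates the false-positive rate, even on pure noise.
import math
import random
import statistics

def t_test_p(sample):
    """Two-sided one-sample test against mean 0 (normal approximation)."""
    n = len(sample)
    t = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(1)
trials, alpha = 2000, 0.05
hits_single, hits_best_of = 0, 0
for _ in range(trials):
    data = [random.gauss(0, 1) for _ in range(40)]     # pure noise, no real effect
    analyses = [
        data,                              # analyze everything
        data[5:],                          # "drop early observations"
        [x for x in data if abs(x) < 2],   # "remove outliers"
        data[::2],                         # "use an independent subsample"
    ]
    pvals = [t_test_p(a) for a in analyses]
    hits_single += pvals[0] < alpha        # one pre-committed analysis
    hits_best_of += min(pvals) < alpha     # pick the most flattering result

print(f"False-positive rate, one pre-specified analysis: {hits_single/trials:.3f}")
print(f"False-positive rate, best of four analyses:      {hits_best_of/trials:.3f}")
```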
Wagenmakers et al. seem to think that this is just about uncertainty: “The dozen or so formal multi-analyst projects completed so far (see Supplementary information) show that levels of uncertainty are much higher than that suggested by any single team.”
Let’s see where this goes in another study, “A Many-Analysts Approach to the Relation Between Religiosity and Well-being”, which was co-authored by Wagenmakers:
“Summary: In the current project, 120 analysis teams were given a large cross-cultural dataset (N = 10,535, 24 countries) in order to investigate two research questions: (1) “Do religious people self-report higher well-being?” and (2) “Does the relation between religiosity and self-reported well-being depend on perceived cultural norms of religion?”. In a two-stage procedure, the teams first proposed an analysis and then executed their planned analysis on the data.
Perhaps surprisingly in light of previous many-analysts projects, results were fairly consistent across teams. For research question 1 on the relation between religiosity and self-reported well-being, all but three teams reported a positive effect size and confidence/credible intervals that excluded zero. For research question 2, the results were somewhat more variable: 95% of the teams reported a positive effect size for the moderating influence of cultural norms of religion on the association between religiosity and self-reported well-being, with 65% of the confidence/credible intervals excluding zero.”
The 120 analysis teams were given the same data set and asked to answer two questions. While Wagenmakers calls the results “fairly consistent”, what the results show is that they are just not as contradictory as the Covid results. On the first question, 117 teams found a “positive effect size” whose CI excluded zero. All these teams agreed at least on the sign of the effect, but not its size. Three teams found an effect that was negative or whose CI included zero. The second question fared less well. While 95% of the teams found a positive effect, only 65% had CIs excluding zero.
Consider such results for the effect of some new drug – the first question looks pretty good despite great variation in the size of the positive effect, but on the second question roughly a third of the analysis teams reported positive effects whose CIs included zero – which means a null effect cannot be ruled out. With such results, we might be “pretty sure” that the new drug wasn’t killing people, but not so sure that it was good enough to be approved. I would call for more testing.
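As a sketch of how such a tally works (the numbers below are invented team results, not the actual 120), count how many teams report a positive estimate and how many report an interval that excludes zero; the two counts answer different questions.

```python
# Sketch: tallying many-analysts results the way the religiosity project reports them.
# Each tuple is a hypothetical team's (effect_estimate, ci_low, ci_high); not the real teams.

team_results = [
    (0.12, 0.05, 0.19),
    (0.08, -0.01, 0.17),   # positive estimate, but interval includes zero
    (0.20, 0.11, 0.29),
    (-0.03, -0.10, 0.04),  # negative estimate
    (0.15, 0.02, 0.28),
]

positive = [r for r in team_results if r[0] > 0]
excludes_zero = [r for r in team_results if not (r[1] <= 0 <= r[2])]

print(f"Teams reporting a positive effect:  {len(positive)}/{len(team_results)}")
print(f"Teams whose interval excludes zero: {len(excludes_zero)}/{len(team_results)}")
# An interval that includes zero does not prove a null effect --
# it only means a null effect cannot be ruled out at that confidence level.
```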
But wait …. can’t we just average the results of the 120 teams and get a reliable answer?
No, averaging the results is a very bad idea. Why? Because we do not understand, at least at this point, why the analyses arrived at such different results. Some of them must be “wrong” and some of them may be “right”, particularly when results contradict one another. In 2020, the finding that Covid was receding in the UK was simply incorrect. Should the incorrect answers be averaged into the maybe-correct answers? If four drug analyses say “this will harm people” and six analyses say “this will cure people” – do we give it a 60/40 and approve it?
Let’s look at a sports example. Since soccer is the new baseball, we can look at this study: “Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results”. (Note: Wagenmakers is one of a dizzying list of co-authors). Here’s the shortest form:
“Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship.”
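To see how “the same data” can yield different effect sizes, here is a hedged sketch with invented counts (not the real red-card dataset): the same odds-ratio formula applied under two defensible inclusion rules gives two different answers.

```python
# Sketch: how one analytic choice (which records to include) shifts an odds ratio.
# The counts below are invented for illustration; they are not from the red-card dataset.

def odds_ratio(red_dark, no_red_dark, red_light, no_red_light):
    """Odds ratio from a 2x2 table of red cards by skin tone."""
    return (red_dark / no_red_dark) / (red_light / no_red_light)

# Choice 1: include all player-referee dyads
print(odds_ratio(red_dark=60, no_red_dark=3000, red_light=150, no_red_light=11000))

# Choice 2: include only dyads with many interactions (a defensible "data quality" filter)
print(odds_ratio(red_dark=35, no_red_dark=1200, red_light=95, no_red_light=5200))

# Both analyses use "the same data"; the estimate moves because the
# inclusion rule, covariates, and model family are analyst choices.
```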

If you want to understand this whole Many-analysts Approach, read the soccer paper linked just above. It concludes:
“Implications for the Scientific Endeavor: It is easy to understand that effects can vary across independent tests of the same research hypothesis when different sources of data are used. Variation in measures and samples, as well as random error in assessment, naturally produce variation in results. Here, we have demonstrated that as a result of researchers’ choices and assumptions during analysis, variation in estimated effect sizes can emerge even when analyses use the same data.
The main contribution of this article is in directly demonstrating the extent to which good-faith, yet subjective, analytic choices can have an impact on research results. This problem is related to, but distinct from, the problems associated with p-hacking (Simonsohn, Nelson, & Simmons, 2014), the garden of forking paths (Gelman & Loken, 2014), and reanalyses of original data used in published reports.”
It sounds like Many-Analysts isn’t the answer – many analysts produce many analyses with many, even contradictory, results. Is this helpful? A little, as it helps us to realize that all the statistical approaches in the world do not guarantee a correct answer. They each produce, if applied correctly, only a scientifically defensible answer. Each new analysis is not “Finally the Correct Answer” – it is just yet another analysis with yet another answer.
Many-analyses/many-analysts is closely related to the many-models approach. The following images show how many models produce many results:

[ Note: The caption is just plain wrong about what the images mean….see here. ]

Ninety different models, projecting both the past and the future, all using the same basic data inputs, produce results so varied as to be useless. Projecting their own present (2013), the Global Temperature 5-year Running Mean has a spread of 0.8°C, with all but two of the projections of the present being higher than observations. This unreality widens to 1°C by 2022, nine years into CMIP5’s future.
And CMIP6? Using data to 2014 or so (anyone know the exact date?) they produce this:

Here we are interested not in the differences between observed and modeled projections, but in the spread of the different analyses – many show results that are literally off the top of the chart (and far beyond any physical possibility) by 2020. The “Model Mean” (red-bordered yellow squares) is nonsensical, as it includes those impossible results. Even some of the hindcasts (projections of known data in the past) are impossible and known to be more than wrong: for instance, in 1993 and 1994 one model hindcasts temperatures below -0.5, while another hindcasts 1975-1977 temperatures a full degree too high.
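A small sketch of why a multi-model mean that keeps implausible members is uninformative (the anomaly values are invented ensemble members for a single year, not actual CMIP6 output):

```python
# Sketch: an ensemble "model mean" is dragged around by implausible outlier runs.
# Anomaly values (°C) below are invented, not CMIP6 output.
import statistics

ensemble = [0.4, 0.5, 0.55, 0.6, 0.65, 0.7, 1.4, 1.6]   # last two: implausible outliers
observed = 0.45

print(f"Ensemble mean:   {statistics.mean(ensemble):.2f}")
print(f"Ensemble median: {statistics.median(ensemble):.2f}")
print(f"Observed:        {observed:.2f}")
# The mean is pulled upward by the outlier runs; the spread (max - min)
# is what a policymaker actually has to plan around.
print(f"Spread: {max(ensemble) - min(ensemble):.2f} °C")
```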
A 2011 paper compared different analyses of possible sea level rise in 5 Nunavut communities (in Alaska). It presented this chart for policymakers:

For each community, the spread of the possible SLR given is between 70 and 100 cm (roughly 28 to 39 inches) — for all but one locality, the range includes zero. Only for Iqaluit is even the sign of the change (up or down) settled within the 95% confidence interval. The combined analyses are “pretty sure” sea level will go up in Iqaluit. But for the others? How does Whale Cove set policies to prepare for either a 29-inch drop in sea level or an 8-inch rise? For Whale Cove, the study is useless.
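The policy-relevant check is embarrassingly simple, as this sketch shows (the ranges are rough conversions of the figures quoted above, not the paper’s table; treat them as approximations):

```python
# Sketch: does a projected sea-level-change range even resolve the sign?
# Ranges in cm, approximated from the figures quoted in the text above.

projections_cm = {
    "Iqaluit":    (10, 80),    # interval excludes zero: a rise is the expected outcome
    "Whale Cove": (-74, 20),   # roughly a 29-inch drop or an 8-inch rise
}

for place, (lo, hi) in projections_cm.items():
    sign_resolved = not (lo <= 0 <= hi)
    print(f"{place:10s}: {lo:+4d} to {hi:+4d} cm  -> sign resolved: {sign_resolved}")
```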
How can multiple analyses like these add to our knowledge base? How can policymakers use such data to make reasonable, evidence-based decisions?
Answer: They can’t.
The most important take-home from this look at the Many-Analysts Approach is:
“Here, we have demonstrated that as a result of researchers’ choices and assumptions during analysis, variation in estimated effect sizes can emerge even when analyses use the same data.
The main contribution of this article is in directly demonstrating the extent to which good-faith, yet subjective, analytic choices can have an impact on research results.” [ source ]
Let me interpret that for you, from a pragmatist viewpoint:
[Definition of PRAGMATIST: “someone who deals with problems in a sensible way that suits the conditions that really exist, rather than following fixed theories, ideas, or rules” source ]
The Many-Analysts Approach shows that research results, both quantitative and qualitative, are primarily dependent on the analytical methods and statistical approaches used by analysts. Results are much less dependent on the data being analyzed and sometimes appear independent of the data itself.
If that is true, if results are, in many cases, independent of the data, even when researchers are professional, unbiased and working in good faith, then what of the entire scientific enterprise? Is all of the quantified science, the type of science looked at here, just a waste of time, useless for making decisions or setting policy?
And if your answer is Yes, what is the remedy? Recall, Many-Analysts is proposed as a remedy to the situation in which: “in fields from ecology to psychology and from medicine to materials science, a single analysis is considered sufficient evidence to publish a finding and make a strong claim.” The situation in which each new research paper is considered the “latest findings” and touted as the “new truth”.
Does the Many-Analysts Approach work as a remedy? My answer is no – but it does expose the underlying reality, unfortunate for science, that in far too many cases the findings of analyses depend not on the data but on the methods of analysis.
“So, Mr. Smarty-pants, what do you propose?”
Wagenmakers and his colleagues propose the Many-Analysts Approach, which simply doesn’t appear to work to give us useful results.
Tongue-in-cheek, I propose the “Locked Room Approach”, alternately labelled the “Apollo 13 Method”. If you recall the story of Apollo 13 (or the movie), an intractable problem was solved by ‘locking’ the smartest engineers in a room with a mock-up of the problem; with the situation demanding an immediate solution, they had to resolve their differences in approach and opinion to find a real-world solution.
What science generally does now is the operational opposite – we spread analytical teams out over multiple research centers (or lumped into research teams at a “Center for…”) and have them compete for kudos in prestigious journals, earning them fame and money (grants, increased salaries, promotions based on publication scores). This leads to pride-driven science, in which my/our result is defended against all comers and contrary results are often denigrated and attacked. Science Wars ensue – volleys of claims and counter-claims are launched in the journals – my team against your team – we are right and you are wrong. Occasionally we see papers that synopsize all competing claims in a review paper or attempt a meta-analysis, but nothing is resolved.
That is not science – that is foolishness.
There are important issues to be resolved by science. Many of these issues have plenty of data but the quantitative answers we get from many analysts vary widely or are contradictory.
When the need is great, then the remedy must be robust enough to overcome the pride and infighting.
Look at any of the examples in this essay. How many of them could be resolved by “locking” representatives from each of the major currently competing research teams in a virtual room and charging them with resolving the differences in their analyses in an attempt to find not a consensus, but the underlying reality to the best of their ability? I suspect that many of these attempts, if done in good faith, would result in a finding of “We don’t know.” Such a finding would produce a list of further research that must be done to resolve the issue and clarify uncertainties along with one or more approaches that could be tried. The resultant work would not be competitive but rather cooperative.
The Locked Room Approach is meant to bring about truly cooperative research: research in which groups peer-review each other’s research designs before the time and money are spent; in which groups agree in advance upon the questions needing answers; in which they agree upon the data itself, asking whether it is sufficient and adequate or whether more data collection is needed; and in which they agree which groups will perform which parts of the necessary research.
There exist, in many fields, national and international organizations like the AGU, the National Academies, CERN, the European Research Council and the NIH that ought to be doing this work – organizing cooperative focused-on-problems research. There is some of this being done, mostly in medical fields, but far more effort is wasted on piecemeal competitive research.
In many science fields today, we need answers to questions about how things are and how they might be in the future. Yet researchers, after many years of hard work and untold research dollars expended, can’t even agree on the past or on the present for which good and adequate data already exists.
We have lots of smart, honest and dedicated researchers but we are allowing them to waste time, money and effort competing instead of cooperating.
Lock ‘em in a room and make ‘em sort it out.
# # # # #
Author’s Comment:
If only it were that easy. If only it could really be accomplished. But we must do something different or we are doomed to continue to get answers that contradict or vary so widely as to be utterly useless. Not just in CliSci, but in medicine, the social ‘sciences’, biology, psychology, and on and on.
Science that does not produce new understanding or new knowledge, that does not produce answers society can use to find solutions to problems, or that does not correctly inform policy makers, is USELESS and worse.
Dr. Judith Curry has proposed such cooperative efforts in the past such as listing outstanding questions and working together to find the answers. Some efforts are being made with Cochrane Reviews to find out what we can know from divergent results. It is not all hopeless – but hope must motivate action.
Mainstream Climate Science, those researchers that make endless proclamations of doom to the Mainstream Media, are lost on a sea of prideful negligence.
Thanks for reading.
# # # # #
Why does Kip post a graph of temperatures since 1983 which has the actual surface observational data ending in 2013? Oh, yeah, that’s right … there’s been a huge temperature spike since then that has brought observed temperatures back up to around the mean level of the models (black line in the graph).
Can’t display the data over the past decade or so; it doesn’t support the “skeptical” agenda.
That’s ANOTHER lie, MGC. The 2016 El Nino spike brought the actual temperature close to the median of the model predictions but actuals are now back near the bottom.
Your lies aren’t going to convince anyone, MGC. Your history of being disproved has given you the reputation of someone to be distrusted.
re: “actuals are now back near the bottom”
Attached is the same graph with the HadCRUT4 data extended up to the present.
Your claim is nowhere near correct. As usual.
Don’t you ever get tired of being wrong, Meab?
Leave it up to you, MGC, to post a chart that PROVES that you are wrong. Do you even look at the stuff you post?
The actuals ARE near the bottom of the range of predictions. LOOK! Open your eyes.
Jesus.
The last several years have zig-zagged above and below the mean trend line. And the latest point is less than half the distance between the mean and the very bottom.
“Look! Open your eyes.”
In 6 of the last 8 years actuals have been below the median prediction. Actuals are now near the bottom of the range, as I said. You lied and were stupid enough to post a plot that proved that you lied.
I see a collection of predictions that are almost all way above actuals nearly all the time – because that’s what it is. You see a tiny number of years that don’t fit the overall pattern and think you can support your dishonest alarmist position by calling attention to just those few exceptions.
When a thinking person sees a pile of manure they think “this is shit and I better not step in it.” When a maggot sees a pile of manure they think “lunch”. You’re like the maggot.
re: “Actuals are now near the bottom of the range”
And the Meabian falsehoods just keep on coming.
Splitting the range into four quadrants, two above the mean, two below, actuals are in the 3rd quadrant. They’d have to be in the 4th quadrant to be “near the bottom of the range”.
The Las Vegas sports book approach might work better for climate research. Have the research groups publish their predictions for 2025. Agree on an actual measure (like UAH) to be used to score which prediction comes closest. Let the betting begin. The winning prediction (closest to UAH in 2025) will become the favorite for 2030. The odds will soon favor the (few) reasonable models and discount the alarmist (RCP8.5) models.
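A sketch of the scoring step described above; the group names, predictions, and the observed value are all invented placeholders, not real forecasts.

```python
# Sketch of the "sports book" idea: score each group's published prediction
# against an agreed observational series once the target year arrives.
# Group names, predictions, and the observed value are placeholders.

predictions_2025 = {                      # predicted global anomaly, °C
    "Group A (low sensitivity)": 0.35,
    "Group B (mid sensitivity)": 0.55,
    "Group C (high-end scenario)": 0.95,
}
observed_2025 = 0.42                      # whatever the agreed measure reports

ranked = sorted(predictions_2025.items(), key=lambda kv: abs(kv[1] - observed_2025))
for name, pred in ranked:
    print(f"{name:28s} error = {abs(pred - observed_2025):.2f} °C")
print(f"Winner (favorite for 2030): {ranked[0][0]}")
```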
And then we need to punish people like Mann who delete the unfavorable data to come up with an answer that matches his prediction.
Another blind parroting of anti-science nonsense. Mann’s results have been corroborated over and over and over and over and over and over and over again, by a variety of researchers, from a variety of scientific disciplines, from all over the world, using a variety of different techniques.
Pretending otherwise is just plain old lying.
ACK! More irony.
“A Disgrace To The Profession”
Speaking of “another blind parroting of anti-science nonsense” …
Meab ==> And for medical research?
A similar approach is already used in medical research. It’s called the free market. Biopharmaceuticals that have a track record of success attract investment. Those that kill patients tend to go bankrupt.
The reason that the free market doesn’t work for climate research is because governments keep funding climate researchers no matter how badly they have performed.
Several such bets between scientists and “skeptics” have been performed over the years. Not surprisingly, the “skeptics” lost pretty much every time.
Here’s one example:
This scientist keeps winning money from people who bet against climate change“It’s like taking candy from a baby.”
https://mashable.com/article/climate-change-science-bet
Really? Another trotting out of that same tired old tropospheric hot spot cherry pick from John Christy?
The cherry picked region that Christy analyzed constitutes only a tiny 5% of the volume of the troposphere. It is cherry picked because it just happens to be the one region with the largest negative divergence between observations and models (less warming than models projected).
There are also other regions of the earth that are warming much faster than models projected. But so called “skeptics” never make mention of those regions now, do they?
No, of course not. Doesn’t support the anti-science agenda.
My region was cold. Gosh the winter was awful with the 4th coldest April ever.
Fourth coldest Spring (March to June) ever recorded going back 130 years for the Puget Sound in Washington State.
Weather is highly variable but the predicted equatorial tropospheric hot spot still hasn’t happened.
MGC doesn’t understand that if GCM models get this wrong, then they’re missing critical atmospheric physics. It doesn’t surprise me that he doesn’t understand as he’s the idiot who claims that Global Warming will simultaneously cause fresh water to become a scarce resource while also causing the world to be inundated with rain.
Yet another round of mindless cherry picking.
The “equatorial tropospheric hot spot” constitutes a mere 5% of the total volume of the troposphere. Folks who pretend that this little 5% discrepancy somehow “invalidates” the greenhouse effect are completely missing the forest for the trees.
Oh, and climate change actually is causing water to become scarce in some places (like the U.S. Southwest) even while rainfall increases in many other locations.
What’s been happening in places like the U.S. Southwest is exactly what was predicted decades ago by those so-called “failed” climate models. But folks like Meab just stick their heads in the sand babble “Nuh Uh because I say so” over and over and over again.
Such tragic anti-science nonsense.
“Oh, and climate change actually is causing water to become scarce in some places (like the U.S. Southwest) even while rainfall increases in many other locations.”
The US Southwest has been semi-arid desert and/or arid desert FOR MILLENNIA! That’s why the climate there is classified as *desert*!
Water has *always* been scarce in the US Southwest. That’s why plant life like cactus and scrub mesquite evolved there!
The more things change the more they stay the same. History didn’t begin when you were born!
Tim ==> And if the present turns out to be part of a new Mega-Drought, it won’t be the first one — only the first one to take place when humans have built megalopolises in that desert.
Same tired old “skeptical” excuses, blindly echoed over and over and over again. Just pretend away the fact that this is exactly what the so-called “failed” models predicted decades ago.
So the GCMs were able to predict regional precipitation totals?
You are a liar.
These general regional trends were predicted, yes, decades ago. Sorry that you are unable to handle the truth.
“Same tired old “skeptical” excuses, blindly echoed over and over and over again. Just pretend away the fact that this is exactly what the so-called “failed” models predicted decades ago.”
In other words you have no actual rebuttal to offer as to what the history of the SW US has been over millennia.
So the climate models predicted deserts would remain deserts? ROFL!!
Where is the regional climate model for the SW US that actually shows the SW US remaining the desert that it has always been? Give us a link!
Gorman bleats: “So the climate models predicted deserts would remain deserts?”
No, models predicted that an already arid region would become even more arid. Duh.
Here is one example:
Model Projections of an Imminent Transition to a More Arid Climate in Southwestern North America
Seager et al Science 2007
“Here we show that there is a broad consensus among climate models that this region will dry in the 21st century and that the transition to a more arid climate should already be under way.”
And that’s exactly what’s happened.
Isn’t it amazing how so-called “skeptics” like Gorman like to pretend to themselves that they are “well informed” about climate change, yet they constantly reveal that they haven’t the first clue.
Perhaps such folks should look into reputable sources of climate information for a change.
How funny, LOL. Tell us what the rainfall of arid and more arid actually is!
Somehow I suspect there is little difference. Probably within the uncertainty limits.
“No, models predicted that an already arid region would become even more arid. Duh.”
How does an arid desert become “more arid”?
““Here we show that there is a broad consensus among climate models that this region will dry in the 21st century and that the transition to a more arid climate should already be under way.””
In other words the Southwest is going to return to what it has always been! Exactly what I said!
“Isn’t it amazing how so-called “skeptics” like Gorman like to pretend to themselves that they are “well informed” about climate change, yet they constantly reveal that they haven’t the first clue.”
History didn’t begin when you were born. There *is* a reason why the Southwest has never been a highly populated area, not for thousands of years. Places like Las Vegas, etc in the Southwest were made temporarily habitable by man-made infrastructure and by natural variation of moisture. But the actual climate never really changed!
re: “How does an arid desert become “more arid”?”
This has gotta be one of the all-time most tragically ignorant Gormanian comments ever.
So typical.
In other words you actually have nothing to offer in rebuttal. Why am I not surprised?
The posted reference (Seager et al Science 2007) contains all the information required to explain, quantitatively, how “arid” becomes “more arid” and to completely refute your abysmally ignorant comments.
But of course you’ve blindly ignored that reference. “Why am I not surprised?” You Gormans shamefully continue to wallow in your truly tragic cesspool of willful “skeptical” ignorance.
I didn’t ignore the reference. Here is the abstract (the paper itself is paywalled and I’m not going to pay for something that is based only on models).
“How anthropogenic climate change will affect hydroclimate in the arid regions of southwestern North America has implications for the allocation of water resources and the course of regional development. Here we show that there is a broad consensus among climate models that this region will dry in the 21st century and that the transition to a more arid climate should already be under way. If these models are correct, the levels of aridity of the recent multiyear drought or the Dust Bowl and the 1950s droughts will become the new climatology of the American Southwest within a time frame of years to decades.”
here is a graph from the article:
Their model runs from 1900 to 2080. Hardly representative of the history of the desert SW in the US.
This *is* nothing more than the desert SW returning to its historical norms. That may be an inconvenient truth for you to accept but it is the truth nonetheless!
re: “This *is* nothing more than the desert SW returning to its historical norms”
Regardless whether this is a “return to historical norms” or not, the change is being driven, as the paper clearly states, by anthropogenic influences.
A statement you so conveniently ignored.
So typical.
“Regardless whether this is a “return to historical norms” or not,”
In other words, you are a faithful believer and no evidence will change your belief.
You reference is a MODEL! It’s a model formulated to show anthropogenic influences, whether they exist or not!
If it is nothing more than a return to historic norms then what, EXACTLY, set the historic norms? It certainly wasn’t anthropogenic. In order to show that it is anthropogenic today, the model would have to show *what* caused the historical climate and how that “what” isn’t the operative force today! And the model doesn’t do that! It doesn’t even attempt to quantify historic climate in the US SW! The model is just like you – history began when you were born!
re: “It’s a model formulated to show anthropogenic influences, whether they exist or not!”
And the delusional zero evidence Gormanian conspiracy theories just go on and on and on and on …
So tragically sad.
Ad hominem after ad hominem attack. Haven’t you learned by now that this kind of argument earns you no trust at best and a failing grade at worst.
You are terrible at being a troll. Most good trolls limit their exposure and continually regurgitate the same reference in post after post. You do none of this, only ad hominem attacks that people get tired of seeing. Good luck with that tactic!
Sorry that you are unable to handle the truth about your claims about climate models, Gorman.
You’ve tried to falsely pretend that the scientists have created models that will provide the result they want. “a model formulated to show anthropogenic influences, whether they exist or not”.
But you have zero evidence to back any of this claim. None. Nada. Zilch. Zippo. Squadoosh.
It is therefore not “ad hominem” to state that your claims are “delusional zero evidence conspiracy theories”.
That is an entirely valid description.
Zero evidence?
The present discussion concerns modeled climate projections of the earth’s surface in the Southwestern U.S .
But J Gorman wants to ridiculously pretend that a graph of atmospheric temperatures in the tropics, at altitudes several kilometers above the surface, no less, is somehow “relevant” to this discussion.
Unbelievably ludicrous.
I have to admit that I’m actually ashamed to even be “discussing” climate science at all with someone like J Gorman, who constantly proves, over and over and over and over and over and over and over again, that he hasn’t the first clue what he is talking about.
Why don’t you include pertinent information from your references? You don’t engender trust by cherry picking.
From the abstract you referenced.
“If these models are correct, the levels of aridity of the recent multiyear drought or the Dust Bowl and the 1950s droughts will become the new climatology of the American Southwest within a time frame of years to decades.”
The “recent drought, Dust Bowl drought, 1950s drought.” Sounds very much like nothing new is going to happen, doesn’t it?
No wonder the news article had to hype the article with a fake headline. Too bad you didn’t read the whole thing with understanding.
And here’s another Gorman so conveniently ignoring the fact that the paper states that these changes occurring are due to anthropogenic influence .
Also being ignored is the original issue which began this thread, that being that what “skeptics” like to pretend are “failed” climate models have … in reality … accurately predicted what is now occurring in the U.S. Southwest.
Yep, it’s a seemingly never ending parade of one “skeptical” fail after another after another after another after another with these Gormans!
Keep on spinning and maybe you can convert someone to your faith.
The abstract of the study said:
“If these models are correct, the levels of aridity of the recent multiyear drought or the Dust Bowl and the 1950s droughts will become the new climatology of the American Southwest within a time frame of years to decades.”
Like it or not, these “levels of aridity” have occurred before, and at best, the models indicate that they will occur again. You can’t spin it any other way.
Gormania keeps on ignoring the fact that the key point that began this entire thread was the false “skeptical” claim of so-called “failed” models.
Climate models accurately predicted, long ago, what is now occurring in the U.S. Southwest. And those predictions were accurate because the models did not ignore anthropogenic influences, as so-called “skeptics” would blindly do.
The evidence here demonstrates that these “failed model” claims are just flat out wrong. Just like practically everything else that so-called “skeptics” claim.
“Climate models accurately predicted, long ago”
ROFL!!
If I create a model showing that prairie grasses in the semi-arid Great Plains will grow 8′-10′ deep root systems because of “more arid” conditions is that predicting something unknown? Or is that predicting a return to historical norms?
How did those prairie grasses evolve to have the capability of such deep root systems? Has that evolution occurred just since the 1920’s? Or did it evolve thousands of years ago when conditions were similar to today?
Predicting a return to historical norms isn’t earth shattering! History didn’t start the day you were born!
And the mindless Gormanian hand waving sadly continues.
re: “Predicting a return to historical norms isn’t earth shattering”
It is not a return to “historical norms”; it is a return to historical lows.
Predicting almost exactly when this happens, and showing that it would not have happened if anthropogenic influences were ignored, says something.
But so-called “skeptics” like Gorman just blindly ignore what this is saying to us, because they can’t handle the truth.
Historical is the operative words. Nothing that hasn’t happened before, as you say. Funny how people think “lows” in the past aren’t precedent setting. Do you really think the forecasted lack of rainfall is unprecedented? I suppose a lack of CO2 caused that!
More sadly and tragically mindless Gormanian handwaving nonsense, representing a shameful inability to accept the demonstrated fact that the models have made accurate projections of Southwest U.S. climate.
Those accurate projections are predicated, by the way, on not following the so-called “skeptics” who would ignorantly and disingenuously pretend away obviously important anthropogenic influences.
Yea – like that has never happened before. Only after we start driving SUVs does this happen.
Looks like you’re guilty of cherry-picking too!
Speaking of cherry picking, maggot, – climate models are all over the map predicting droughts here and droughts there. As many predictions haven’t come true as have, but you choose to lie and claim that climate models predicted the drought in the SW. They all didn’t agree.
It’s well known that climate models have exhibited no skill in predicting local long-term departures from normal rainfall (where normal means the local historical distribution). Claim they do and you’re lying again.
Listen up, maggot, read Cadillac Desert written in 1979 (well before the “Climate Crisis” scam) by Marc Reisner. It details the history of drought in the US SW. Want a contemporaneous account of a larger scale drought than the current drought that occurred before the “Climate crisis” scam? Read “The Grapes of Wrath” written by John Steinbeck in 1939.
re: “As many predictions haven’t come true as have”
Another made up out of thin air “skeptical” fairy tale. Typical.
The tropical hotspot is such a fundamental core concept to the global warming hypothesis as put forward that without it, the whole hypothesis falls apart. Most people, like MGC, have never bothered to go back and read the hypothesis that was presented so have little or no idea what they are discussing. It is not about ‘a bit of warming here and there’ it was about a fundamental shift in temperatures and weather patterns that couldn’t be explained by any other mechanism known up until that point. The fact that it couldn’t be supported by observations of the required hotspots seems to have escaped most people who persist in the ignorance that any amount of warming must be catastrophic anthropogenic global warming!
re: “The tropical hotspot is such a fundamental core concept to the global warming hypothesis as put forward that without it, the whole hypothesis falls apart”
Another well worn falsehood blindly parroted over and over and over again within the pseudo-scientific “skeptical” echo chamber.
This “hotspot” is not something that would occur “only” by greenhouse gas warming. Other warming mechanisms could create the “hotspot” as well. But “skeptics” falsely pretend otherwise.
“Most people, like Richard, have never bothered to go back and read the hypothesis that was presented so have little or no idea what they are discussing.”
So why are politicians focusing on CO2, then?
none of these people are using “data” in the pure sense of the word … what is the error band of the model 2+2 … trick question … there is no error band … the answer is 4 … forever … the 2 is real data … what these fools have is a measurement of 2 which may or may not actually be 2 (all measurements have error bands) … by treating the 2 as data when it is a “guess” not data, they are starting with a flawed process and simply hiding the flaws by calling it data … so in the climate world 2+2 should really be 2(+/-.5) + 2(+/-.5) which gives an answer of somewhere between 3 and 5 …
just because you can try to “measure” something doesn’t mean you have meaningful “data” in those measurements …
sure, it’s better than nothing … but not by much …
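As a sketch of the 2(±0.5) + 2(±0.5) arithmetic in the comment above, here are two standard conventions side by side: the worst-case (interval) rule gives the “somewhere between 3 and 5” answer, while the root-sum-square rule is what is usually applied to independent random uncertainties.

```python
# Sketch of the interval arithmetic in the comment above:
# treating each "2" as 2 +/- 0.5 and propagating the uncertainty.

def add_worst_case(a, ua, b, ub):
    """Worst-case (interval) addition: uncertainties add directly."""
    return a + b, ua + ub

def add_quadrature(a, ua, b, ub):
    """Root-sum-square addition, the usual rule for independent random uncertainties."""
    return a + b, (ua**2 + ub**2) ** 0.5

v, u = add_worst_case(2, 0.5, 2, 0.5)
print(f"Worst case:  {v} +/- {u}  (i.e. somewhere between {v - u} and {v + u})")

v, u = add_quadrature(2, 0.5, 2, 0.5)
print(f"Quadrature:  {v} +/- {u:.2f}")
```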
Dark ==> You are right somewhat about the tendency of modern science (all fields) to use “point data” ignoring that much of the real data are actually ranges.
This occurs because mathematicians have been utilized to do “statistical analysis” of numbers. How many statistical classes in math departments ever address measurement errors and uncertainty? Let’s use a climate science number, 0.01%!
I’ll do a short screed on climate science. When was the last study you saw that actually dealt with causative measurements? 99.99% of studies have to do with time series of temperature. In other words, trying to make a forecast of what will happen to temperature trends or what else correlate with temperature trends. The last study I read that dealt with causative measurements was Dr. Happer’s paper. It is replicable by anyone who wishes to do so.
Which brings us to your subject, analysis of data. Statistical analysis seldom will allow a mathematical derivation of causative phenomena that allow one to accurately PREDICT a physical response. All statistical analysis includes an error/uncertainty as shown with the variance/standard deviation statistical descriptors. Here is a website that provides a pretty good definition.
Descriptive Statistics – Examples, Types, Definition, Formulas (cuemath.com)
It is interesting to see the difference between descriptive and inferential descriptors and their uses.
A whole lot of inferential statistical analysis is being done in climate and other science. As one who spent years analyzing usage data for telephone central office equipment, people requirements, and budgets, I can tell you bosses love predictions. Those making them by inferring what might happen based upon history, not so much. Inferences are not predictions and it is no wonder your study of different groups’ inferences shows what it does.
Measured data should *always* be specified as State Value +/- Uncertainty.
The rule in so much of science today is to assume that if you average enough data points then the uncertainty always cancels and can be ignored. It doesn’t matter what the data distribution looks like or whether there is any systematic uncertainty – uncertainty always cancels. It just makes it so much simpler to just ignore the propagation of uncertainty from the data elements into the final result.
I have five different probability/statistics textbooks I have collected since 2000. Not a single one has a single example of handling uncertainty in measured data. Not one! They all assume that all data is 100% accurate and so are the means calculated from the data.
They even go into how to handle samples from a population by calculating the means of the samples and then using those means to calculate the mean of the population. They *ALL* call the standard deviation of the sample means the “uncertainty” of the mean calculated from the sample means. They call it a “standard error” of the mean – when it has nothing to do with the accuracy of the mean calculated from the sample means but only how precisely the resultant mean has been calculated. If the standard deviation of the sample means is zero then the resultant mean is considered to be 100% accurate – even when the actual measurement data can have an uncertainty interval wider than the actual measurement itself!
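A small sketch of the distinction being drawn here, with invented thermometer readings: the standard error of the mean shrinks as readings accumulate, while a stated systematic instrument uncertainty does not; one common (GUM-style) convention is to combine the two in quadrature.

```python
# Sketch: standard error of the mean (precision of the computed mean) vs.
# a systematic instrument uncertainty shared by every reading.
import statistics

readings = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2]   # invented thermometer readings, °C
systematic_u = 0.5                                 # stated instrument uncertainty, °C

mean = statistics.mean(readings)
sem = statistics.stdev(readings) / len(readings) ** 0.5

print(f"Mean of readings:            {mean:.2f} °C")
print(f"Standard error of the mean:  {sem:.3f} °C  (shrinks as n grows)")
print(f"Systematic uncertainty:      {systematic_u:.2f} °C  (does not shrink with n)")

# One common (GUM-style) way to report a combined standard uncertainty:
combined = (sem**2 + systematic_u**2) ** 0.5
print(f"Combined uncertainty:        {combined:.2f} °C")
```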
Tim ==> If you read here regularly, you’ll know that I have beaten that drum for a decade…..
I know. And I appreciate it even if most do not!
Tim ==> And I do appreciate your support…..
Without uncertainty, all you have is a number. It’s not a measurement.
Michael Mann and Anthony Watts locked in a room… would be interesting.
Opus ==> Unfortunately, they could only talk about their opinions and convictions. If I remember correctly, Anthony did have lunch with Mann once — and reported he was quite cordial. (I could be wrong … anyone here with a good long memory?)
As part of my work, analysis was important but the analysis must be subjected to rigorous cross examination. When we were subjected to six sigma, my statistics partner used the line paraphrased from Mark Twain, ” there are liars, damned liars and then statisticians”. Always revert back to the science. Models built on coincident events only send you looking for the scientific connection and do not actually work for predicting anything.
Some time ago astronomers were confused about the temperature of Jupiter; they thought it should be hundreds of degrees colder than it is because of its distance from the Sun. They finally concluded that Jupiter’s aurora warmed the planet. Well, Earth also frequently experiences aurora, some visible, some not. I have not seen anywhere that the effect of such aurora has been factored into any of the climate models. Perhaps this is another one-trillionth-of-one-degree factor, among many, that makes these models and the policies motivated by them unfit for purpose. In order to model something successfully you have to understand it, and they don’t!
This is really not a problem – science, not to mention politics and religion, has always worked on the basis of debate and eventual resolution, one way or the other, even if the apparent “loser” refuses to concede defeat. Reality always intrudes eventually, sometimes it happens quickly, say over a few years time, and other times occurs over decades.
Nobody can call a time out on debate of any kind, or establish any kind of ground rules – which people will naturally ignore.
Look, there are still massive debates going on TODAY about the Civil War that ended 157 years ago, what caused it, what the issues were, how it should or should not have been prosecuted by either side, the rightness and wrongness of the two opposing causes, and how we should think about the same issues today. Society is still arguing over the Confederate battle flag, which Civil War veterans are appropriate to put up or maintain memorials to, how to name our ships and military bases, etc.
But no matter what one believes as a matter of opinion, the fact is that Union still won the war and the Confederacy lost the war, and that outcome has affected everyone in the nation ever since. Yet the debate still goes on..
Duane ==> Hmmmm….the causes of the Civil War are not subject to scientific consideration.
And science is not done by debate, though results can be debated to some advantage for the overall endeavor.
But you see, a debate is about convincing the audience “who” is right. It does not determine which scientific results are correct.
Those who would debate would be better engaged in working cooperatively to determine why their results are different.
Statistical analysis will never provide a definitive answer to who is “right” and who is “wrong”. That is not what statistics do. Basically, think of a bell or normal curve. There is never one correct answer, just a distribution of possible answers.
Science, on the other hand, requires a mathematical description of a physical phenomenon – one that can give one and only one prediction. Even quantum physics can only give probabilities. If climate science were being truthful, the involved scientists would give similar probability answers and declare that they will never KNOW for sure where, what, and when certain things WILL happen. Look at model outputs. Do they look like an electron cloud where you can never be sure where an electron, i.e., temperature, might be? That gives Dr. Frank’s predictions of uncertainty much more traction.
“Statistical analysis will never provide a definitive answer to who is “right” and who is “wrong”.
I guess we’re just stuck with what is “probably” right. But when that probability is so close to unity that the difference can not be calculated on my engineering laptop, it’s time to get off our collective butts.
In other words it isn’t possible to roll your number on a crap table, right? Or pick the right number and color on a roulette wheel, right?
Temperature measurements are *not* close to zero uncertainty. They are independent, random variables which, when combined, have their uncertainties *ADD*.
Multiple measurements of different things are not likely to result in a distribution where all uncertainty cancels. I know that’s hard for you and your CAGW compatriots to believe but it *is* the truth.
Hey everybody … this guy has an “engineering laptop”.
Not only is he smart … he has the tools to show how smart & special he is.
He uses the words ‘unity’ & ‘butts’ in the same sentence.
Don’t let this gem get away.
How are those climate models working out?
We are number 80 now aren’t we?
Settled science!
My oldest child, a scientist, pointed out to me some years ago that a comparison of research on glamorous or sensational subjects shows it often receives the most funding when compared to mundane research but delivers far fewer benefits if any. This is especially noticeable in the EU when governments decide on what to fund and prove they are no good in choosing winners and losers. They would have more success simply flipping a coin.
Michael ==> What to fund, what to research, is a definite and serious form of bias.
A minor geographical quibble “in 5 Nunavut communities (in Alaska).”. Nunavut is in Canada last time I looked.
Kevin ==> Ooooh….Good Catch! Of course it is, the paper is even from some Canadian Government agency…..
Your Apollo 13 example has one distinction: they were Engineers, not Scientists. This is an important distinction because Engineers have to solve real-world problems; Scientists have to publish papers in order to get their next grant.
The problem is not model myopia but modellers’ hubris.
More specifically, the idea that passing peer review makes something correct. It should just mean that the editor thinks it’s worthy of throwing into the discussion with no glaringly obvious faults. It’s not a reason for the media to write “scientists think”.
If your results differ significantly from others’, your conclusion, and not just your discussion, should include the results of any equally valid analysis. No “We found” but “another equally valid approach suggests that science has not helped much in furthering understanding in this case”.
Excellent post.
In an age that asks “by what authority?”, the only authority to be found is experts: scientists! So anyone seeking to leverage authority will leverage experts/scientists to propel the agenda (usually more power and money). Can’t do a pandemic or a global reset without the authority of scientists.
This is what happened to the media’s authority. it got leveraged. People listen because they don’t ever stop talking but who really trusts the media and by what authority do they speak?
All that is left are scientists and with it science and it’s being sucked into the same authority vacuum that the media has been spit out of.
So yeah, if science wants to avoid the path our media has taken, it needs to build a very powerful shield around itself.
Good Luck
Incentives matter. Roman engineers and builders were obliged to sleep under viaducts they’d built while legions marched over their new creations. No doubt sometimes their sleep was uneasy, but it’s likely that they built the best viaducts they possibly could: indeed, some are still standing.
Something similar might encourage researchers to be very careful before publishing predictions.
Re locking them in a room, which is something I generally agree with. In the movie the engineers had a very obvious and focussed objective – save the astronauts. Unfortunately, what with all the grants and careers at stake, getting the scientists to just agree on the objective would be impossible. The chance for agreeing a focussed objective passed 35 years ago. Most climate scientists now are really climate change scientists whose starting point is the assumption that CO2 drives everything.
4 Eyes ==> Yeah, maybe for CliSci but other fields are not so lost.
Kip,
Spot on, well illustrated and timely.
I would like a $ for each article I have written over the last 20 years, to stress the need to properly follow uncertainty and error procedures such as those from the Paris-based International Bureau of Weights and Measures.
The big, but repairable problem in climate studies is that very few authors have adequate understanding of uncertainty. If it was treated properly, I would guess that more than half of past papers would have been rejected before publication. That is one of the prime purposes of uncertainty analysis, but some authors have not got that far in understanding proper science.
Geoff S
Geoff ==> The Many-Analysts Approach is incorrectly thought to be about uncertainty, but it is really about “results depend on the methods, not the data”.
One big problem is that scientists, computer programmers, and statisticians today are taught *nothing* about uncertainty and how to propagate it. It’s been that way since before 2000. Engineers are taught a little bit but typically have no dedicated instruction on the principles.
My Immunologist PhD son was in college around 2010 taking his undergrad work in microbiology. He was actually told by his advisor to not worry about taking math and statistics classes because he could always find a math major to do the statistics on data he collected. (Luckily he didn’t listen to that bad advice). So what you wind up with in so much of medical science today is the blind leading the blind, scientists who know nothing about statistical analysis and statisticians who know nothing about physical reality – leading to studies with questionable results which are unable to be replicated.
Climate science today is in the same boat. When you hear from CAGW advocates that uncertainty always cancels out if your sample is large enough its coming from scientists that know nothing about statistics (and the real world apparently) and statisticians that know nothing about physical science. The blind leading the blind!
Your son was taught how to properly propagate uncertainty. If I’m wrong, then please link us to the text(s) he was assigned.
He was *NOT* taught how to properly propagate uncertainty. And I don’t have his textbook, it’s probably on his bookshelf. I *do* have my other son’s textbook. “Probability and Statistics For Engineers and Scientists”, 2nd Ed, Anthony Hayter. There is not a single example in the textbook about propagating uncertainty. The closest it comes to is how to add the variance of independent, random variables (much like temperatures) with and without covariance.
But variance is *not* uncertainty although it is a cousin.
Why don’t *YOU* link us to a copy of a standard college textbook that properly handles the propagation of uncertainty!
Bob has an engineer’s laptop that can show probability close to unity. He doesn’t need any stinkn text book.
Then please link us to a textbook that does properly propagate uncertainty. Or could it be that Dr. Frank is so far ahead of every statistician that he alone knows? Or is it that Dr. Evil conspiracy that causes every credible statistician to avoid citing Dr. Frank’s seminal work? Even though the big $ would certainly come from finding what has been missing for centuries of study and rebuilding statistics from the ground up.
“But variance is *not* uncertainty although it is a cousin.“
This is your claim. It is quite convenient, as it requires no ground up derivation like the rest of the dull, boring field. And since it is your claim, then it’s up to you to explain the difference. Hint: “Cousins” don’t get it….
Uncertainty: The Soul of Modeling, Probability & Statistics LINK https://www.google.com/books/edition/Uncertainty/gLmuDAAAQBAJ?hl=en
“Then please link us to a textbook that does properly propagate uncertainty.”
https://www.amazon.com/Introduction-Error-Analysis-Uncertainties-Measurements/dp/093570275X
Taylor’s textbook covers how to propagate uncertainty for situations where you are measuring different things using different devices AND where you are measuring the same thing multiple times using the same device.
This textbook should be a required course for any physical science or engineering major.
here’s another one: https://www.amazon.com/Practical-Physics-4ed-G-Squires/dp/0521779405
And then there is Bevington’s text: https://www.amazon.com/Reduction-Error-Analysis-Physical-Sciences/dp/0072472278
But he states right up front that he only addresses multiple measurements of the same thing, assuming systematic uncertainty is negligible.
Why these textbooks are not used at the college level is beyond me. I suspect it is because it is math grad students and/or professors that teach statistics courses to both science and engineering majors. Mathematicians never become familiar with handling uncertainty and therefore neither do their students. The books above are *not* statistics texts, they are aimed at physical science and engineering.
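For readers who don’t have Taylor’s text to hand, here is a minimal sketch of its standard quadrature rule for independent measured quantities, applied to a made-up density measurement rho = m/V (the numbers are invented for illustration):

```python
# Sketch of the standard propagation rule for independent measured quantities:
# relative uncertainties combine in quadrature through products and quotients,
# e.g. density rho = m / V from a measured mass and volume.

def propagate_density(m, u_m, V, u_V):
    """rho = m/V; relative uncertainties add in quadrature for independent inputs."""
    rho = m / V
    u_rho = rho * ((u_m / m) ** 2 + (u_V / V) ** 2) ** 0.5
    return rho, u_rho

# Invented example measurements: mass in grams, volume in cm^3.
rho, u_rho = propagate_density(m=12.4, u_m=0.1, V=4.0, u_V=0.05)
print(f"rho = {rho:.2f} +/- {u_rho:.2f} g/cm^3")
```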
“This is your claim. It is quite convenient, as it requires no ground up derivation like the rest of the dull, boring field. And since it is your claim, then it’s up to you to explain the difference. Hint: “Cousins” don’t get it….”
You would be far better served by remaining silent and not exhibiting your lack of knowledge on this subject rather than confirming your lack of knowledge for everyone to see.
Variance and uncertainty don’t have the same units! Therefore they are *NOT* the same thing although they are indicators of the same thing.
Variance gives you insight into the expectation of the next value: the higher the variance, the more possible values the next value can take on, i.e. uncertainty grows as variance grows. Variance and standard deviation are *NOT* the same thing. Variance describes the whole population, while the standard deviation (at least for a normal distribution) only brackets part of the distribution, typically about 68% of it within one standard deviation of the mean (a quick numerical check of that 68% figure follows below).
Uncertainty intervals are an educated judgement on the accuracy of a value. The wider the uncertainty interval the more uncertain the stated value is. The actual “true value” should be somewhere in the uncertainty interval, it is just unknown as to where!
So both are indicators of uncertainty, they *are* cousins.
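Here is that quick numerical check of the 68% figure, using synthetic, normally distributed data (the mean and spread are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=20.0, scale=0.5, size=1_000_000)   # synthetic normal data

mean = sample.mean()
sd = sample.std(ddof=1)

within_1sd = np.mean(np.abs(sample - mean) <= sd)
print(f"Fraction within +/- 1 standard deviation: {within_1sd:.3f}")   # ~0.683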
“Variance and uncertainty don’t have the same units!“
I agree. But just for fun, what do you think are the units of uncertainty? I suspect that they are the same as those for standard deviation/error.
“Uncertainty intervals are an educated judgement on the accuracy of a value.“
Since you're making these “educated judgments”, then you must be making them on how they are distributed, correct?
“Taylor’s textbook covers how to propagate uncertainty for situations where you are measuring different things using different devices AND where you are measuring the same thing multiple times using the same device.”
Here's a conveniently non-paywalled version of this textbook.
https://www.niser.ac.in/sps/sites/default/files/basic_page/John%20R.%20Taylor%20-%20An%20Introduction%20to%20Error%20Analysis_%20The%20Study%20of%20Uncertainties%20in%20Physical%20Measurements-University%20Science%20Books%20(1997).pdf
In fossil fuel evaluations we "measure different things with different devices" all the time, and once the total standard error of each measurement process is estimated, along with any correlations between the different sources, we freely use them together. We don't do anything differently based on the fact that the resulting estimates are often gleaned from over a dozen of these "different devices". So, since I searched the text unsuccessfully for this treatment of "different devices", and carelessly missed it, would you please point out this treatment to me?
Also, please note that the Taylor book always defines “uncertainties” as quantifiably distributed. With the same units as standard deviation/error. I.e., not some ghostly, undefinable, parameter. They therefore are statistically evaluable, together, from any source.
“I agree. But just for fun, what do you think are the units of uncertainty? I suspect that they are the same as those for standard deviation/error.”
I *know* what the units are. I’m surprised you have to ask.
“Since you're making these “educated judgments”, then you must be making them on how they are distributed, correct?”
Nope. Uncertainty doesn’t have a distribution. The “true value” has a probability of 1. All other values in the uncertainty interval have a probability of zero. The issue is that you simply don’t know which value is the true value.
“So, since I searched the text unsuccessfully for this treatment of “different devices”, and carelessly missed it, would you please point out this treatment to me?”
My guess is that you didn’t actually read the text at all!
Check out page 10: “Whenever you can repeat the same measurement several times, the spread of your measured values gives a valuable indication of the uncertainty in your measurements”
“Repeated measurements such as those in (1.3) cannot always be relied on to reveal the uncertainties. First, we must be sure that the quantity measured is really the same quantity each time.”
Page 45: “In Section 2.5, I discussed what happens when two numbers x and y are measured and the results are used to calculate the difference q = x – y. We found that the uncertainty in q is just the sum ẟq ≈ ẟx + ẟy of the uncertainties in x and y.
You should also read Section 3.3 for understanding.
Both Taylor and Bevington explicitly talk about measuring the same thing multiple times and measuring different things. You just have to read for understanding.
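To make the page-45 rule concrete, here is a minimal sketch with invented numbers comparing the provisional direct-sum rule ẟq ≈ ẟx + ẟy for a difference q = x − y with the quadrature form used when the uncertainties are independent:

import math

# Hypothetical single measurements of two different things (values invented for illustration):
x, dx = 24.3, 0.6   # one reading, +/- 0.6
y, dy = 22.1, 0.6   # a different reading from a different instrument, +/- 0.6

q = x - y

dq_direct = dx + dy                      # provisional / worst-case rule: uncertainties add directly
dq_quad = math.sqrt(dx ** 2 + dy ** 2)   # quadrature rule for independent uncertainties

print(f"q = {q:.2f} +/- {dq_direct:.2f}   (direct sum)")
print(f"q = {q:.2f} +/- {dq_quad:.2f}   (quadrature)")
# Note that the uncertainty in the *difference* is larger than either individual uncertainty.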
“Also, please note that the Taylor book always defines “uncertainties” as quantifiably distributed. With the same units as standard deviation/error. I.e., not some ghostly, undefinable, parameter. They therefore are statistically evaluable, together, from any source.”
So what? Do you *really* think you are stating something we all don't know? Just because they all have the same units doesn't mean they are the same thing! And uncertainty is *NOT* error. Error may be a part of the uncertainty interval, but it is *not* the same thing as uncertainty! (Consider measuring-device resolution and its relationship to uncertainty: resolution limits are *not* errors but they certainly contribute to uncertainty!)
“Taylor’s textbook covers how to propagate uncertainty for situations where you are measuring different things using different devices AND where you are measuring the same thing multiple times using the same device.”
STILL waiting on you to point out Taylor’s “different devices” treatment.
“Nope. Uncertainty doesn’t have a distribution.”
Might want to read chapter 4. Taylor goes into detail on how to find the standard deviation of what the chapter heading calls “random uncertainties”. FYI, parameters with standard deviations are, by definition, distributed.
“The “true value” has a probability of 1. All other values in the uncertainty interval have a probability of zero.“
Source? Taylor sure goes to a lot of trouble in chapter 4 to describe just the opposite.
Page 98:
“This standard deviation of the measurements x_1, …, x_n is an estimate of the average uncertainty of the measurements x_1, …, x_n, and is determined as follows….”
Oh, BTW, look at fig 4.2. If you were correct about
“The “true value” has a probability of 1. All other values in the uncertainty interval have a probability of zero.”
then those 4 target scatters would all be represented by a hole in the middle with a ring around it.
What do you think you are asserting here? x_1 … x_n ARE MEASUREMENTS OF THE SAME THING! Exactly what I have been trying to tell you!
Temperature measurements ARE NOT MULTIPLE MEASUREMENTS OF THE SAME THING!
Why is this so hard for CAGW advocates to understand?
And, once again, you are stuck in the same box as bellman where all error is random and cancels! Do you honestly believe that no systematic error exists?
The reason you don’t get one-hole groupings is that you can neither quantify the random errors associated with each shot let alone the systematic errors! That’s why you get a spread.
Each shot is a separate, random, independent event. Each one has a different combination of random and systematic errors, just like temperature measurements from different sites are. And if you can’t quantify either the random or systematic errors then you have to consider them as part of the total uncertainty interval and propagate them as such.
BTW, Figure 1 is the one you want to look at to get the definitions, not Fig 2! You can’t even get this straight!
“BTW, Figure 1 is the one you want to look at to get the definitions, not Fig 2! You can’t even get this straight!”
Fig. 2 is Fig. 1 without the targets. My observation w.r.t. your fact-free
“The “true value” has a probability of 1. All other values in the uncertainty interval have a probability of zero.”
applies equally to both.
I already did. And you didn’t bother to read the quotes I provided apparently.
Go look at Page 45 again, this time for meaning!
And, ONCE AGAIN, you didn’t bother to read for meaning!
You have to be willing to put some effort in if you are ever going to learn. One of my first EE professors “learned” me that truism.
Chapter 4, Pg 93
“As noted before, not all types of experimental uncertainty can be assessed by statistical analysis based on repeated measurements. For this reason, uncertainties are classified into two groups, the random uncertainties, which can be treated statistically, and the systematic uncertainties, which cannot. This distinction is described in Section 4.1. Most of the remainder of this chapter is devoted to random uncertainties. Section 4.2 introduces, without formal justification, two important definitions related to a series of measured values x_1, …, x_n, all of some single quantity x.” (bolding mine, tpg)
Page 94: “Experimental uncertainties that can be revealed by repeating the measurements are called random errors; those that cannot be revealed in this way are called systematic.”
tpg: Since temperature measurements cannot be repeated, their uncertainties fall into the category of systematic, i.e. they are not conducive to statistical analysis. And the temperatures *are* measured by different devices, by definition.
When you have SINGLE measurements of different things where does the distribution of values come from? When you combine Northern Hemisphere temperature measurements with Southern Hemisphere temperature measurements what kind of a distribution do you get? It will be *at least* a bi-modal distribution. What does the standard deviation of a bi-modal distribution tell you?
If you pick up every random board you see in the ditch while you are driving what kind of a distribution will their measurements give you? What will the mean tell you? What will the standard deviation tell you? The statistical descriptive factors are only useful in establishing some kind of expectation as to what the next board you find in the ditch will be! What will that expectation be?
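Here is a minimal sketch of that bi-modal point, using two synthetic "hemisphere" samples (the temperatures and spreads are invented for illustration): the pooled standard deviation mostly reflects the separation between the two means rather than the measurement spread of either sample.

import numpy as np

rng = np.random.default_rng(2)

# Synthetic single readings of *different* things (values invented for illustration):
north = rng.normal(loc=5.0, scale=2.0, size=5_000)    # "NH winter-ish" temperatures
south = rng.normal(loc=25.0, scale=2.0, size=5_000)   # "SH summer-ish" temperatures

combined = np.concatenate([north, south])

print(f"NH mean/sd:       {north.mean():6.2f} / {north.std(ddof=1):.2f}")
print(f"SH mean/sd:       {south.mean():6.2f} / {south.std(ddof=1):.2f}")
print(f"Combined mean/sd: {combined.mean():6.2f} / {combined.std(ddof=1):.2f}")

# Crude text histogram showing the two separate humps (the bi-modal shape):
counts, edges = np.histogram(combined, bins=15)
for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"{lo:6.1f} to {hi:6.1f} | " + "#" * int(c // 100))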
You didn’t read Taylor’s book at all!
Take Section 4.4: The Standard Deviation of the Mean
“If x_1, …, x_n are the results of N measurements of the same quantity x, then, as we have seen, our best estimate for the quantity x is their mean x-bar” (bolding mine, tpg)
Look at Section 4.6, Page 106
“In the past few sections, I have been taking for granted that all systematic errors were reduced to a negligible level before serious measurements began. Here, I take up again the disagreeable possibility of appreciable systematic errors. In the example just discussed, we may have been measuring m with a balance that read consistently high or low, or our timer may have been running consistently fast or slow. Neither of these systematic errors will show up in the comparison of our various answers for the spring constant k. As a result, the standard deviation of the mean σ_k can be regarded as the random component ẟk_ran of the uncertainty ẟk but is certainly not the total uncertainty ẟk. ”
….
“No simple theory tells us what to do about systematic errors.”
….
“Suppose now we have been told that the balance used to measure m and the clock used for T have systematic uncertainties up to 1% and 0.5%, respectively. We can then find the systematic component of ẟk by propagation of errors; the only question is whether to combine the errors in quadrature or directly”
———————————————————
Temperature measurements from independent sites *always* have systematic error. That systematic error is impossible to quantify over time. Since you are only taking ONE reading for each temperature and not repeated measurements the random error portion of the uncertainty is impossible to quantify. Thus the uncertainty interval for each measurement must include an estimate of any random error PLUS an estimate of any systematic error. Your only choice when combining this single measurement with another single measurement is to do what Taylor says, propagate the entire uncertainty interval from each measurement into the total.
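A minimal sketch of the Section 4.6 recipe quoted above, with invented numbers: the random component is estimated from the scatter of repeated readings (the standard deviation of the mean), a separately judged systematic component is assumed, and the two are combined either in quadrature or directly.

import math
import statistics

# Hypothetical repeated measurements of the *same* quantity (values invented for illustration):
readings = [10.12, 10.15, 10.09, 10.14, 10.11, 10.13, 10.10, 10.16]

n = len(readings)
mean = statistics.mean(readings)
sd = statistics.stdev(readings)          # sample standard deviation of the readings
sdom = sd / math.sqrt(n)                 # standard deviation of the mean = random component

sys_unc = 0.05                           # separately judged systematic component (assumed)

total_quad = math.sqrt(sdom ** 2 + sys_unc ** 2)   # combined in quadrature
total_direct = sdom + sys_unc                      # or combined directly (more conservative)

print(f"mean                    = {mean:.3f}")
print(f"random component (sdom) = {sdom:.4f}")
print(f"systematic component    = {sys_unc:.4f}")
print(f"total (quadrature)      = {total_quad:.4f}")
print(f"total (direct sum)      = {total_direct:.4f}")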
You are just like so many other CAGW advocates on here – bellman, bdgwx, etc. You want to assume that all error cancels. It just makes things so much easier to handle, right? Plus you don’t have to worry about the final uncertainty interval being wider than the temperature differences you are trying to identify!
STOP TRYING TO FOOL US INTO THINKING YOU HAVE READ TAYLOR! You haven’t. My guess is that you never will. At least bellman actually provides cherry-picked quotes and equations from Taylor, he just doesn’t understand what he is referencing. You haven’t even cherry-picked *anything*!
You didn’t refute a thing I wrote. Not one.
Additionally, you shamelessly bring in and conflate random and systematic uncertainty. No one doubts that systematic uncertainty exists. The name of the game is to identify it and correct for it. Your conundrum is that in order to charge that data which is inconvenient for you has (unspecified) systematic uncertainty, you must have some idea of what it is. If you know that, you can do your best to correct for it. It's even more bogus to assume that the data for which you make the fact-free charge of "systematic uncertainty" has that uncertainty changing in a way that affects trends. FYI, neither global temperature nor sea level data have trends that are significantly changed by even the wildest posits of changing systematic uncertainty.
“Temperature measurements from independent sites *always* have systematic error.”
And? Each such error is, by definition, biased one way, which doesn't change either its trends or their standard errors. I.e., they don't "add up". And in the case of temperature (and sea level) measurements, even if these "systematic errors" came and went over time, the magnitude of the measured changes dwarfs them, resulting in no significant change in the true trends.
Your flawed ideas about error propagation in trends doom you to Dr. Frankian-level scorn, sighs, and STH derision from those who learned statistics from the ground up, i.e., most of the above ground tech community.
I refuted it all. You just have to learn how to read!
Needlessly? As I noted, there isn't a measurement device today that is 100% accurate, especially temperature measuring devices in the field. Random AND systematic uncertainty applies to each and every temperature measurement made by these devices. Your only rebuttal seems to be a form of denial!
How do you identify the random and systematic uncertainty in a temperature measuring device at Forbes Field, Topeka, KS? Does someone haul out a calibration lab to the site every morning to make sure it is in calibration? HOW DO YOU CORRECT FOR EITHER?
You *still* haven’t read Taylor! You must have some idea of the INTERVAL within which the true value will lie. You do *NOT* need to have any idea of what the systematic uncertainty actually is. When you are making one measurement of one thing that also applies to random error since you won’t have a distribution that allows you to more accurately assess where the true value actually lies.
The Federal Meteorological Handbook No. 1 specifies the allowable uncertainty interval for field temperature measuring devices as +/- 0.6C. They do *NOT* specify what portion of that interval is random and which portion is systematic. It is *still* an uncertainty interval!
Again, it is obvious you haven’t actually read Taylor. You haven’t even read the quotes from his book that I provided for you!
Taylor: “No simple theory tells us what to do about systematic errors.”
And that systematic uncertainty *can* affect trends. What do you think UHI is? And it *certainly* affects temperature trends!
Those trends are based solely on the trend line through the stated values and include no propagation of error at all. Even the Argo floats used to measure sea temperatures have an uncertainty interval of +/- 0.5C. That interval completely masks the temperature differences being sought. If the uncertainty intervals were actually shown, they would dwarf the trend lines established by the stated values. The trend line could actually be up, down, or sideways; there really isn't any way to tell once the uncertainty is considered!
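To illustrate the comparison being argued here, a minimal sketch with a synthetic yearly series (not real Argo or station data): fit an ordinary least-squares trend to the stated values and set the total change over the record against an assumed +/- 0.5 per-reading uncertainty. The slope, noise level, and uncertainty are all assumptions for illustration.

import numpy as np

rng = np.random.default_rng(3)

# Synthetic yearly "anomaly" series (invented, not real data): small trend plus noise.
years = np.arange(2000, 2021)
true_slope = 0.02                                   # 0.02 per year, chosen for illustration
values = true_slope * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

per_reading_uncertainty = 0.5                       # assumed +/- interval on each single reading

# Ordinary least-squares trend through the stated values:
slope, intercept = np.polyfit(years, values, 1)
total_change = slope * (years[-1] - years[0])

print(f"fitted trend:            {slope:+.4f} per year")
print(f"change over the record:  {total_change:+.3f}")
print(f"per-reading uncertainty: +/- {per_reading_uncertainty}")
# In this synthetic example the fitted change over two decades (~0.4) is comparable to
# the +/- 0.5 interval attached to any single reading.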
“And? Each such error is, by definition, biased one way, which doesn't change either its trends or their standard errors.”
ROFL! Uncertainty is specified as +/-! That means the systematic bias can be in EITHER DIRECTION! There is no "definition" that states uncertainty is always in one direction. If you think there is, then provide a link or quote from an established source that says so!
Take a liquid-in-glass thermometer. The uncertainty in the reading can be different based on whether the temperature is going up or is going down. Plus in one direction and minus in the other direction.
You can’t even get this simple fact correct!
I provided the quote from Taylor that refutes this assertion. Apparently, as usual, you didn’t actually bother to read for meaning, either my post or Taylor’s statements.
Again: “We can then find the systematic component of ẟk by propagation of errors; the only question is whether to combine the errors in quadrature or directly.”
If you would actually read Taylor you would find that the uncertainties ADD, either in quadrature or directly.
You are still lost in the weeds concerning multiple measurements of the same thing and multiple measurements of different things.
Another false assertion! If the uncertainty interval is +/- 0.6C for FMH No. 1 measurement stations and we are trying to identify a 1.6C difference over 80 years, that is an annual difference of 0.02C per year, which is dwarfed by the uncertainty of the temperature measurements.
Your entire post is based on untrue assertions and the argumentative fallacy of Argument by Dismissal.
Let’s see:
Those who have learned statistics from the ground up have no grounding in uncertainty. No traditional college-level statistics textbook that I can find covers uncertainty in any way, shape, or form. And you have *never* referenced one that actually does.
Thus the dependence of those “experts” on saying that all measurement uncertainty is either random and cancels or is irrelevant.
“above ground tech community” I have found that most technicians have a far better grounding in uncertainty than most theoretical scientists, mathematicians, and computer programmers. They actually work with measurement devices that have uncertainty, both random and systematic, that must be accounted for when testing real world equipment. You can’t even get this assertion correct!
Read this document from NIST (nist.gov) and try to understand why traceable, calibrated measurements are required in science.
This document alone is 60 pages. The associated references number to the hundreds of pages. There are many, many people and businesses that live and die by the need for assessing the resolution, precision, accuracy, and uncertainty in these measurements.
This entire way way way WAY overly long, overly verbose string of “skeptical” handwaving can be succinctly summarized as follows:
“Skeptics” : There’s too much uncertainty for climate models to make any reliable predictions.
Realists: The climate models are, of course, not perfect. Yet the model means have made quite reliable predictions for several decades now. That track record demonstrates that your claims are not correct.
“Skeptics” : Bu bu bu bu bu …. you haven’t read Taylor’s textbook!
OMG LOL. What a joke!
Yeah, what a joke — on you!
Really, J Gorman? Yet another tiredly lame, ignorant parroting of this ridiculously cherry picked region that is only 5% of the troposphere?
That’s right: this graph you referenced is totally cherry picked – it represents the one little teeny tiny 5% region of the troposphere that happens to show the largest negative divergence between observations and model projections.
And you laughably fell for this ridiculous misrepresentation (why am I not the least bit surprised) hook line and sinker.
You also blindly ignore the fact that there are other regions of the globe that have exhibited the exact opposite divergence: much more observed warming than was projected.
Such tragically ignorant foolishness. A truly abominable disgrace.
And so very typical of so-called “skeptics”.
Typical response from you. Disparage with no proof whatsoever.
You should be embarrassed that you can never supply references or support for any of your assertions. The comments you make about this simply can’t be taken as proof of anything without some displayed evidence.
Only simpletons make unsupported comments, keep up the poor work!
re: “Only simpletons make unsupported comments, keep up the poor work”
Talk about “the pot calling the kettle black”. The only “poor work” here, Gorman, is your own “no-questions-asked”, blindly ignorant acceptance of disingenuous propaganda. Only “simpletons” are fooled into falling for that kind of stuff.
See my reply to your other identical copy and paste “objection” in these comments. It spells out in detail the supporting information, exactly as was stated.
Sorry that you remain so tragically unable to handle reality.
Surely trying to predict future atmospheric temperatures is a fools errand.
Why? Because the atmosphere is a very small part of a very big system which includes the oceans.
Consider El Niño temperature peaks which are very obvious in the lower temperature trace of all the data sets.
These temperature spikes actually represent loss of ocean heat from the El Niño hotspots to space via the atmosphere. The atmosphere is transiently gaining a lot of heat but the total SYSTEM is losing heat and thus COOLING.
To understand a system you must look at the whole SYSTEM
Jim Steele is all over this.
Consider Copernicus. He was able to develop his theory of a heliocentric system because he was able to recognise that the Earth-centric science of the time was inadequate in describing observations.
All the experts of the time were proven wrong because they all had the wrong starting point and did not comprehend the extent of the system they were trying to describe.
They didn’t know it, but they weren’t looking from “far enough away”
Copernicus recognised the "system" he was trying to describe and, in doing so, revolutionised the science (even though it took 100 years for science and religion to catch up!).
Is current "climate science" any different?
As described above, increased atmospheric warming can actually mean a decrease in the total energy of the system (i.e. cooling), as heat energy leaves the system to space via the atmosphere.
Extending that thought process, it means that the ups and downs of the temperature trace are probably functions of ever-changing oceanic surface temperatures, and that above-"average" atmospheric temperatures actually represent a cooling of the "system".
Also, the El Niño peaks clearly seen in the graphs, because they are local rather than global effects, probably mean that attempts to describe world temperature in such terms are, well, practically useless.
The issue is that if the oceans are fully included in the true system, then, because their thermal mass is about 1000 times that of the atmosphere, they reduce any atmosphere-centric effects to the minuscule (think CO2).
It all depends where you look at it from and who is doing the looking
Perhaps we need a great new “RESET”
just thinking
PS: the same issue applies to any complex problem, such as Covid management, energy supply, etc.
Lindsay ==> I thought the same thing: https://judithcurry.com/2016/02/02/discussion-can-we-hit-the-restart-button/
Thanks Kip, always difficult to change mindsets. (Copernicus hypothesis took 100 years)
Perhaps there is an opportunity here to push a very simple thesis based on the unarguable El Niño data by posing a very simple question:
If the El Niño hot spot is losing heat to space corresponding to the rise in average atmospheric world temperature, is the enthalpy of the total system increasing, decreasing, or static?
(or something like that.)
We just need one IPCC climate scientist with big enough gonads to do what all scientists are supposed to do and question dogma.
It is inevitable that scientific truth will out. Just when??
Lindsay ==> Yes, even the IPCC knows that “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
One of the best critiques of scientific papers I have read. Thank you.
I thoroughly approve of the locked room approach which might make quite a few chancers think that another career choice altogether would be much more appropriate for them and far less dangerous to others.
UK-Weather Lass ==> Thank you….and for those who aren’t in the UK or don’t watch British TV:
chancer: a scheming opportunist
Quantitative models are an alternative for science; they are not science. Questionnaires are not even that. So tell policy makers the truth: we do not know the answer.
Alexander ==> It is very hard even for normal everyday humans, you and your neighbors and me, to say “I don’t know”.
One of the amusing anecdotes of my many travels is telling how men in the Dominican Republic will *always* gladly give directions to “the post office” or whatever when asked, very friendly, but do so even when they have absolutely no idea where it is…..
Kip- Great essay!
Tom.1 ==> Thank you, sir.
Well done, but one consideration remains, that being agenda. I believe Eisenhower warned of the agenda effect when political entities control the focus of research. Just how shall we keep the politicians and social engineers out of the room?
Mark ==> Don’t know . . . the Locked Room should contain serious scientists who are concerned that their respective answers to the same question from the same data differ so much.
Very nice paper. It certainly addresses the issue with trying to make predictions of physical phenomena by using statistics. The many-analyst problem is nothing more than another statistical distribution within which there is no certain answer to a question, only probabilities.
How well can you predict what will happen tomorrow? Next year? In effect, our future can be considered a roll of the dice. We can influence which numbers appear on the faces and the number of faces, but we can't influence the roll itself.
Where we make a mistake is to think we can average the rolls and form a reliable prediction. The average of a pair of dice is 7. But that doesn't mean you will roll a 7.
This is the mistake climate models make. They put forward a model mean as a prediction, while ignoring that the cumulative result of rolling a pair of dice actually follows a random walk.
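A minimal simulation of that dice point: the expected value of a pair of dice is 7, but individual rolls scatter widely, and the running sum of deviations from 7 wanders like a random walk rather than sitting at zero.

import numpy as np

rng = np.random.default_rng(4)

rolls = rng.integers(1, 7, size=(10_000, 2)).sum(axis=1)   # 10,000 rolls of a pair of dice

print(f"mean of all rolls: {rolls.mean():.3f}   (the expectation is 7)")
print(f"fraction of rolls that were exactly 7: {np.mean(rolls == 7):.3f}")   # ~1/6, not 1

# Running sum of deviations from the expected value: a random walk, not a flat line.
walk = np.cumsum(rolls - 7)
print(f"final position of the walk after 10,000 rolls: {walk[-1]}")
print(f"largest excursion from zero along the way:     {np.abs(walk).max()}")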
Ridiculous. ferdberple ignores the fact that the models have quite accurately projected the future for several decades now. It is not merely a roll of dice random walk as he falsely pretends.
“accurately projected”? You have an unprecedented definition of accurately projected!
Here’s J Gorman once again so tragically demonstrating how laughably easy it is to fool folks (like Gorman himself) who won’t look carefully into the details of the data.
The graph that Gorman posts represents a mere 5% of the troposphere. It is a ridiculously cherry picked graph, the one little teeny tiny region of the troposphere that happens to exhibit the largest negative divergence between observations and model projections.
There are other areas of the globe that have exhibited the exact opposite divergence, where actual warming was significantly larger than models had estimated.
But so-called “skeptics” have never, ever mentioned those areas, now, have they? Oh heavens no! To actually acknowledge such facts would rip the fairy tale “skeptical” world of anti-science delusions to shreds!
And we can’t have THAT now, can we?
Typical response from you. Disparage with no proof whatsoever.
You should be embarrassed that you can never supply references or support for any of your assertions. The comments you make about this simply can’t be taken as proof of anything without some displayed evidence.
Only simpletons make unsupported comments, keep up the poor work!
It is again so tragic to see that you never bothered to look into the details of that laughably ridiculous cherry pick, J Gorman. So typical of so-called climate “skeptics”, just blindly swallowing whatever pseudo-scientific propaganda you’re fed, no questions asked.
Talk about “poor work”.
As usual, others have to do the real research work for you. Here’s a link to Christy’s actual publication:
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018EA000401
The focus of the analysis was the 200- to 300-hPa layer of the troposphere between 20N and 20S latitudes.
The spherical section 20N to 20S latitudes covers 34% of the earth’s surface.
The 200- to 300-hPa layer of the troposphere comprises, at most, only about 15% of the entire tropospheric volume in these latitudes.
So overall, the region of analysis for Christy’s analysis was 15% of 34%, or yes, around just 5% of the entire three dimensional global tropospheric volume. A small cherry picked portion.
Since the Global Average Temperature involves even less of the total atmosphere, doesn’t even measure actual soil temperature, and covers just a small percentage of the oceans, just how accurate is *it*?
*YOU* are making assumptions about the GAT based on an even smaller sample. You are hoist on your own petard!
Yet another comically ridiculous handwaving Gormanian excuse.
The issue here is not “accuracy”. It is how representative of global changes is any particular measurement sample.
A teeny little high altitude region of the troposphere, in just the tropics, nowhere else, that was disingenuously cherry picked because it just happens to be the one region that displays the widest negative divergence between model projections and observations is so obviously not representative of global changes.
It is also totally non-representative of how well climate models have projected global changes. It’s a grossly disingenuous misrepresentation.
But so-called “skeptics” ridiculously try to pretend otherwise.
And then they still wonder why they are not taken seriously.
“You are hoist on your own petard”
“The issue here is not “accuracy”. It is how representative of global changes is any particular measurement sample.”
If the sample is not accurate then it *can’t* be representative of ANYTHING!
“A teeny little high altitude region of the troposphere, in just the tropics, nowhere else,”
A teeny little sample of ocean temperatures, not everywhere, just a few places, extrapolated to be a global ocean average temperature.
A teeny little sample of land temps in Africa, not all over Africa but just a few places extrapolated to be an average temperature for Africa!
Once again, you are hoist on your own petard!
“It is also totally non-representative of how well climate models have projected global changes. It’s a grossly disingenuous misrepresentation.”
Climate models *do* project global changes. However, projecting a linear regression line is the WORST way to forecast what is going to happen in the future! It’s why the models turn into linear equations of the form y = mx+b after just a few years. Pick your value of “m” and just run with it, no need to worry about actual physics of the atmosphere. No need to worry about cyclical processes and their impact on the globe. No need to worry about uncertainties in your projected values.
Once again, you are hoist on your own petard!
re: “A teeny little sample of ocean temperatures, not everywhere, just a few places”
So tragically and laughably wrong. Not “just a few places” but, via Argo floats, thousands upon thousands of places worldwide.
re: “A teeny little sample of land temps in Africa, not all over Africa but just a few places extrapolated to be an average temperature for Africa!”
And the mindless Gormanian ignorance sadly continues. Not just “a few places”. Hundreds and hundreds of locations on that continent.
re: “projecting a linear regression line blah blah blah blah blah … “
This is not how climate models make projections. Yet another tragic example of Gormanian ignorance.
Once again, practically everything that Gorman trots out is just so ridiculously wrong.
And yet he still wonders why his anti-science, anti-reality babbling is not taken seriously.
What a joke.
There are only about 3000 Argo floats to cover the entire earth. If they were evenly spread throughout the oceans, that would be roughly one float per 120,000 square kilometres (about 361 million sq km of ocean divided by 3000 floats). And you think that gives an accurate picture of the global condition of the oceans? Especially when they are *NOT* evenly spread?
Really? Beginning in 1900 there were hundreds and hundreds of temperature measuring locations in Africa? Say there are 500 measuring stations today of the quality of the USCRN network and they are spread all over the continent. That would be one station every 60,000 sqkm. That’s a square about 250 km on a side!
If the temperature profile at each station approximates a sine wave, then for two separated stations you get t1 = sin(x) and t2 = sin(x + φ). The correlation between the two is cos(φ). And φ is a function of its own: φ = f(distance, elevation, humidity, pressure, wind, surface below the measuring device, etc.). Considering distance alone, at 80 km the correlation factor is already less than 0.8. All the rest drive the correlation even lower. So using a square 250 km on a side to estimate the temperature everywhere in that 60,000 sq km is almost impossible.
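A quick numerical check of the cos(φ) claim above: over whole periods, the correlation coefficient between sin(x) and sin(x + φ) comes out as cos(φ). The sampling grid and phase angles are arbitrary choices for illustration.

import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)   # one full period, evenly sampled

for phi_deg in (0, 30, 60, 90, 120):
    phi = np.radians(phi_deg)
    t1 = np.sin(x)
    t2 = np.sin(x + phi)
    r = np.corrcoef(t1, t2)[0, 1]                            # sample correlation coefficient
    print(f"phi = {phi_deg:3d} deg   corr = {r:+.4f}   cos(phi) = {np.cos(phi):+.4f}")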
It simply doesn’t matter what the math is inside the black box of a model. What matters is the output. And the output from almost all of them is a linear equation of the form y = mx+b. Just put that label on the black box and you won’t be far off. See the attached graph of CMIP6. Tell me those graphs are not very close to linear equations!
I am surprised you can even see your feet based on the amount of smoke you are blowing!
Here we go with the same tired old pseudo-science “objections” again. Even thousands and thousands of measurement points all over the earth “aren’t enough” for disingenuous climate crybaby crackpots like Gorman.
But what is really so tragically ironic here is to see these “skeptics” like Gorman on one hand whining about global measurement coverage, yet at the same time blindly trotting out a study like Christy’s, which is an order of magnitude less globally representative, with no questions asked whatever, and pretending (lying) that it “is” representative.
So obviously biased and two-faced. Not a shred of integrity.
You, like bigoilbob, have absolutely *NO* understanding of uncertainty, accuracy, and precision.
All that thousands of measurement points provide is precision in calculating a mean. They do *NOT* provide any inherent accuracy, because of the uncertainties associated with each individual measurement. Those individual measurements of different things do *NOT* produce an uncertainty distribution that cancels out. The uncertainty adds with each individual measurement included in the data set. Sooner or later the accumulated uncertainty overwhelms any differences you are trying to identify.
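A minimal sketch of the precision-versus-accuracy distinction being argued here, with synthetic readings: averaging many values shrinks the standard error of the mean (precision), but a bias shared by all the readings is untouched by the averaging (accuracy). The bias and scatter values below are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(5)

true_value = 20.0
bias = 0.3                     # assumed systematic offset shared by every reading
random_sd = 0.5                # assumed random scatter of an individual reading

for n in (10, 100, 10_000, 1_000_000):
    readings = true_value + bias + rng.normal(0.0, random_sd, n)
    mean = readings.mean()
    sem = readings.std(ddof=1) / np.sqrt(n)    # standard error of the mean shrinks as 1/sqrt(N)
    print(f"N = {n:>9,}  mean = {mean:.4f}  sem = {sem:.5f}  error vs true value = {mean - true_value:+.4f}")

# As N grows the standard error of the mean collapses toward zero,
# but the mean still sits about 0.3 away from the true value.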
re: “The uncertainty adds with each individual measurement included in the data set. Sooner or later the accumulated uncertainty overwhelms any differences you are trying to identify.”
And here we are again with these same tired old laughably ludicrous pseudo-scientific falsehoods.
If this were really true, then the “accumulated uncertainty” should lead to much more drastic changes from one time period to the next than what we actually see.
The fact that the same measurements repeated a month later do not vary tremendously from the prior month completely falsifies your “accumulated uncertainty overwhelms any differences” handwaving nonsense.
Differences in the exact same measurements repeated a month later are generally quite small. “Accumulated uncertainty” does not overwhelm differences.
You simply haven’t the vaguest clue what you are talking about. None. But what else is new.
Let’s scale it back to ten a day ok? This is getting tiresome.
Haven’t yet hit “ten” myself today, but whatever you say, Charles. You da boss!
And let’s not be misled into forgetting about all the disingenuous two-faced “skeptical” pretending (lying) about that Christy sample that is an order of magnitude less globally representative than the global temperature dataset we’re talking about here.
Gorman is just trying to distract attention away from those ridiculous misrepresentations with his “accumulated uncertainty” nonsense.
re: “Tell me those graphs are not very close to linear equations”
Unbelievably ridiculous.
Just because a graph looks like it is somewhat close to linear does not mean that it was generated by simple linear projection. DUH.
The utterly pathetic Gormanian grasping at the flimsiest of laughably flimsy straws sadly continues. Such an abominable disgrace.
It simply doesn’t matter what the black box is doing inside. What matters is what the output is. And the output is a linear projection. You can deny that all you want but it’s right out there for all to see.
You speak about other people being deniers – heal thyself physician!
re: “It simply doesn’t matter what the black box is doing inside. What matters is what the output is. And the output is a linear projection.”
Pure, unadulterated, laughably false Gormanian jibber jabber. As usual.
The output is not just a linear projection. It looks linear for a little while, but eventually changes slope as the climate adjusts to new forcing factor levels and depending upon which emissions scenario is being modeled.
And this latest vituperative vomitus is especially ridiculous given the fact that, in order to try to “support” your “simple linear projection” lie, you also included in your opening salvo these ludicrous climate model lies: “no need to worry about actual physics of the atmosphere. No need to worry about cyclical processes”.
It’s genuinely comical watching you Gormans so tragically face-plant, time after time after time after time.
Funny how even well known meteorologists have the same opinion using the same data.
https://twitter.com/BigJoeBastardi/status/1540945743925710849?t=C4qUv07bXbmEB8cOr-wAhA&s=19
Yeah, Joe Bastardi is “well known”. Well known as a climate charlatan.
In that tweet, Joe speaks of “tactics of distortion and deception”; yet in referencing that disingenuous little 5% cherry pick, that is exactly what he is guilty of doing himself.
And you fell for the propaganda hook, line, and sinker yet again.
Ad hominem without any proof! Nobody trusts anything you say without evidence. You’re down the rabbit hole, i.e., where the sun don’t shine.
J Gorman once again so sadly demonstrates such woeful ignorance.
Joe B’s disingenuous parroting of Christy’s shameful misrepresentation, and his accusing others of “distortion & deception” when he is actually doing that himself, is itself the (more than obvious) evidence that he is, yes, just another climate charlatan.
Evidence that Gorman is, once again, too blind to see. So disgraceful.
How many “researchers” could even “explain the math” behind their analysis?