Guest Essay by Kip Hansen — 18 October 2022
Every time someone in our community, the science skeptic or Realists® community, speaks out about uncertainty and how it affects peer-reviewed scientific results, they are immediately accused of being Science Deniers or of trying to undermine the entire field of Science.
I have written again and again here about how the majority of studies in climate science vastly underestimate the uncertainty of their results. Let me state this as clearly as possible: Any finding that does not honestly include a frank discussion of the uncertainties involved in the study, beginning with the uncertainties of the raw data and continuing through the uncertainties added by each step of data processing, is not worth the digital ink used to publish it.
A new major multiple-research-group study, accepted and forthcoming in the Proceedings of the National Academy of Sciences, is set to shake up the research world. This paper, for once, is not written by John P.A. Ioannidis, of “Why Most Published Research Findings Are False” fame.
The paper is: “Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Idiosyncratic Uncertainty”. [ or as .pdf here ].
This is good science. This is how science should be done. And this is how science should be published.
First, who wrote this paper?
Nate Breznau et many many al. Breznau is at the University of Bremen. The author list runs to 165 co-authors from 94 different academic institutions. The significance of this is that the paper is not the work of a single person or a single disgruntled research group.
What did they do?
The research question is this: “Will different researchers converge on similar findings when analyzing the same data?”
They did this:
“Seventy-three independent research teams used identical cross-country survey data to test an established social science hypothesis: that more immigration will reduce public support for government provision of social policies.”
What did they find?
“Instead of convergence, teams’ numerical results varied greatly, ranging from large negative to large positive effects of immigration on public support.”
Another way to look at this is to look at the actual numerical results produced by the various groups, asking the same question, using identical data:

The discussion section starts with the following:
“Discussion: Results from our controlled research design in a large-scale crowdsourced research effort involving 73 teams demonstrate that analyzing the same hypothesis with the same data can lead to substantial differences in statistical estimates and substantive conclusions. In fact, no two teams arrived at the same set of numerical results or took the same major decisions during data analysis.”
Want to know more?
If you really want to know why researchers who are asking the same question using the same data arrive at wildly different, and conflicting, answers, you will have to read the paper.
How does this relate to The Many-Analysts Approach?
Last June, I wrote about an approach to scientific questions named The Many-Analysts Approach.
The Many-Analysts Approach was touted as:
“We argue that the current mode of scientific publication — which settles for a single analysis — entrenches ‘model myopia’, a limited consideration of statistical assumptions. That leads to overconfidence and poor predictions. …. To gauge the robustness of their conclusions, researchers should subject the data to multiple analyses; ideally, these would be carried out by one or more independent teams.“
This new paper, being discussed today, has this to say:
“Even highly skilled scientists motivated to come to accurate results varied tremendously in what they found when provided with the same data and hypothesis to test. The standard presentation and consumption of scientific results did not disclose the totality of research decisions in the research process. Our conclusion is that we have tapped into a hidden universe of idiosyncratic researcher variability.”
And that means, for you and me, that neither the many-analysts approach nor the many-analysis-teams approach will solve the Real World™ problem presented by the inherent uncertainties of the modern scientific research process – “many-analysts/teams” will use slightly differing approaches, different statistical techniques and slightly different versions of the available data. The teams make hundreds of tiny assumptions, mostly regarding each as “best practices”. And because of these tiny differences, each team arrives at a perfectly defensible result, sure to pass peer-review, yet the teams arrive at different, even conflicting, answers to the same question asked of the same data.
This is the exact problem we see in CliSci every day. We see this problem in Covid stats, nutritional science, epidemiology of all types and many other fields. This is a separate problem from the differing biases affecting politically- and ideologically-sensitive subjects, the pressures in academia to find results in line with current consensuses in one’s field and the creeping disease of pal-review.
In Climate Science, we see the misguided belief that more processing – averaging, anomalies, kriging, smoothing, etc. – reduces uncertainty. The opposite is true: more processing increases uncertainties. Climate science does not even acknowledge the simplest type of uncertainty – original measurement uncertainty – but rather wishes it away.
Another approach sure to be suggested is that the results of the divergent findings should now be subjected to averaging or finding the mean — a sort of consensus — of the multitude of findings. The image of results shows this approach as the circle with 57.7% of the weighted distribution. This idea is no more valid than the averaging of chaotic model results as is done in Climate Science — in other words, worthless.
Pielke Jr. suggests in a recent presentation and follow-up Q&A with the National Association of Scholars that getting the best real experts together in a room and hashing these controversies out is probably the best approach. Pielke Jr. is an acknowledged fan of the approach used by the IPCC – but only so long as their findings are untouched by politicians. Despite that, I tend to agree that getting the best and most honest (no-dog-in-this-fight) scientists in a field, along with specialists in statistics and the evaluation of programmatic mathematics, all in one virtual room with orders to review and hash out the biggest differences in findings might produce improved results.
Don’t Ask Me
I am not an active researcher. I don’t have an off-the-cuff solution to the “Three C’s” — the fact that the world is 1) Complicated, 2) Complex, and 3) Chaotic. Those three add to one another to create the uncertainty that is native to every problem. This new study adds another layer – the uncertainty caused by the multitude of tiny decisions made by researchers when analyzing a research question.
It appears that the hope that the many-analysts/many-analysis-teams approaches would help resolve some of the tricky scientific questions of the day has been dashed. It also appears that when research teams that claim to be independent arrive at answers that agree a little too closely, we ought to be suspicious, not reassured.
# # # # #
Author’s Comment:
If you are interested in why scientists don’t agree, even on simple questions, then you absolutely must read this paper, right now. Pre-print .pdf is here.
If it doesn’t change your understanding of the difficulties of doing good honest science, you probably need a brain transplant. …. Or at least a new advanced critical thinking skills course.
As always, don’t take my word for any of this. Read the paper, and maybe go back and read my earlier piece on Many Analysts.
Good science isn’t easy. And as we ask harder and harder questions, it is not going to get any easier.
The easiest thing in the world is to make up new hypotheses that seem reasonable or to make pie-in-the-sky predictions for futures far beyond our own lifetimes. Popular Science magazine made a business-plan of that sort of thing. Today’s “theoretical physics” seems to make a game of it – who can come up with the craziest-yet-believable idea about “how things really are”.
Thanks for reading.
# # # # #
Interesting article, Kip. Thank you. Much to think about, concerning the central finding.
But wait.
From the Breznau, et al paper, in Implications, p.15 of the pdf:
“Third, countering a defeatist view of the scientific enterprise, this study helps us appreciate the knowledge accumulated in areas where scientists do converge on expert consensus – such as human impact on the global climate or a notable increase in political polarization in the United States over the past decades.”
LOL.
David ==> Everyone has to throw the dog a bone…..and even the most skeptical climate skeptics, such as myself and Anthony Watts, acknowledge that humans and human civilization have impacted, and will continue to impact, the climate.
Congratulations on actually reading the paper! Well done…so very few do.
Yeah, Kip, it’s amazing the number of commenters that continue to mischaracterize the paper. It looks like that comes from various pet peeves.
Kip,
Maybe it is better to attribute that line to simple ignorance of the harm from throw-away lines. Few intelligent scientists would agree that there is convergence of expert consensus on climate, and few hard scientists would even regard that as a matter worth the worry. What matters is the result of the research, with its uncertainty. Nothing else, like consensus, matters much. Many oft-mentioned advances in science are demolitions of the prevailing consensus.
I read the paper, but I do not expect congrats. Any here who have commented without reading the paper are part of the ignorance problem. They deserve to be chastised. Geoff S
Geoff ==> You are right, of course. But people are people and we are not all the same…my effort is to educate.
Kip,
Mine too. Sometimes that involves writing that has potential to upset. Realism trumps emotion in hard science. Cheers. Geoff S
Wasn’t there a reference or two in the Climategate emails about doing things for “The Cause”?
Doesn’t sound like CliSci is unbiased research or even a hard science.
William Briggs has a good write-up about these findings:
All Those Warnings About Models Are True: Researchers Given Same Data Come To Huge Number Of Conflicting Findings
Paul ==> Briggs is, by nature and training, a statistician and gives his statistician’s take on the study. He is right about models.
However, the import and significance of this study for the broader field of scientific research go far beyond the impotence of models.
kip
“Every time someone in our community, the science skeptic or Realists® community, speaks out about uncertainty and how it affects peer-reviewed scientific results, they are immediately accused to being Science Deniers or of trying to undermine the entire field of Science.
nope nope nope
Every time someone in our community, the science skeptic or irRealists® community, claims with certainty that…
That is when you DENY known science with CERTAINTY. We sometimes, not every time, call you deniers.
You could say….
But you don’t. You are certain.
Certain you are right.
Mosher ==> You are talking to the wrong guy. Your tar brush is far too wide and misapplied here today. In other words, you are doing what you are complaining about. You take offense at an accusation that does not apply to you personally — though it is true in a broader sense, and not just in CliSci.
In my own case, you either didn’t read (or you have forgotten):
https://wattsupwiththat.com/2018/08/25/why-i-dont-deny-confessions-of-a-climate-skeptic-part-1/
https://wattsupwiththat.com/2018/08/27/why-i-dont-deny-confessions-of-a-climate-skeptic-part-2/
For myself, I can assure you that I do not say 1, 3 or 4. Just the opposite. I do have serious doubts about the numerical results claimed for “GAST”, but freely and vigorously acknowledge that, if nothing else, it has pleasantly warmed a little since the end of the Little Ice Age.
Why don’t you actually read and discuss the main topic of the essay? I’d be interested in your rational thoughts.
Tell you what, show us the EVIDENCE THAT GLOBAL TEMPERATURE MEANS ANYTHING.
If the Earth’s global temp is increasing then radiation from the earth to space should be decreasing, i.e., trapped energy causing heat (temperature) increase is heat that is retained. Let us know when that has been proven with EVIDENCE and not models.
mosh don’t do evidence, only drive-bys in black SUVs.
All you’ve done here is create a series of strawman arguments you can argue against. None of them hold water when reviewed with a modicum of critical thinking. Stop putting words in people’s mouths.
I made the case that the Empirical Rule in statistics allows one to estimate the standard deviation of a population sample from the range. (https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/) Based on that, the standard deviation of the global temperature should be several tens of degrees. Yet the typical person supporting the claim that many readings allow an increase in precision of the global mean temperature claims that we can calculate it to a precision of +/-0.005 deg C. That does not square with the estimate of several tens of degrees C for even a 2 sigma uncertainty. Yet no one has explained why the estimate is at least three orders of magnitude larger than the claimed precision obtained from averaging. They stick to their claim of the Central Limit Theorem justifying at least an order of magnitude greater precision for the average than the original temperature measurements.
For those that don’t remember, the Empirical Rule basically says that most of the values in a distribution will be within 3 sigmas of the mean. That is very nearly the total range of values. For the earth that range is about +/- 150F, so the 3 sigma value would be about 75F. If you want to use 1 sigma as the uncertainty it would be 75/3 F = 25F. One can quibble about the coverage factor and such, but it still gives an uncertainty estimate that is several orders of magnitude greater than the differences the clisci alarmists attempt to hang their hats on.
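To make that range-to-sigma arithmetic concrete, here is a minimal Python sketch; the 75 F “3 sigma” figure and the +/-0.005 C precision claim are taken from the comments above and are illustrative assumptions, not values from any published dataset.

```python
# Minimal sketch of the Empirical Rule estimate described above.
# The 75 F "3 sigma" figure and the +/-0.005 C precision claim come
# from the comments; they are illustrative assumptions, not data.

def sigma_from_three_sigma(three_sigma):
    """Empirical Rule: ~99.7% of values lie within +/-3 sigma of the
    mean, so a crude 1-sigma estimate is the 3-sigma spread / 3."""
    return three_sigma / 3.0

three_sigma_f = 75.0                              # per the comment above
sigma_f = sigma_from_three_sigma(three_sigma_f)   # 25 F
sigma_c = sigma_f * 5.0 / 9.0                     # ~14 C, as a spread

claimed_precision_c = 0.005                       # claimed global-mean precision
print(f"1-sigma spread estimate: ~{sigma_f:.0f} F (~{sigma_c:.0f} C)")
print(f"That is roughly {sigma_c / claimed_precision_c:,.0f}x the "
      f"claimed +/-{claimed_precision_c} C precision")
```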
I had never heard of the Empirical Rule till Clyde posted about it. I’m still amazed at the tidbits you throw out Clyde!
You might be even more amazed at the tidbits that are still in my refrigerator! 🙂
They claim the random variables (station records/averages) are samples. Yet they then divide by √N to get an SEM. They don’t even realize the SD of the sampling distribution IS THE SEM. Worse, they then think that defines the precision of the mean rather than the INTERVAL where the mean may lie.
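A small simulation of the distinction being drawn here, i.e., the spread of individual readings versus the spread of sample means (the SEM); all numbers below are invented for illustration:

```python
# Illustrative simulation only: contrast the SD of individual readings
# with the SD of sample means (the SEM). Values are made up.
import random
import statistics

random.seed(0)
population_sigma = 25.0        # spread of individual readings (assumed)
n_per_sample = 100             # readings averaged into each sample mean
n_samples = 2000               # number of simulated sample means

sample_means = [
    statistics.mean(random.gauss(15.0, population_sigma)
                    for _ in range(n_per_sample))
    for _ in range(n_samples)
]

sem_empirical = statistics.stdev(sample_means)          # SD of the means
sem_theoretical = population_sigma / n_per_sample ** 0.5

print(f"SD of individual readings: {population_sigma:.1f}")
print(f"SD of sample means (SEM):  {sem_empirical:.2f} "
      f"(theory: {sem_theoretical:.2f})")
# The SEM describes how closely the sample mean brackets the population
# mean; it is not the spread of the underlying readings.
```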
“DENY known science”
Why didn’t you just say “settled science” instead of “known science”?
Isn’t “settled science” what you really meant?
Or maybe you meant “political science”?
That’s where the “Green” comes from today.
[N.B. I posted this before reading all of the comments.]
Only one team, right up front just looking at the data, said the data was insufficient to reach a conclusion. Through the process only about 13% of the teams ultimately concluded that the data was insufficient. About 87% said “What the hell, we’ll just forge ahead, damn rigorous data analysis.” Paleoclimatology and its darling, Michael Mann, come to mind first and foremost.
The results will be valuable if the government funders require grant recipients, at a minimum, to preregister their studies and clearly show their data collection, data analyses, model selections and each and every decision made along the way. The funding entities should also fund independent parallel studies with the same objectives, having no connections to the original grant recipients or their institutions and having not seen the other study. Additionally, because this is all publicly funded science, all work products and study decisionmaking should be posted online such that scientists and citizen-scientists can replicate (or not) studies on a massive, world-wide scale. No more pal review.
While the study is admirable, they just had to throw in climate change and U.S. politics at the end.
Dave ==> Lamentable, but from the authors’ point of view, self-protective. They will, nonetheless, be attacked as if they had not said so.
kip
If you are interested in why scientists don’t agree, even on simple questions, then you absolutely must read this paper, right now. Pre-print .pdf is here.
If it doesn’t change your understanding of the difficulties of doing good honest science, you probably need a brain transplant. …. Or at least a new advanced critical thinking skills course.
“This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases.”
most discussions? focus on systematic bias?
Yes, including STRUCTURAL uncertainty, the bias that occurs because of analyst choices.
We deal with this all the time.
Example: HadCRUT versus GISS versus BEST, all different methods/analyst choices.
Example: RSS versus UAH.
Skeptics only want to look at UAH and ignore RSS,
but RSS actually quantifies its structural uncertainty. UAH does not.
So I take it only skeptics with good brain transplants will look at RSS instead of UAH.
Mosher ==> This essay is not about RSS or UAH or BEST or GAST at all, is it?
It is about the finding that small decisions about how to carry out an analysis to ask a particular question of a data set can change, even reverse, the findings, even when the work is done by well-meaning, serious, acknowledged, unbiased expert teams.
In this comment, you mix, without attribution, quotes from my essay and quotes from the authors of the paper under consideration. I cannot speak for the authors.
Your war on your beloved imaginary enemy “The Skeptic” is misguided.
Sure, there are silly teenagers here spouting nonsense in comments — just as bad and ill-informed as the Junior Climate Warriors commenting at ClimateCentral and the like.
You should concentrate on what the authors here say, and ignore the “sillies” — I do.
HEY!
“Sure, there are silly teenagers here spouting nonsense in comments — just as bad and ill-informed as the Junior Climate Warriors commenting at ClimateCentral and the like.”
I’m not a teenager! 😎
(That was meant as a joke for those who didn’t realize.)
Phew…. was almost fooled….
Something to consider that plays into this are the unstated (and usually unexamined) assumptions that guide the researchers.
I think that the fact that there are too often unstated assumptions by the researchers is the point.
(In CliSci, not a lot of “Green” goes to those who don’t toe the green line.)
I’ve read this study twice. My biggest takeaway is that each and every decision made in analyzing data can send you off toward a different conclusion, some of which simply must be incorrect. When I think of climate science several things come to mind.
1. Are temp databases consistent in homogenizing, infilling, and weighting when creating their datasets?
2. Are adjustments to data ever legitimate for the sole purpose of “creating” LONG records?
3. Is station data considered a population or a sample? Different inferences are made from each.
4. Are the station averages IID samples?
5. Why are variances and/or standard deviations NEVER quoted for global average temperatures? Are they not available?
In Climate Science, we see the misguided belief that more processing – averaging, anomalies, kriging, smoothing, etc. – reduces uncertainty. The opposite is true: more processing increases uncertainties. Climate science does not even acknowledge the simplest type of uncertainty – original measurement uncertainty – but rather wishes it away.
There is NO documented belief that more processing reduces uncertainty.
You don’t understand spatial stats, or structural uncertainty, or anything about uncertainty.
I’m certain of that.
You *have* to be kidding, right? The citations have been provided over and over at WUWT. For instance:
1. John Taylor, “Introduction to Error Analysis”, Page 61, Eq. 3.18
let q = x/u, then ẟq/q = sqrt[ (ẟx/x)^2 + (ẟu/u)^2 ]
Since in an average u is a constant then ẟu = 0 and the uncertainty becomes
ẟq/q = ẟx/x
If x is a sum of measurements, x1 + x2 + x3 … + xn, then ẟx = sqrt[ ẟx1^2 + ẟx2^2 + … ẟxn^2]
2. JCGM 100:2008
Eq. 1, Pg 20:
let Y = f(x1,x2,…,xn)
then, Eq 10, Pg 31
u_c^2(y) = Σ(∂f/∂x_i)^2 u^2(x_i) from i=1 to i = N
If x_N is the number of measurements then it is a constant and, once again, u^2(x_N) = 0, and you are left with the total uncertainty being the root-sum-square of the measurement uncertainties.
How many more citations would you like?
As has been pointed out over and over this leads to the uncertainty in a daily mid-range temperature being
u_midrange = sqrt[ u_tmax^2 + u_tmin^2 ]. If the individual temperature measurements have an uncertainty of +/- 0.5C then the daily mid-range temperature has an uncertainty of +/- 0.7C.
If you combine 30 of these daily mid-range values into a monthly average the total uncertainty becomes u_monthly = sqrt[ 30 * u_midrange^2] = u_midrange * sqrt(30) = +/- 3.8C.
Your uncertainty range keeps on growing as you average the monthly values to get an annual average and grows even more when you try to find a 10yr, 20yr, or 30yr annual average.
Every time you average independent, random, measurements of different things each having uncertainty the total uncertainty is at least the root-sum-square of the individual measurement uncertainties. This is most of the problem with the GAT. It becomes unusable and not fit for purpose because of the uncertainty associated with it.
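A minimal Python sketch of the root-sum-square arithmetic as applied in this comment; the +/-0.5 C per-reading figure is the comment’s illustrative assumption:

```python
# Root-sum-square (RSS) combination of independent uncertainties, as
# applied in the comment above (Taylor Eq. 3.18 / GUM Eq. 10 style).
# The +/-0.5 C per-reading uncertainty is an illustrative assumption.
from math import sqrt

def rss(uncertainties):
    """Combine independent uncertainties in quadrature."""
    return sqrt(sum(u * u for u in uncertainties))

u_reading = 0.5                            # per-reading uncertainty (C)
u_midrange = rss([u_reading, u_reading])   # Tmax and Tmin combined: ~0.7 C

# The comment treats the monthly average's uncertainty as the RSS of the
# 30 daily mid-range uncertainties (the divisor 30 being an exact constant).
u_monthly = rss([u_midrange] * 30)         # ~3.8-3.9 C depending on rounding

print(f"Daily mid-range uncertainty: +/-{u_midrange:.2f} C")
print(f"Monthly figure as computed above: +/-{u_monthly:.1f} C")
```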
Anomalies do *NOT* reduce uncertainty. Anomalies are calculated using values that have uncertainty. A mid-range temperature has an uncertainty. The average subtracted from it has an uncertainty. It doesn’t matter if you add or subtract values with uncertainty, the total uncertainty is an addition. T_midrange – T_avg = T_anomaly. Then u_Tanomaly = sqrt[ (u_Tmidrange)^2 + (u_Tavg)^2 ]
UNCERTAINTY GROWS!
If you calculate 30 anomalies and then average them to get a monthly average anomaly the uncertainty GROWS EVEN MORE because you have added the uncertainty of the multi-year average to each daily uncertainty. Your uncertainty would be less if you just calculated the average T_midrange value and subtracted just that value from the multi-year average value. But the uncertainty would still grow in either case.
Smoothing doesn’t lessen uncertainty. You are still doing addition, subtraction, dividing, and multiplication to do the smoothing. The uncertainty grows with each processing step you take!
BOTTOM LINE; You can’t reduce the uncertainty of independent, random measurements of different things with processing. You can only increase uncertainty with processing of that data. Just as Kip said.
Tim ==> Please send your email address to me at my first name at i4.net.
Tim ==> Can you please send your email address again? Internet Pixies disappeared it….
done
Mosh,
I have a number of tables of the recorded (from NOAA via NWS) record highs and lows for my little spot on the globe. I started collecting some in 2007. (I used the WayBackMachine to get the oldest I could find, from 2002.)
Long story short, about 10% of the record highs and lows have been “adjusted”. Not broken; the old records were changed, with new “record” highs at lower values than the old ones, and vice versa.
(2012 had lots of “adjustments” between the April list and the June list!)
Whoever pushed the button on the blender, why should I trust them?
If the measured and recorded DATA wasn’t sound at the time … what do we have?
The new values are theoretically true?
A computer model is more trustworthy than an eyeball looking at the thermometer 50 to 80 years ago?
Gunga,
Only last evening I saw a step-like time series change in 2012 for about a dozen Australian stations. Work from Chris Gillham at his waclimate blog looking at data during changes from LIG to electronic thermometry.
Are you aware of any conference or international event that might have led to this? I am not. The magnitude is about 0.1 to 0.3 deg C Tmax in that step, varies with station. Too big to ignore. Geoff S
No, I’m not aware of any conferences or international event.
I did find this:
https://wattsupwiththat.com/2012/09/26/nasa-giss-caught-changing-past-data-again-violates-data-quality-act/
And I think that was James Hansen’s last or next to last year as director of NASA GISS.
Did you see what Mosher had to say on that thread?
“Since the past is an estimate …” and “… we don’t know the temperature of the past.” Apparently sheets from the past containing temperature recordings are unreadable, i.e., we don’t know what they were, so it is estimated.
I suspect he was talking about unmeasured locations, but that is a deflection, as usual. Anthony was discussing recorded temps. Nothing new under the sun, same old, same old!
I didn’t look at the comments. My Google search was trying to find out what had changed at GISS in 2012.
I had a vague recollection that around that time James Hansen had been replaced by Gavin Schmidt.
Also that the program (program language?) used had been changed. That link is the best I found to sum it up.
I’m just “Mr. Layman”. But even a layman could notice back in 2007 with Al Gore promoting his “Inconvenient Truth” doctoredmentary that “something ain’t right here”.
That’s when I first copy/pasted the NWS record highs and lows for the day into Excel. Sorted by date. Most of the record highs for the day then were before 1950 and most of the record lows were after 1950.
(I didn’t “find” WUWT till 2012.)
I just looked at the comments and found that I had put up a list of the changes made between the April 2012 list of record highs and the 2007 list of record highs.
(The formatting wasn’t preserved in the copy/paste.)
It should be intuitively obvious that the greater the distance an interpolated or extrapolated point is from a measured point, the greater the unreliability. Thus, many, if not most, gridding algorithms weight the points closest to the point to be interpolated more heavily than the distant ones.
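For illustration, a minimal inverse-distance-weighting sketch of that point; the coordinates, values and the 1/d² weighting choice are assumptions for the example, not any particular dataset’s gridding method:

```python
# Minimal inverse-distance-weighting (IDW) sketch of the point above:
# stations near the target point dominate the interpolated value.
# Coordinates, values and the 1/d**2 weighting are illustrative only.

def idw(target, stations, power=2.0):
    """Interpolate a value at `target` from (x, y, value) stations,
    weighting each station by 1 / distance**power."""
    num = den = 0.0
    for x, y, value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return value                  # target sits exactly on a station
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den

stations = [(0.0, 0.0, 10.0), (1.0, 0.0, 12.0), (10.0, 10.0, 25.0)]
print(idw((0.5, 0.1), stations))   # ~11: dominated by the two nearby stations
```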
This is why you saying “Nope, nope, nope” as your ‘contribution’ to a discussion has little impact on the reader.
All except the Kelvin scale. So, that means not all.
Another of your fragments that doesn’t explain anything.
Mosher,
What is the Standard Deviation of the distribution used in computing the average?
kip
“It appears that the hope that the many-analysts/many-analysis-teams approaches would help resolve some of the tricky scientific questions of the day has been dashed”
dashed?
really?
You know, I went through your article and thought I would just comment on all the unqualified claims you made, especially those with no evidence.
Here’s the thing.
If you go looking for multiple interpretations of the same data you will find them. This is why
theory is always underdetermined; this is why Q exists, why 9-11 truthers exist,
why Biden can deny there is a recession.
mosh is a reality denier.
Mosher ==> Are you ever going to comment on the study at hand — or just make snarky, questionable, personal attacks based on your favorite talking points?
The study is not about interpretation of the data….it is about analysis and analysis methods and what happens to results when decisions are made in the analysis approach.
You’ve slipped your gears and wandered off into your political fantasies.
+100
kip pretends this study breaks ground!!!!!
This new study adds in another layer – the uncertainty caused by the multitude of tiny decisions made by researchers when analyzing a research question.
Look, we investigate this all the time:
https://eapsweb.mit.edu/news/2021/quantifying-parameter-and-structural-uncertainty-climate-modeling#:~:text=The%20structural%20uncertainty%20comes%20from,small%2Dscale%20processes%20entirely%20correctly.
https://journals.ametsoc.org/view/journals/bams/86/10/bams-86-10-1437.xml
Historically, meteorological observations have been made for operational forecasting rather than long-term monitoring purposes, so that there have been numerous changes in instrumentation and procedures. Hence to create climate quality datasets requires the identification, estimation, and removal of many nonclimatic biases from the historical data. Construction of a number of new tropospheric temperature climate datasets has highlighted previously unrecognized uncertainty in multidecadal temperature trends aloft. The choice of dataset can even change the sign of upper-air trends relative to those reported at the surface. So structural uncertainty introduced unintentionally through dataset construction choices is important.
sceptics want to IGNORE structural uncertainty
https://twitter.com/climateofgavin/status/880790799532871680
https://www.remss.com/missions/ssmi/uncertainty/
You won’t find this in Roy Spencer’s work.
Steven,
By your logic, homogenisation is not needed. The daily arithmetic average of many observations in a region will include a distributional scatter that shows in the uncertainty. The individual observations are valid and should not be altered. Abundant methods exist for using them without changing them, as I am sure your statistical learning agrees. You do not have to ignore structural uncertainty, you simply need to know how to process it, even if it gives an uncertainty answer big enough to challenge your preconceptions. Geoff S
kip
this paper is pretty interesting
Let me share my experience.
In reviewing skeptical work on the temperature series I always take note of the decisions they make,
and how they sift the data to pull the records they like.
Without exception they never test the use of alternative data, like: does our result hold up using RSS or JMA or AIRS?
roll tape
https://wattsupwiththat.com/2022/10/08/tokyo-mean-september-temperatures-have-seen-no-warming-in-34-years-jma-data-show/
You see, you don’t care about uncertainties or scientific purity, or you would have been all over that chart!!!
No skeptic here saw fit to ask about uncertainties?
Why?
You liked the answer.
roll tape
https://wattsupwiththat.com/2022/10/05/the-new-pause-lengthens-to-8-years/
As always, the Pause is calculated as the longest period for which the least-squares linear-regression trend up to the most recent month for which the UAH global mean surface temperature anomaly is available is zero.
1. ONE dataset: UAH.
2. A dataset with no published uncertainties.
Pot, kettle, black much?
Desperate much?
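As an aside, the Pause definition quoted above (the longest period ending at the latest month whose least-squares trend is zero or below) can be sketched in a few lines; the anomaly series here is invented purely for illustration:

```python
# Illustrative sketch of the "Pause" calculation quoted above: find the
# longest run ending at the most recent month whose least-squares trend
# is zero or negative. The anomaly series below is made up.

def ols_slope(y):
    """Least-squares slope of y against 0, 1, 2, ... (equal time steps)."""
    n = len(y)
    x_mean = (n - 1) / 2.0
    y_mean = sum(y) / n
    num = sum((i - x_mean) * (v - y_mean) for i, v in enumerate(y))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den

def longest_zero_trend_tail(anomalies):
    """Months in the longest tail of the series with a non-positive trend."""
    for start in range(len(anomalies) - 1):
        tail = anomalies[start:]
        if ols_slope(tail) <= 0:
            return len(tail)      # earliest qualifying start = longest tail
    return 0

monthly_anomalies = [0.05, 0.08, 0.12, 0.15, 0.22, 0.21, 0.20, 0.21, 0.19, 0.20]
print(longest_zero_trend_tail(monthly_anomalies), "months")   # 6 months here
```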
What are you whining about? If the monthly uncertainty is +/- 3.8C then you will find measurements on that graph that exceed it. The range of values is from about 26C to 22C or a 4C range – which exceeds the typical uncertainty interval.
There were only like 50 total comments on that thread. Are you expecting everyone to post on every thread? I didn’t even read it! It wasn’t addressing a GLOBAL average in any case, only a local average. The biggest takeaway is that there is an obvious natural variation from year to year at that location. It appears that well-mixed, global CO2 is not a significant control knob for that location.
It would be helpful if uncertainty bars were shown on the graph. It would be much simpler to tell if the temperature measurements are meaningful. Frankly, I can’t even tell from the site how the monthly averages were derived. That alone is an issue affecting uncertainty.
Steven,
So you cannot define me as a skeptic, since you say skeptics always choose the smallest data set and filter it. Steven, read the essay Tom Berger and I wrote in WUWT 5 days ago. Do any of your criticisms of skeptics apply to our skeptical essay? I think not. Geoff S
I lost count of how many strawmen he erected here.
Mosh ==> You are still fighting your imaginary enemy “The Skeptic” …. and you have misunderstood the paper discussed here.
Mosher,
https://wattsupwiththat.com/2022/10/08/tokyo-mean-september-temperatures-have-seen-no-warming-in-34-years-jma-data-show/
You missed the whole point. I suspect on purpose.
If there is a location that has no warming, yet the average is supposed to be +1.5 degrees, guess what? You need a location that has a 3 degree rise! TELL US WHERE THAT IS!
You and others hide behind averages. An average implies that there are data both above and below the mean. That is what a standard deviation is for. Tell us where the cooler and warmer locations actually are! Tell us what the Variance or Standard Deviation is for the Global Average Temperature! Not the anomaly, the actual temperature. Have you even calculated the combined variance for the GAT? Has anyone?
How many papers and pronouncements do you need to be shown that everywhere is warming at or above the GAT anomaly? Have you ever listed locations that are both above and below the average and what the values are?
You castigate sceptics for pointing out anomalous data. That is not a cogent argument at all. You need to answer why those are not anomalous. If you would publish a variance for the AVERAGES you would go a long way to resolving the issues.
I’m still waiting for one of the ‘professionals’ to show the trends for the Köppen climate classes instead of a single number for the whole world.
I’d even settle for a breakdown by latitude bands!
I read this study. Here is a pertinent section from it.
Like it or not, this paper bolsters a lot of what we sceptics have been saying about uncertainty, even on this site. The paper basically says there is NO transfer function that will allow changing historical temperature records while maintaining any level of certainty.
Holy crap, how many people here have said that homogenizing, infilling, and adjusting raise uncertainty? This is exactly what this paper says is the probable outcome.
Jim Gorman,
Too tiresome to check for specifics but they tested a couple of hundred variables.
Expertise was one. They report that even highly appreciated experts arrived at different findings. My analysis would say that the highly credentialed were not as competent as claimed. All but one of them must have failed this mini exam, possibly all.
I would suggest another variable: Have you as a participant ever read the Guide To Uncertainty in Modelling? Yes to the left of me, no to the right of me while we count. Geoff S
Geoff,
Do you mean “clowns to the left of me, jokers to the right”?
Also it seems I forgot to add the quote from the paper!
https://journals.ametsoc.org/view/journals/bams/86/10/bams-86-10-1437.xml
You are unjustifiably using your Broad Brush. Upstream you asked for citations. How about you provide citations?
I’m not going to take the time to chase it down, but I think I remember reading the estimated uncertainties for the UAH data.
Uncertainty is scientific, certainty is religious. Climate Alarmism illustrates this perfectly.
Just loved it!
Truth to power, with a thermonuclear garnish.
Nice job
Peter ==> Thank you.
An enjoyable read. The study suggests it’s important that epistemic humility be a part of the process. I think everyone can agree with that attribute. Except Mosher. That always seems to be missing in his arsenal.
There is a fourth “C” not listed but which so many supposedly scientific studies and “scientists” lack – “Common Sense”…
Kip, while I see validity to the arguments and findings, I suspect that a major factor is that the research problem tackled in this comparative test is basically a sociopolitical, psychological/humanities-type problem: trying to decipher people’s “feelings.” Pseudo-science from the outset. One would be hard pressed to believe that any of the data are representative of … what? The scale is far too large and uncontrolled as well. Too many degrees of freedom. Humans are complicated, aren’t they?
There is still plenty of room for disagreement with any research, but starting from the other end of the spectrum, they should try a carefully executed, narrow scope experimental design. Use a small-scale research project, controlling for more of the variables and using a predetermined statistical method and well-defined hypothesis. For example, employ a “simple” field plot study. (Not so simple as it might at first appear if one has ever done it).
Beginning small, from there I expect that as one increases the scale and complexity, it would reveal rapid divergence among research teams. By the time they are done, one would be forced to conclude that global scale, multidimensional, multivariate and time-dependent inferences are just a shot in the dark. If “climate” researchers were to be honest, they would conclude (indeed already should be concluding) that there is no need for alarm, but with a tiny hint of doubt … “what if we are wrong?” Since in many of their eyes the consequences of being wrong could be very serious, they default to the side of extreme caution (translation: they tune their models toward high-side bounding estimates that overstate the likely impacts), then do not caution others as they draw inappropriate conclusions from what they are reading. That may have been reasonable 40 years ago, but ongoing measurement and research are showing equilibrium climate sensitivity is likely toward the low end of the range and that their models greatly exaggerate projected warming and the anthropogenic contribution compared to measured changes (even using flawed, biased “data sets”). Meanwhile, the rates of change are slow and steady and the magnitude is within the range of natural variability, meaning that this is NOT an emergency.
As a side note, I repeatedly read “research” summaries from various universities posted on WUWT, usually with ridiculous results and conclusions. Especially for those that are “impact of climate change on ___”, they are invariably amateurish, high school science fair quality. No controls, no experimental design, no hypothesis, no predetermined test statistic … Simply bad “science” by weak principle investigators trying to get some grad students their degrees and more publications, while feeding like pigs from the climate funding trough.
The so-called research question is hopelessly vague. This explains the results of the study.
David ==> But, you see, that is quite intentional! The study designers wanted a very Real World question – the kind of question (in the social sciences) quite likely to be asked by a Prime Minister or a President or a Congress.
It is a question being asked by Congressmen and Senators in the United States: “Will more immigration reduce public support for government provision of social policies?”
It is very like the questions being asked in CliSci: What has caused the warming since the Little Ice Age? How much of that warming has been anthropogenic? Will suppressing the use of fossil fuels harm or improve world/national economies?