A Hidden Universe of Uncertainty

Guest Essay by Kip Hansen — 18 October 2022

Every time someone in our community, the science skeptic or Realists® community, speaks out about uncertainty and how it affects peer-reviewed scientific results, they are immediately accused of being Science Deniers or of trying to undermine the entire field of Science.

I have written again and again here about how the majority of studies in climate science vastly underestimate the uncertainty of their results.  Let me state this as clearly as possible:  Any finding that does not honestly include a frank discussion of the uncertainties involved in the study, beginning with the uncertainties of the raw data and then all the way through the uncertainties added by each step of data processing, is not worth the digital ink used to publish it.

A major new multiple-research-group study, accepted and forthcoming in the Proceedings of the National Academy of Sciences, is set to shake up the research world.  This paper, for once, is not written by John P.A. Ioannidis, of “Why Most Published Research Findings Are False” fame.

The paper is:   “Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Idiosyncratic Uncertainty”[ or as .pdf here ].

This is good science.  This is how science should be done. And this is how science should be published. 

First, who wrote this paper? 

Nate Breznau et many many al.  Breznau is at the University of Bremen.  There are 165 co-authors from 94 different academic institutions.  The significance of this is that it is not the work of a single person or a single disgruntled research group.

What did they do?

The research question is this:  “Will different researchers converge on similar findings when analyzing the same data?”

They did this:

“Seventy-three independent research teams used identical cross-country survey data to test an established social science hypothesis: that more immigration will reduce public support for government provision of social policies.”

What did they find?

“Instead of convergence, teams’ numerical results varied greatly, ranging from large negative to large positive effects of immigration on public support.”

Another way to look at this is to examine the actual numerical results produced by the various groups asking the same question of identical data (see the figure of results in the paper).

The discussion section starts with the following:

“Discussion:   Results from our controlled research design in a large-scale crowdsourced research effort involving 73 teams demonstrate that analyzing the same hypothesis with the same data can lead to substantial differences in statistical estimates and substantive conclusions. In fact, no two teams arrived at the same set of numerical results or took the same major decisions during data analysis.”

Want to know more?

If you really want to know why researchers asking the same question of the same data arrive at wildly different, and conflicting, answers, you will really have to read the paper.

How does this relate to The Many-Analysts Approach?

Last June, I wrote about an approach to scientific questions named The Many-Analysts Approach. 

The Many-Analysts Approach was touted as: 

“We argue that the current mode of scientific publication — which settles for a single analysis — entrenches ‘model myopia’, a limited consideration of statistical assumptions. That leads to overconfidence and poor predictions.  ….  To gauge the robustness of their conclusions, researchers should subject the data to multiple analyses; ideally, these would be carried out by one or more independent teams.”

This new paper, being discussed today,  has this to say:

“Even highly skilled scientists motivated to come to accurate results varied tremendously in what they found when provided with the same data and hypothesis to test. The standard presentation and consumption of scientific results did not disclose the totality of research decisions in the research process. Our conclusion is that we have tapped into a hidden universe of idiosyncratic researcher variability.”

And that means, for you and me, that neither the many-analysts approach nor the many-analysis-teams approach will solve [correction: an errant “not” deleted] the Real World™ problem presented by the inherent uncertainties of the modern scientific research process – “many-analysts/teams” will use slightly differing approaches, different statistical techniques and slightly different versions of the available data.  The teams make hundreds of tiny assumptions, mostly regarding each as “best practice”.  And because of these tiny differences, each team arrives at a perfectly defensible result, sure to pass peer-review, but each team arrives at different, even conflicting, answers to the same question asked of the same data.
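
To make the teams’ divergence concrete, here is a minimal, purely illustrative simulation. It is not from the Breznau et al. paper; the data, the “teams,” and both analysis choices (an outlier-trimming threshold and a covariate adjustment) are my own assumptions. Nine synthetic “teams” analyze the identical data set and report different, even opposite-signed, effect estimates:

```python
# Minimal sketch (illustrative only, not the paper's method): the same
# data, analyzed under slightly different but defensible-looking choices,
# yields effect estimates that differ and can even change sign.
import numpy as np

rng = np.random.default_rng(42)
n = 500
immigration = rng.normal(0, 1, n)          # the shared "raw data"
support = rng.normal(0, 1, n)              # true effect of immigration: zero

def one_team(trim, control_strength):
    """One 'team': trim outliers, then adjust for a chosen covariate."""
    covariate = 0.5 * immigration + rng.normal(0, 1, n)
    keep = np.abs(immigration) < trim
    adjusted = support[keep] - control_strength * covariate[keep]
    slope, _ = np.polyfit(immigration[keep], adjusted, 1)
    return slope

for trim in (1.5, 2.0, 3.0):               # "best practice" cutoffs
    for control in (0.0, 0.3, 0.6):        # "best practice" adjustments
        print(f"trim={trim}, control={control}: effect = {one_team(trim, control):+.3f}")
```

Every one of these nine analyses is defensible on its face, yet they disagree: the paper’s finding in miniature.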

This is the exact problem we see in CliSci every day.  We see this problem in Covid stats, nutritional science, epidemiology of all types and many other fields. This is a separate problem from the differing biases affecting politically- and ideologically-sensitive subjects, the pressures in academia to find results in line with current consensuses in one’s field and the creeping disease of pal-review.

In Climate Science, we see the misguided belief that more processing – averaging, anomalies, kriging, smoothing, etc. — reduces uncertainty.  The opposite is true: more processing increases uncertainties. Climate science does not even acknowledge the simplest type of uncertainty – original measurement uncertainty – but rather wishes it away.
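
One piece of this is easy to verify with standard (GUM-style) propagation of independent errors: forming an anomaly (a reading minus a baseline mean) combines two uncertainties in quadrature, so the anomaly is less certain than the raw reading, not more. A minimal sketch, with illustrative numbers:

```python
# Minimal sketch: standard propagation of independent uncertainties.
# anomaly = reading - baseline, so
# u(anomaly) = sqrt(u_reading^2 + u_baseline^2),
# which is always larger than u(reading) alone. Numbers are illustrative.
import math

u_reading = 0.5    # +/- deg C on a single station reading
u_baseline = 0.3   # +/- deg C on the 30-year baseline mean

u_anomaly = math.sqrt(u_reading**2 + u_baseline**2)
print(f"u(reading) = {u_reading:.2f} C, u(anomaly) = {u_anomaly:.2f} C")
```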

Another approach sure to be suggested is that the divergent findings should now be subjected to averaging, or to finding the mean — a sort of consensus — of the multitude of findings. The figure of results in the paper shows this approach as the circle at 57.7% of the weighted distribution. This idea is no more valid than the averaging of chaotic model results as is done in Climate Science — in other words, worthless.

Pielke Jr. suggests, in a recent presentation and follow-up Q&A with the National Association of Scholars, that getting the best real experts together in a room and hashing these controversies out is probably the best approach.  Pielke Jr. is an acknowledged fan of the approach used by the IPCC – but only so long as their findings are untouched by politicians. Despite that, I tend to agree that getting the best and most honest (no-dog-in-this-fight) scientists in a field, along with specialists in statistics and the evaluation of programmatic mathematics, all in one virtual room with orders to review and hash out the biggest differences in findings might produce improved results.

Don’t Ask Me

I am not an active researcher.  I don’t have an off-the-cuff solution to the “Three C’s” — the fact that the world is 1) Complicated, 2) Complex, and 3) Chaotic. Those three add to one another to create the uncertainty that is native to every problem.  This new study adds another layer – the uncertainty caused by the multitude of tiny decisions made by researchers when analyzing a research question.

It appears that the hope that the many-analysts/many-analysis-teams approaches would help resolve some of the tricky scientific questions of the day has been dashed.  It also appears that when research teams that claim to be independent arrive at answers in suspiciously close agreement, we ought to be suspicious, not reassured.

# # # # #

Author’s Comment:

If you are interested in why scientists don’t agree, even on simple questions, then you absolutely must read this paper, right now.  Pre-print .pdf is here.

If it doesn’t change your understanding of the difficulties of doing good honest science, you probably need a brain transplant. ….  Or at least a new advanced critical thinking skills course.

As always, don’t take my word for any of this.  Read the paper, and maybe go back and read my earlier piece on Many Analysts.

Good science isn’t easy.  And as we ask harder and harder questions, it is not going to get any easier. 

The easiest thing in the world is to make up new hypotheses that seem reasonable or to make pie-in-the-sky predictions for futures far beyond our own lifetimes.  Popular Science magazine made a business plan of that sort of thing. Today’s “theoretical physics” seems to make a game of it – who can come up with the craziest-yet-believable idea about “how things really are”.

Thanks for reading.

# # # # #

Carlo, Monte
October 17, 2022 6:18 pm

In Climate Science, we see the misguided belief that more processing – averaging, anomalies, kriging, smoothing, etc. — reduces uncertainty. The opposite is true: more processing increases uncertainties. Climate science does not even acknowledge the simplest type of uncertainty – original measurement uncertainty – but rather wishes it away.

Absolutely, Kip! But in fact it goes even deeper than this: these traits result from climate science practitioners having not even Clue #1 about what measurement uncertainty is, and isn’t.

tmatsi
Reply to  Carlo, Monte
October 17, 2022 8:10 pm

Averaging models that are all wrong is just the average of errors. If two wrongs do not make a right then multiple wrongs surely make things worse.

Steven M Mosher
Reply to  tmatsi
October 18, 2022 1:49 pm

wrong.

lets suppose you have 100 weather models.
25 show the hurricane landing in the panhandle of florida
25 show fort myers
50 show tampa.

you have to preposition repair trucks — desantis did this

WHERE?

https://www.firstcoastnews.com/article/weather/hurricane/what-are-spaghetti-plots/77-842923b7-6ac8-4e00-ab5d-3d83b19e3fd6

if past experience says
“the average of models gives you the best prediction”

what do you use?

uniformity principle says what

markx
Reply to  Steven M Mosher
October 18, 2022 5:21 pm

wrong.

lets suppose you have 100 weather models.

25 show the hurricane landing in the panhandle of florida

25 show fort myers

50 show tampa.

you have to preposition repair trucks — desantis did this

WHERE?

Gee. That is disappointing.

One would expect, of all people, that Mosher would understand the difference between a mathematical conclusion and a “best guess of where we situate the trucks that we will almost certainly end up moving again anyway…”

Jim Gorman
Reply to  Steven M Mosher
October 19, 2022 1:17 pm

“The average of models gives you the best prediction”

Bull puckey!

Would you fly to Mars when all the trajectories were calculated by averaging multiple “models” together to get the one in use?

How about flying in a plane where three models, none of which are individually correct, are averaged to find the flight envelope parameters? Would you volunteer to be the test pilot on the initial flight?

That whole assumption is scientifically preposterous! Only someone with no responsibility or accountability would propose that as an adequate solution to a problem.

Think about what you are saying and its effect on people! Do you think waiting until 2 – 3 hours before landfall is enough time to get to the lumberyard, buy plywood, nail it up, pack your family and belongings, and hit the road? Why do you think people all up and down the coast were preparing days before landfall? Because NOT ONE MODEL nor their average could pinpoint where it was going to go. Uniformity principle my butt!

I may sound like a broken record, but when are you going to quote some variance computation values for the Global Average Temperature data to go along with the averages? Are they too hard to calculate?

Michael S. Kelly
Reply to  tmatsi
October 19, 2022 8:48 pm

Two wrongs don’t make a right, but three rights do make a left…

oeman 50
Reply to  Carlo, Monte
October 18, 2022 9:03 am

One of the most revealing moments of my undergraduate studies was determining the magnitude of error in one of my labs. As many of you know, I had to take the error in each measurement and then propagate it through the calculations. I did not like the result! I could do better than that! But it was reality. Putting error bars on the graphs of other labs was a visual example. I don’t think these climate scientists like error bars….
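
A minimal sketch of the lab propagation described above, with illustrative numbers: for a product of two independent measurements, the relative uncertainties combine in quadrature.

```python
# Minimal sketch (illustrative numbers): propagating measurement error
# through a product, as in an undergraduate lab report.
# For q = x * y with independent errors: (u_q/q)^2 = (u_x/x)^2 + (u_y/y)^2
import math

x, u_x = 12.0, 0.5      # a measured length, +/- u_x
y, u_y = 3.0, 0.2       # a measured width,  +/- u_y

q = x * y
u_q = q * math.sqrt((u_x / x) ** 2 + (u_y / y) ** 2)
print(f"q = {q:.1f} +/- {u_q:.1f}")   # the error bar that goes on the graph
```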

Tim Gorman
Reply to  Kip Hansen
October 18, 2022 10:53 am

“They MUST know that when averaging results, you do not divide the error by the number of measurements along with the total.”

How would most of them come to know this? It is *never* taught today in any undergraduate or graduate course. They can only learn it through direct experience, and how many of them have that direct experience on which to base their judgement about results?

*I* only learned it in my EE and Physics labs. And even then it had to be pointed out by the lab teachers how to discern it. When you put a circuit together using 10% tolerance components it’s easy to get results that don’t match calculations. If you are never forced to go back and figure out why it happens you’ll never learn about uncertainty!
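
A minimal sketch of the textbook distinction behind this dispute (all numbers illustrative): averaging does shrink the purely random part of measurement error, but a systematic offset shared by every reading survives the mean no matter how large n gets.

```python
# Minimal sketch (illustrative): the random part of measurement error
# averages down roughly as 1/sqrt(n); a shared systematic offset does not.
import numpy as np

rng = np.random.default_rng(0)
true_value = 20.0
bias = 0.4                                   # same offset in every reading
for n in (10, 100, 10_000):
    readings = true_value + bias + rng.normal(0, 0.5, n)
    print(f"n={n:>6}: mean - true = {readings.mean() - true_value:+.3f}")
# the error of the mean converges to the +0.4 bias, never to zero
```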

Steven M Mosher
Reply to  Tim Gorman
October 18, 2022 1:55 pm

tim, youve proven over and over again you dont know what a spatial average is, how its estimated and how total uncertainty is estimated

Carlo, Monte
Reply to  Steven M Mosher
October 18, 2022 4:09 pm

Clown.

Tim Gorman
Reply to  Steven M Mosher
October 18, 2022 4:37 pm

How do you spatially average temperatures? Temperatures depend on all kinds of factors including humidity, terrain, geography, elevation, wind, land use under the measurement station, etc. These can vary greatly in just a few miles. Unless all of the various factors can be quantified and included in any function trying to relate the temperature between two locations then no amount of spatial averaging can be anything except totally uncertain!

You’ve been asked this before and refused to address the issue. How do you spatially average the temperatures shown on the attached weather map for 6:30pm, 10-18-22? Temperatures vary as much as 8F over a distance of 20 miles.

The best you could do would be a straight interpolation for a point between locations and there is simply no guarantee that such a procedure would provide an accurate value at all!

I understand spatial averaging just fine. It can work well for things that are not variable over time, such as for underground deposits of minerals. It doesn’t work for crap for something like surface temperatures where so many different variable factors are in play.

[attached image: weather map, image_2022-10-18_183330058.png]
Steven M Mosher
Reply to  Kip Hansen
October 18, 2022 1:53 pm

do you have a specific case of libel to make or do you just insult everyone equally?

Jim Gorman
Reply to  oeman 50
October 18, 2022 10:04 am

That’s one of the reasons I believe Dr. Frank’s comments on error. This is part of his work in helping scientists design experiments to minimize measurement errors. Many, many scientists and mathematicians have no direct practical physical experience in using measurements to build things. Master machinists, auto mechanics, and carpenters learn and use metrology concepts every day but don’t know it. Ask a finish carpenter why he spent $1500 or more on a top-of-the-line miter saw, and hundreds on a miter gauge. He’ll tell you: go look at my joints. How many scientists and mathematicians use a tool like this every day?

https://www.amazon.com/Incra-MITER1000SE-Miter-Special-Telescoping/dp/B0007UQ2EQ/ref=asc_df_B0007UQ2EQ?tag=bingshoppinga-20&linkCode=df0&hvadid=79920803409825&hvnetw=o&hvqmt=e&hvbmt=be&hvdev=m&hvlocint=&hvlocphy=&hvtargid=pla-4583520382642971&psc=1

Or this one?

https://www.festoolcanada.com/products/sawing/sliding-compound-miter-saws/575306—ks-120-reb-usa#Overview

Steven M Mosher
Reply to  Jim Gorman
October 18, 2022 2:16 pm

if frank were right he wouldnt bump his ass when he jumps

Carlo, Monte
Reply to  Steven M Mosher
October 18, 2022 4:10 pm

Are you this nasty in RL?

Jim Gorman
Reply to  Steven M Mosher
October 18, 2022 6:36 pm

I’m sure he loves you too.

Steven M Mosher
Reply to  oeman 50
October 18, 2022 1:52 pm

go through past posts here and note the lack of error bars

find 1 here
https://wattsupwiththat.com/2022/10/16/solar-sensitivity/

pot kettle black

Richard from Brooklyn (south)
Reply to  oeman 50
October 19, 2022 2:57 pm

I teach law graduates court practice and procedure (the skill of advocacy). Last Tuesday each had to argue in class that their client would likely suffer ‘extreme hardship’ based on a given set of facts. I stopped one student when they identified one reasonably expected event giving rise to another reasonably expected event which (they argued) would likely cause extreme hardship.
I explained that if the first on its own likely caused extreme hardship then they might have an argument. But two uncertainties combined could not logically cause likely extreme hardship, because the probability of that consequence is much lower (if each event has, say, a 60% chance, both together have only a 36% chance). The student immediately understood and reshaped their argument.
The principles of logic and uncertainty when reaching conclusions apply in the practice of law as well as science. (Well, at least in the competent practice of law!)

Steven M Mosher
Reply to  Carlo, Monte
October 18, 2022 2:12 pm

these traits result from climate science practitioners having not even Clue #1 about what measurement uncertainty is

another certainty espoused by skeptics.

i wish i had a guage block for every sceptic who was certain about measurement uncertainty.

Carlo, Monte
Reply to  Steven M Mosher
October 18, 2022 4:11 pm

Another content-free mosh drive-by.

Frank from NoVA
Reply to  Steven M Mosher
October 18, 2022 6:42 pm

guage

noun

Common misspelling of gauge.

Jim Gorman
Reply to  Steven M Mosher
October 18, 2022 6:50 pm

I’m impressed that you have a clue what gauge blocks even are. Tell us what YOU have used them for! Or, did you just look up what they are?

They do have a purpose when measuring rotating objects. Any idea what that might be?

MarkW2
Reply to  Carlo, Monte
October 18, 2022 3:44 pm

Climate scientists don’t understand statistics. In fact most scientists don’t understand statistics. I’ve personally had to correct pretty basic statistical errors made by eminent professors.

This is one of the biggest problems with the peer-review system, because the peer reviewers don’t understand statistics either!!!

This is a huge issue in climate science.

Drake
Reply to  MarkW2
October 20, 2022 7:24 am

McIntyre of Climate Audit often mentioned that EVERY scientific paper using “numbers” needed a statistician coauthor.

Do ANY climate “scientists” follow that recommendation? NO.

Steve Reddish
October 17, 2022 6:31 pm

The study and data given to the teams concerned a question involving people. This is the reason for so many different conclusions. People cannot be put into boxes.

Steve Reddish
Reply to  Kip Hansen
October 17, 2022 6:47 pm

The question involved data about different peoples, both the immigrants, and the receiving nations. Thus no common response should be expected. All the different peoples involved could not resolve to one result, or box.

Lewis P Buckingham
Reply to  Steve Reddish
October 17, 2022 7:04 pm

But then, that’s the problem.
With the same data we still don’t know, and cannot reliably forecast, what happens to the economy if immigration is increased.

MarkW
Reply to  Steve Reddish
October 17, 2022 7:07 pm

Data is data. Nobody is being put into a box.
That individuals can choose to interpret the data in front of them differently and hence come to different results is the whole point of the experiment.

Clyde Spencer
Reply to  MarkW
October 17, 2022 9:17 pm

I suppose the question is whether personal biases are influencing the conclusions that they come to.

George Daddis
Reply to  Clyde Spencer
October 18, 2022 7:18 am

If personal biases were in play, then it can be argued the same applies to “Climate studies”.

Steve Reddish
Reply to  MarkW
October 17, 2022 10:35 pm

One last try:
The question actually put to the teams of analysts was whether:
“greater immigration reduces support for social policies among the public.”

Wouldn’t the data be different for Ukraine immigration into the US than for US immigration into India?

Just a random example to point out that the answer would be different for every case studied, so the data probably wouldn’t support any particular conclusion overall.

Of course, the question that follows is: how can some analyst teams conclude yes, while others conclude no? Shouldn’t every team conclude “maybe”?

If that was your point, I agree. What I don’t understand is the objections made to my point that there probably isn’t a resolvable yes or no answer to the original question that was put to the analyst teams.

Editor
Reply to  Steve Reddish
October 17, 2022 11:14 pm

I think the point is that by giving every team the exact same data, it didn’t matter whether it was about Ukraine immigration into the US or US immigration into India. It was just data. There is an expectation in much of science that the same data really should lead to the same results. The scientific reality is that “we” get these results and “they” get those results, so “they” are wrong.

AndyHce
Reply to  Mike Jonas
October 18, 2022 1:16 am

I don’t know what the data here is but it seems very unlikely that it is in any way similar to the data from observing galaxies far far away or running a fluid and gas mixture through a complex pipe arrangement. If it is instead about people’s responses to events involving other people and political processes, rather different kinds of mental processes are likely to be going on inside the heads of anyone trying to analyze the data.

Certainly there can be multiple biases about what to expect or how to approach either kind of data, but the world views of those trying to make sense of the data are likely to be a much greater factor in the latter case.

With hard data about inanimate physical processes, the analysts would have to have considerable background knowledge about the field to even begin an analysis. This would likely direct their reasoning in rigorous ways, correctly or not.

With data about people’s reactions to social and political events, almost everyone is likely to have their own opinion to begin with and will tend to relate the question to a wide variety of their own ideas about persons, societies, governments, and many other social organizations.

Steve Reddish
Reply to  Mike Jonas
October 18, 2022 5:44 am

Which I never tried to counter. My point has always been that the data should have been inconclusive due to the uniqueness of each case, so no team should have had grounds for a solid yes or no. My position actually supports the position of others here that something is wrong with the way several of those teams were doing their analysis. (57.7 % of the teams agreed with me)

So, why the downvotes, I ask.

Tim Gorman
Reply to  Steve Reddish
October 18, 2022 10:03 am

Why the downvotes? Because data is data. It really doesn’t matter if the data was made up out of whole cloth. You would expect that identical data would produce similar study outputs. When the outputs vary so greatly, how do you assign a weighting to each output to determine which output is the “true value” and which ones are not?

Kip’s main point, and the point of the study, was to show that uncertainty doesn’t necessarily decrease when different analysis tools and processes produce different results. Yet there is a growing assumption among many in the scientific community that study outputs are 100% certain – e.g. climate model outputs.

For climate studies the uncertainty associated with any single study should be *at least* the spread of all model outputs since there is no way to identify the actual “true value” when so many of them vary so much from actual observations.

Steve Reddish
Reply to  Tim Gorman
October 19, 2022 7:41 am

Perhaps I’m being misunderstood because my extremely poor internet has caused me to make overly concise posts. My point never was that there wasn’t a problem with the conclusions the teams produced. Nor was I missing the point about lack of uncertainty consideration. My point was that any team producing either a yes or a no, was forcing a conclusion that wasn’t supported by the data. Both sides were wrong because “All the different peoples involved could not resolve to one result, or box”. (Actually, the majority of the teams came to the same conclusion as I did.) The point I was making was that too often, conclusions are totally unwarranted by the evidence (tree rings?).

Don
Reply to  Mike Jonas
October 18, 2022 8:51 am

But the data doesn’t answer the question. The question concerns the future, thus the scientists have to make assumptions about when in the future and how much greater the immigration.

Drake
Reply to  Don
October 20, 2022 7:29 am

The “scientists” make assumptions, THEN analyze the data to fit their assumptions. So you get a wide range of “answers” due to the wide range of assumptions.

You know, just like climate “scientists”.

writing observer
Reply to  Steve Reddish
October 18, 2022 6:43 am

You have identified the problem with any social “science” analysis – but missed the point of this study.

The researchers all had the SAME data set. The wildly different results are not from differences in the data, but from the differences in the analyses.

This is the problem with ANY multivariate analysis. Depending on how you slice, aggregate, etc., you can come up with no significant correlations (as most of the researchers here did), spuriously significant correlations, or you MIGHT find a real correlation. Which one is correct cannot be determined.

Climate “science,” as practiced by the “consensus,” is an example of the problem that doesn’t involve data from people, but physical measurements. They find a correlation between just two variables – CO2 level and temperature. Which, as posts here have shown over and over again, is a correlation that does exist – but is insignificant when more variables are considered.
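
The slicing problem writing observer describes is easy to demonstrate. A minimal sketch (illustrative, using no real data): scanning many unrelated candidate variables against a pure-noise outcome will turn up a small p-value by chance alone.

```python
# Minimal sketch (no real data): the multiple-comparisons trap. Scan
# enough unrelated variables and a "significant" correlation appears
# by chance even though the outcome is pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(size=200)                          # outcome: pure noise
p_values = [stats.pearsonr(rng.normal(size=200), y)[1] for _ in range(100)]
print(f"smallest p-value among 100 noise variables: {min(p_values):.4f}")
```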

Steve Reddish
Reply to  writing observer
October 19, 2022 7:44 am

Please read my response to Tim Gorman, just above.

Pflashgordon
Reply to  Steve Reddish
October 19, 2022 1:40 pm

I agree, Steve. Human data are not “just data.” Research teams know that to be a fact. I will practically guarantee that they did not simply use the raw numbers, state a simple hypothesis, and select a test statistic. Given this nasty problem and questionable data set, they would immediately try to tease out some means to control for the many variables. That would involve everything from “what was asked?” and “to whom was it asked?” to expert opinion and citing other research findings on related topics or variables. The scale is too large, with too many uncontrolled degrees of freedom.

This is not a basic high school statistics problem from the textbook.

Tim Gorman
Reply to  MarkW
October 18, 2022 4:33 am

Mark,

As you so rightly point out, data is data. The differences shown here are a result of the process of analyzing that data, not how the data was initially gathered or what the population was that generated the data.

If you will, each analysis represents a model. Each model is different in its choice of the tools to use in analyzing the data and in the order in which the tools are used.

It’s almost like a Monte Carlo study in which initial conditions are varied to see the effects on the output. Only in this case it’s not the initial conditions that are varied but the analysis process itself.

This is so similar to the “model” approach in the clisci field it is amazing. Kip really did his research and critical thinking on this subject. He is to be commended for this.

The clisci models vary all over the map and are mostly wrong when validated against actual observation data. As someone else pointed out, using the average of the ensemble is just finding the average error of the ensemble members! If the models are wrong the average will be wrong as well!

Somewhere in one of the threads on WUWT long ago, someone pointed out that many of the climate models give different results if the order of calculation inside the models is changed. That’s exactly what this study is highlighting. When the order of processing changes the results, which order of processing is correct? What is the uncertainty associated with each result? It’s like measuring the same thing using a different tool each time. There is no guarantee that measurement error will cancel in such a situation, leading to a determination of a “true value”. Some level of uncertainty will remain, and it is probably closely related to the variance of the data set.
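
The order-of-calculation effect is real even at the lowest level: floating-point addition is not associative, so merely reordering a sum can change a numerical result. A minimal sketch:

```python
# Minimal sketch: floating-point addition is not associative, so the
# order of operations alone can change a numerical result.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0  (the 1.0 is absorbed by rounding)
```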

Richard Page
Reply to  Steve Reddish
October 18, 2022 5:58 am

Individuals can, indeed, be unpredictable. However, with humans being social herd animals, our group behaviour tends to be a little more predictable, on average – fewer extremes in behavioural patterns and fewer individual outliers. These research groups are looking at group behaviour and psychology, not individuals.

Steve Reddish
Reply to  Steve Reddish
October 17, 2022 6:38 pm

I would rather see this type of test run on something physical. Then the test would be about which teams get true results, and we might be able to evaluate techniques and get people to realize uncertainty matters.

Steve Reddish
Reply to  Kip Hansen
October 17, 2022 7:14 pm

In the case of humans, individual differences do matter, in both the subject bodies, and the analytical teams. The data set may not even contain any commonalities to be harvested.

Richard Page
Reply to  Steve Reddish
October 18, 2022 6:01 am

No, not really. There is a world of difference between group behaviour and the behaviour of a lone individual.

Steve Reddish
Reply to  Richard Page
October 20, 2022 5:36 am

Cultures are individualistic in that each culture responds differently to interactions with other cultures. Which is what this survey tried to predict. The particular cultures involved can determine whether the immigrants assimilate or form enclaves. The data on compatibility between cultures is as varied as compatibility between individuals. And just as unpredictable.

Reply to  Steve Reddish
October 18, 2022 6:46 am

Climate “science” is just that type of test. With the same results.

Richard Goodley
Reply to  Steve Reddish
October 18, 2022 7:45 am

define .. true

MarkW
Reply to  Steve Reddish
October 17, 2022 7:04 pm

Comment withdrawn

Steve Reddish
Reply to  MarkW
October 17, 2022 7:43 pm

Hi Mark. I always enjoy your responses, so I would like to know – Are you withdrawing your 7:07 comment, or some other comment?

Howard Dewhirst
October 17, 2022 6:41 pm

These are spectacular results and need to be considered, but as the subject is a social science hypothesis with a fair degree of emotion, could it not be that at least some of the results reflect inbuilt unconscious biases toward one end or the other? To over-simplify, left-leaning researchers might already favour acceptance of immigration; right learners may favour the opposite. Certainly the uncertainties in, say, historical temperature data should be examined, but again, so-called deniers will favour some results over those uncertainties, and alarmists others?

Dave Fair
Reply to  Kip Hansen
October 18, 2022 10:07 am

+42X42^42! People, read (and attempt to understand) the study before commenting.

MarkW
Reply to  Howard Dewhirst
October 17, 2022 7:21 pm

That is possible; however, the fact that different groups of researchers would be subject to this kind of bias, even unconscious bias, is the whole point of this study.
Too many people claim that scientists are completely free of any kind of bias when reviewing data.

Clyde Spencer
Reply to  MarkW
October 17, 2022 9:22 pm

Yet, how often do we hear the description of a scientist as being a “disinterested observer?”

Clyde Spencer
Reply to  Howard Dewhirst
October 17, 2022 9:20 pm

… right learners may favour …

Is this a Freudian Slip? 🙂

Editor
October 17, 2022 6:48 pm

Thanks, Kip, most fascinating.

w.

MARTIN BRUMBY
Reply to  Willis Eschenbach
October 17, 2022 7:25 pm

Certainly is, Willis.

But is this an exception to what I suspect is the general rule, that the number of authors of a paper is inversely proportional to the value of the paper?

Hmmmmm

Editor
Reply to  MARTIN BRUMBY
October 17, 2022 11:17 pm

You can sort of see it as 73 papers each with a modest number of authors.

Bob
October 17, 2022 6:55 pm

Social sciences aside, what strikes me is that credible researchers were given the same data and the same hypothesis to work with and did not reach a consensus. To me that invalidates the 97% consensus claptrap. The 97% consensus on global warming is every bit as believable as Soviet leaders winning elections by a 99% margin. Neither is believable.

Mayor of Venus
Reply to  Bob
October 17, 2022 10:00 pm

It’s never clearly defined what exactly the 97% agree on. I believe the agreement is only that the increase of carbon dioxide causes some global warming. Therefore, as a luke-warmist, I and all other luke-warmists are members of the 97%. Only those expecting absolutely no warming from additional greenhouse gases are in the 3%. Thus, we luke-warmists who expect minor beneficial warming this century, are in the same group with those worried that warming is an existential threat. The major media present global warming as a serious problem, with 97% consensus, but we luke-warmists never signed on to the “serious problem” part.

George Daddis
Reply to  Mayor of Venus
October 18, 2022 7:27 am

Yes, like many things in CliSci summary and reporting, it is the INTERPRETATION of the results by someone else that most of the public see.

Examples are BHO’s statement that the Doran and Zimmerman study proved 97% of scientists said Global Warming was dangerous. The study said no such thing.

Of course the IPCC does the same misinformation/misinterpretation with their Summary for Policy Makers.

Bob
Reply to  Mayor of Venus
October 18, 2022 6:39 pm

What you say is true; I would also be included in the 97% using their criteria. That is partly my point: they cherry-picked which studies to include and cloud what it means to agree. Agree to what? We don’t know. This study provided the same data and the same hypothesis to all participants and the results don’t come close to 97% agreement. I don’t believe CO2 is the control knob for earth’s climate and I don’t believe earth’s climate will reach a tipping point. Both those things must be true for the global warming alarmists’ claims to be true.

JCM
October 17, 2022 7:11 pm

” this study helps us appreciate the knowledge accumulated in areas where scientists do converge on expert consensus – such as human impact on the global climate” p.10

JCM
Reply to  JCM
October 17, 2022 7:29 pm

it’s a peculiar line. For in physical sciences there does tend to be a convergence on a certain paradigm, and then that paradigm shifts from time to time, especially on complex subjects. A cherry picked example that may not be appropriate in the context of the article.

Alternatively, it could be viewed to suggest the convergence of consensus on such a complex subject as climate is an anomalous outlier, one which appears to be an unnatural occurrence in the context of the study.

Dave Fair
Reply to  JCM
October 17, 2022 10:56 pm

Remember Dr. Roger Pielke, Jr. asserted that politics must be kept out of any study of alternative studies to be valid. The entire UN IPCC CliSciFi effort is nothing but politics, and blatantly so.

Jim Gorman
Reply to  JCM
October 18, 2022 5:52 am

Part of the problem is in the mathematics used. If everyone used the same tools in the same fashion and order, the results should have been similar. The fact that a broad range of results occurred tells me the math used varied considerably. Why is that? I really don’t know, but that is what should be investigated.

A simple example is sampling: ensuring that samples are IID (independent and identically distributed). How many do that, rather than simply inputting the data and letting the software do its summary thing?

Jim Gorman
Reply to  Kip Hansen
October 18, 2022 8:14 am

This site discusses the many ways of sampling. https://www.scribbr.com/methodology/sampling-methods/#:~:text=There%20are%20two%20types%20of%20sampling%20methods%3A%201,other%20criteria%2C%20allowing%20you%20to%20easily%20collect%20data.

One can even make a decision whether you are dealing with a population or a sample.

So, yes, there are differences in how the data was analyzed. I guess my point was that if there are different results, then different choices of assumptions were made. If these assumptions were properly documented and math methods appropriately applied, the different results would be explained.

Too much of science ignores the assumptions behind the analysis especially when it comes to statistics.

Jim Gorman
Reply to  Kip Hansen
October 18, 2022 9:39 am

I am sure that is true. Too many learn a given method from a professor or mentor and never learn the assumptions that go along with that method! Many never learn there are all kinds of tests and assumptions that apply for using them. One big one is about stationary or non-stationary time series.

I did some analysis of July and August temps in six cities scattered across the U.S. several years ago. The Standard Deviations I was seeing in the different summaries were in the 2 – 4 °C range. That solidified my opinion that quoting temps to the hundredths or thousandths of a degree is simply insane. The confidence levels just preclude that precision. And that doesn’t even include measurement uncertainty.

I’ll be posting some of my findings in Geoff’s last thread.

metamars
Reply to  Kip Hansen
October 18, 2022 9:43 am

FWIW: I graduated college with a degree in physics and math. Pretty much everything was clear to me, with one exception: data analysis that we did for our physics labs.

Jim Gorman
Reply to  metamars
October 18, 2022 10:38 am

What wasn’t clear? How measurements could vary between trials? That is the very basis for the study of uncertainty in metrology!

JCM
Reply to  JCM
October 17, 2022 7:40 pm

splitting hairs a bit more – I for one think humans probably do have some impact on climates, just for different reasons than normally discussed. so the example is unclear to me.

JCM
Reply to  JCM
October 17, 2022 7:50 pm

The paranoid egoist part of me feels like they stuck it in just so so-called climate skeptics couldn’t get much mileage out of the article, but that’s probably stretching reality.

Redge
Reply to  JCM
October 17, 2022 10:26 pm

It looks like a throwaway line to ensure publication

ferdberple
Reply to  JCM
October 17, 2022 10:36 pm

Otherwise climate science might also suffer from uncertainties. Can’t publish npc findings.

MACK
October 17, 2022 7:19 pm

Rather than get “the best real experts together in a room and hashing these controversies”, it would be more honest for climate scientists to admit that their models and forecasts are so riddled with uncertainties and assumptions that no sensible policy advice can be produced.

Clyde Spencer
Reply to  MACK
October 17, 2022 9:32 pm

But then, the more outspoken public figures like Mann prefer to sue rather than engage in public debate, the likes of griff dive-bomb with anecdotal data they are unwilling to defend, and Mosher drives by and lobs fragmented sentences without capitalization or substantive support for his opinion. Stokes offers up sophistry, usually of the “look, a squirrel!” variety. So, how can there be dialogue unless the alarmists are willing to engage on a level playing field?

S Browne
October 17, 2022 7:20 pm

I think ‘science skeptic’ is a misleading phrase. Skepticism is not of science itself, but of unsound, overly broad, unjustified and simply false claims and conclusions made under the banner of science. All scientific claims initially should be met with skepticism until verified repeatedly, understood theoretically, and are successful in predicting.

Clyde Spencer
Reply to  S Browne
October 17, 2022 9:33 pm

I suspect that you would get agreement on your opinion from most self-described skeptics, but few alarmists.

DMA
October 17, 2022 7:31 pm

 “Let me state this as clearly as possible: Any finding that does not honestly include a frank discussion of the uncertainties involved in the study, beginning with the uncertainties of the raw data and then all the way through the uncertainties added by each step of data processing, is not worth the digital ink used to publish it.”
See Pat Frank’s frank discussion of uncertainties in GCMs

S Browne
October 17, 2022 7:33 pm

Please note the double-negative in the following statement from the article which I believe says the opposite of what the author intended or at least makes it very unclear.

And, that means, for you and I, that neither the many-analysts approach or the many-analysis-teams approach will not solve the Real World™ problem”

Huh?

AndyHce
Reply to  S Browne
October 18, 2022 1:22 am

With great certainty I state that the “will not solve” should be “will solve”.

dbidwell
Reply to  Kip Hansen
October 18, 2022 9:03 am

Proper English would have used “neither” and “nor” together in the sentence. Rewritten it should have been: ““And, that means, for you and I, that neither the many-analysts approach nor the many-analysis-teams approach will solve…” Clearly showing the “not” in the final portion of the sentence is not needed.

Dave Fair
Reply to  Kip Hansen
October 18, 2022 10:18 am

Kip, I understood the meaning very clearly; I don’t read with the objective of pointing out the irrelevant.

S Browne
October 17, 2022 7:43 pm

This from near the end of the article:

“I don’t have an off-the-cuff solution to the “ Three C’s” — the fact that the world is 1) Complicated, 2) Complex, and 3) Chaotic.”

What, pray, is the difference between ‘complicated’ and ‘complex’? I think most people would consider those words to be synonymous, as do the thesaurus and dictionary.

A many proof-reader approach would help before publishing articles.

Chris Hanley
Reply to  S Browne
October 17, 2022 8:52 pm

According to this site there is a shade of difference:
“Complex is used to refer to the level of components in a system. If a problem is complex, it means that it has many components. Complexity does not evoke difficulty.
On the other hand, complicated refers to a high level of difficulty. If a problem is complicated, there might be or might not be many parts but it will certainly take a lot of hard work to solve”.
I guess it may depend on the context.

Richard Page
Reply to  S Browne
October 18, 2022 6:11 am

It’s subtle but I would read the 2 words as having different meanings as well. Complex as in many different processes interacting with one another as part of the whole, and complicated as in difficult to perceive or understand, hard to grasp the individual parts or processes.

S Browne
October 17, 2022 7:55 pm

“Good science isn’t easy.”

Amen to that!

dk_
October 17, 2022 8:45 pm

“…to test an established social science hypothesis: that more immigration will reduce public support for government provision of social policies.”

So this is about social “science,” and not about observations of physical phenomena? It seems like a mathematical treatment of the feelings (from surveys) of one mythical entity (the public) about the relationship between another undefined social entity (government) and an unquantifiable function (social policy).

Perhaps this study says more about sociology as a science than it says about the scientific method?

If one presents the same data set to disparate groups of physicists or chemists should we expect the same sort of result?

Climate Science, if ever it is defined well enough to warrant the capitalization, may be perfectly scientific — right up until it predicts the unknowable future with arrogant certainty, attempts to influence for political or economic gain, or imposes arbitrary behavior on the unanointed.

Steve Reddish
Reply to  dk_
October 17, 2022 9:47 pm

dk_, you are making the point I was thinking when I made my 3rd post up above. Good to see somebody has the same thoughts.

dk_
Reply to  Steve Reddish
October 18, 2022 1:23 am

Steve, see Geoff Sherrington’s comment, below. I think we may be converging from different angles.
Perhaps I’m confused about the working definitions of science versus social engineering, but running a bunch of pseudo-statistical calculations around “data” from questionnaires and surveys seems a little more like advertising and polling to me: more suitable for evaluating propaganda. In the context of social engineering it is almost inevitable that different groups will produce biased results, and that results curve above looks somehow familiar to me from that arena, not science.
It is possible that the article, exposing fuzzy-science bias, is a wake-up for the better-natured social scientist/engineering crowd, but I’m not holding my breath. On past performance, many or most will ignore it and/or discredit the authors, publisher, and study.

AndyHce
Reply to  dk_
October 18, 2022 1:28 am

It is possible that, real world, “more immigration” has had some effect on “public support for government provision of social policies” and that therefore there is a correct answer. However, what that answer actually is will, as daily evidence displays, depend markedly on who is expressing the answer.

dk_
Reply to  AndyHce
October 18, 2022 1:00 pm

Or is the answer predicted by who is defining the terms? The article seems to me to show bias whenever the terms are arbitrary. Can any amount or degree of sophisticated maths be applied to opinion and achieve a scientific result?

Dave Fair
Reply to  dk_
October 18, 2022 10:24 am

This study is very applicable to science. Just review Stephen McIntyre and Ross McKitrick’s take-down of Michael Mann’s MBH 98&99 studies. It was data processing errors, misuse of statistics and modeling errors that brought them down.

dk_
Reply to  Dave Fair
October 18, 2022 1:13 pm

I think we agree that emotional manipulation of information removes any result from the realm of science and puts any claims into the realm of politics and social engineering. IMO McIntyre/McKitrick is a demonstration of that, just as Mann is a continuing horrid example of anti-science charlatanism.
If we agree that the study demonstrates that social science isn’t necessarily science, we are really on the same page. My words, if interpreted as meaning that the article has nothing to do with science, were poorly chosen.

Dave Fair
Reply to  dk_
October 18, 2022 4:48 pm

Social science isn’t. It is mental masturbation dressed up with sciency-sounding words. It reflects the ideology and politics of its practitioners. A lot of it is just made up out of whole cloth: no real data collection, analysis, nor modeling.

dk_
Reply to  Dave Fair
October 18, 2022 8:29 pm

Agreed, D.F. Key words: a lot. But not all. I think that there are good scientists and sometimes good practice in psychology, sociology, anthropology, and economics. But these fields are easily turned into anti-science political platforms for deeply biased opinions to be presented at the same or greater value than the sciences.

Clyde Spencer
October 17, 2022 9:14 pm

Kip, you have stepped into something that has been bothering me for some time. This blog is populated by what are probably the brightest and best educated commenters in the blogosphere. I’m of the opinion that there is a significant fraction of graduate engineers and geologists, most of whom graduated before the educational system was seriously corrupted by administrators more concerned about growing their student population than about maintaining standards.

Yet, it is common that not only can the alarmists and skeptics not agree, but even the skeptics can’t agree on interpretations and methodology. I can somewhat rationalize the tension between the alarmists and skeptics because of some of the things that T. C. Chamberlin warned about in his paper The Method of Multiple Working Hypotheses. But I’m dismayed at the inability of skeptics to present a more unified front.

I don’t know what the answer is, or if there even is one. I think that more careful attention to definitions would help. However, you have just provided evidence that suggests that modern science is broken.

Richard Greene
Reply to  Clyde Spencer
October 18, 2022 5:06 am

There are many questions where the right answer is “No one knows”. People with college degrees, especially advanced degrees, are very reluctant to say: “I don’t know” or “no one knows”.

Every time I read a science article, I ask myself if the author has convinced me that he (or she) knows what he is talking about. One clue is that the author analyzes the data he used and explains how it could be inaccurate or not useful for his conclusion, and why alternative sources of data were not used.
I want a science author to be skeptical about himself! Tell us where he could have gone wrong. Everyone makes mistakes.

I have an advanced degree, but I am an expert in “We don’t know that” ! My first step to becoming an expert in “We don’t know that” is to immediately assume predictions of the future will be wrong. As a result, much of modern climate science scaremongering is ignored.

Every single prediction of environmental doom since the 1960s has been wrong, yet we still read new scary predictions every week. In fact, CAGW is nothing more than a prediction. It is not reality. And Nut Zero is based on the prediction of CAGW. This world needs a lot more people who are willing to say: “We don’t know that”.

Dave Fair
Reply to  Kip Hansen
October 18, 2022 10:32 am

“We Don’t Really Know!” doesn’t get the grants from governments’ need for studies to support their pre-determined policies. Ideology and toadyism really matter in those arenas.

William D. Larson
Reply to  Richard Greene
October 18, 2022 5:25 pm

As I recall, Feynman once said (or wrote) that in every scientific paper the author should include, besides his own arguments for his conclusions, the best arguments AGAINST his conclusions. Myself, I have never seen any author do this. If anyone actually did this in climate science, then we should see a long list of (conscious) assumptions used and how one could argue against each one. Feynman, you gave it your best, but others are not up to your standard.

Matt Kiro
Reply to  Clyde Spencer
October 18, 2022 6:00 am

What I find at WUWT is that multiple processes from varying people in different fields lead to almost the same results.
There have been two or three ways I’ve seen that arrive at an ECS between 1.6 and 2.0, instead of the large uncertainty from the CMIP6 reports.

Willis and RickW both show that there are limiting factors to the temperature in the tropics because of the properties of water.

Many people explain how energy and heat move through the Earth’s atmosphere and oceans, which shows why the higher latitudes show a higher increase in average temperature.

All these examples come to very similar conclusions using different methods.
Being skeptical is a very important process of science. And if so many people here can provide work showing how the models are not correct based on physics, or math equations or just compiling data and showing it to us, it really is a unified front to the climate scaremongers, it is just a very wide front attacking from many different positions.

Tim Gorman
Reply to  Kip Hansen
October 18, 2022 10:36 am

“Does it mean that modern science is broken? I don’t think so.”

It is broken in the sense that far too many “believers” simply ignore the uncertainty of their results. They just blithely go down the road believing they have found the “true and only answer”.

It’s my opinion that far too many academics today have no “real world” experience of metrology. All measurements have two parts, the “stated value” and a “measurement uncertainty interval”. If you’ve never been caught up by the measurement uncertainty issue in the real world it is far too easy to dismiss it and just analyze the stated values – especially since that is exactly what every single basic college level statistics textbook I have purchased actually teaches. None of them provide data sets with “stated value +/- measurement uncertainty” examples. It’s all “stated value” only.

It’s not until you get caught with waves in the ceiling drywall because of measurement uncertainties or you wind up ruining a crankshaft because you ignored measurement uncertainty in ordering the rod bearings that you *really* understand that measurement uncertainty actually exists and has real world consequences that can’t be ignored.

Richard Greene
October 17, 2022 9:14 pm

Many climate “studies” are predictions of doom with no data
There are no data for the future climate
Predictions of environmental doom have been 100% wrong since the 1960s
So I have 100% certainty that the next prediction of doom will be wrong too.

Redge
Reply to  Richard Greene
October 17, 2022 10:29 pm

So I have 100% certainty that the next prediction of doom will be wrong too.

Error bars, Richard, error bars

🤣

Richard Page
Reply to  Redge
October 18, 2022 6:13 am

Ah yes; doom +/- doom!

Redge
Reply to  Richard Page
October 18, 2022 7:23 am

That’s better Richard

Old Man Winter
October 17, 2022 10:14 pm

Roger Pielke Jr.’s “recent presentation and follow-up Q&A” that Kip mentioned above is very informative and well worth watching, as he explains all of the nuances and details of the data he presents.

Redge
October 17, 2022 10:21 pm

“many-analysts/teams” will use slightly differing approaches, different statistical techniques and slightly different versions of the available data.

Isn’t this the reason scientists are supposed to publish the full method and data so others can follow their reasoning and check the results?

(Obviously not climate seancetists, who refuse to hand over data in case McIntyre finds something wrong with it.)

Richard Page
Reply to  Redge
October 18, 2022 6:15 am

It’s one of the reasons, yes – another is to check that there are no glaring errors in the data or analysis.

George Daddis
Reply to  Kip Hansen
October 18, 2022 7:37 am

The other element is beware of extrapolating from the exact conditions of the experiment.
Arrhenius came to some conclusions under a strict set of conditions.
But our atmosphere is not a closed system, the variables are not independent etc.
So Svante gave us a clue, but not the answer.

Tim Gorman
Reply to  Kip Hansen
October 18, 2022 10:43 am

In many studies it’s almost impossible to fully document all variables let alone duplicate them. In virology studies the lab animals should have the exact same genome if full duplication is to be expected. That’s almost an impossibility. Thus the need for addressing possible uncertainties in the results on at least a subjective basis. If this were adequately done then at least part of the “replication problems” of today would be solved.

ferdberple
October 17, 2022 10:31 pm

What the results show is that if you repeat a study enough times you will eventually get the answer you are looking for. At which point you throw away the other studies and publish the “correct” result.

Super computers allow climate models to automate this process.

Dave Fair
Reply to  ferdberple
October 18, 2022 12:06 pm

And the admitted practice of adjusting parameters to get an ECS that “looks about right.”

Gary Pearse
October 17, 2022 10:34 pm

Kip, we did manage to settle on scientific understandings that allow predictability in the hard sciences to the degree that we have built a marvelous intricate technological civilization.

Social sciences were almost totally corrupted by the left ~70 yrs ago and today it is even worse. The terms themselves were defined incorporating political bias (the ‘ills of society because of corporate greed’ sort of thing).

I wish they hadn’t chosen a loaded topic like the reaction of people to accelerated immigration. With the battery of computer statistical tools, the actual practice of p-hacking as almost acceptable practice, and ‘feelings’ about how outcomes should come out, it’s no surprise the range of conclusions is all over the map.

Medical research is full of bias, but at least the world does have good quality care available and enormous problems have been solved. It would have been better to choose a non-controversial medical science topic and not identify by name the condition or the medication used, just in case Trump happened to mention it.

In the social science of anthropogenic global warming, I have often observed that the findings of a study actually best support a conclusion different from that of the researcher.

Dave Fair
Reply to  Gary Pearse
October 17, 2022 11:08 pm

Yes, just compare the UN IPCC reports’ SPMs to the bodies of the report chapters.

Editor
October 17, 2022 11:07 pm

I have argued to a reputable scientific organisation that it should never seek to have an authoritative voice on science, because scientific knowledge changes with the publication of every new scientific paper. I was not shouted down. Actually, the reaction was rather quiet.

Geoff Sherrington
October 17, 2022 11:48 pm

Kip,
Thank you for showing this paper. I strongly agree with your comments such as “In Climate Science, we see the mis-guided belief that more processing – averaging, anomalies, kriging, smoothing, etc. — reduces uncertainty. The opposite is true: more processing increases uncertainties. Climate science does not even acknowledge the simplest type of uncertainty – original measurement uncertainty – but rather wishes it away.” And “Any finding that does not honestly include a frank discussion of the uncertainties involved in the study, beginning with the uncertainties of the raw data and then all the way through the uncertainties added by each step of data processing, is not worth the digital ink used to publish it.”
However, I disagree with your contention that “This is a separate problem from the differing biases affecting politically- and ideologically-sensitive subjects, the pressures in academia to find results in line with current consensuses in one’s field and the creeping disease of pal-review.”
Houston, we have one very big dominant problem. The people you are analysing are everyday hacks of limited intelligence who do not know about the finer points of proper hard science. There are no significant separate problems; there is this one big problem that overshadows all else. These hacks do not care about the various requirements for proper expression of uncertainty. How many times have you seen a climate paper with no references at all to errors and uncertainty? Close to 100%? Maybe 97%, to use a sciency number? These hacks accept pal review and current consensus because they are too lazy and ignorant to do otherwise.
Here is a quick comment or two on this paper, which is reluctant to find a cause of ignorance.
This paper is very much a German effort. Most authors from abroad have German names, e.g. the sole one from Australia is Max Grömping, USA has Jonathan Mijs and Amie Bostic and more, UK has Eike Mark Rinke, Kaspar Burger and others like Roxanne Connelly which is arguably non-Germanic (sarc). Germany, with places like Potsdam Institute with Hans Joachim Schellnhuber is central to the movement to exclude hydrocarbon fuels. Most major scientific journals are owned or controlled by Germans. The dreaded World Economic Forum has a German grand-daddy.
These observations might well mean nothing. However, a paper with a strong Germanic content of authors, using the same data to independently test the same prominent social science hypothesis, raises thoughts about the ability of Germans to be original creators of concepts as opposed to willing slaves of the orders from above. Are these authors drawn from one group of aligned thinkers, or is the authorship a loose group of fiercely independent researchers aiming dominantly for truths versus group beliefs?
I cannot fully answer this question, but I think it should be asked. The authors conclude inter alia “These results call for epistemic humility and clarity in reporting scientific findings.”
The paper, page 3, notes of its working teams “46% had a background in sociology, 25% in political science and the rest in economics, communication, interdisciplinary or methods-focused degree backgrounds. Eighty-three percent had experience teaching courses on data analysis and 70% had published at least one article or chapter on the substantive topic of the study or the usage of a relevant method.” I see no mention of specialism in hard science or statistics.
On page 4, “To remove potentially biasing incentives, all researchers from teams that completed the study were ensured co-authorship on the final paper regardless of their results.” This confirms thoughts of stupidity. Which genuine, skilled researcher would enlist in a study under any precondition or promise of recognition, before the quality of the final paper was known? This move to reduce bias used a bias. Ignorance assisted the ignorant?
Page 5. “Therefore, we standardized the teams’ results for each coefficient for stock and flow of immigration post hoc.” These teams participated on the promise of recognition and then stood by in obedience while their results were altered by the authors?
Sorry, I cannot go further. This paper is social junk, far away from the realities of hard science.
Footnote:
Five days ago, Tom Berger and I showed on WUWT a great deal of hard science analysis of some weaknesses with the primary temperature data used by the Australian BOM to present the public with a hypothesis that global warming is an existential crisis. It was part three of three parts for which I earlier invited BOM to participate. Crickets.
Geoff S

It doesnot add up
Reply to  Geoff Sherrington
October 18, 2022 5:38 am

Germanic thinkers include mathematicians such as Leibniz, Euler and Weierstrass, and scientists such as Einstein, Hertz and Helmholtz, as well as philosophical thinkers such as Kant, Wittgenstein and Hegel, and political thinkers such as Marx and Hitler. Those dealing in real science have made great advances for humanity, and even demonstrated the limits of such an approach through Gödel’s Theorem. The influence of the less scientific thinkers has been rather less benign.

Gary Pearse
Reply to  It doesnot add up
October 18, 2022 11:03 pm

Yeah but things were different during the Enlightenment! List your favorite German scientists and philosophers of the last 5 decades!

Geoff Sherrington
Reply to  Kip Hansen
October 18, 2022 1:42 pm

Kip,
Far from a rant. Germanism is a possible unstated exogenous variable. Overall German thought is different to that in many other countries, as displayed by Potsdam, Energiewende, immigration policy to name just some.
My hypothesis is that overall standards of authors in climate research are lower than other sectors of science. Ignorance is rife. You should not extend the findings of an ignorant paper to less ignorant sectors of science. You should ask for a retraction.
I gave some examples of how bad it was before I felt like a chunder and gave up.
It is not science, it is social babble, Geoff S.

H. D. Hoese
Reply to  Geoff Sherrington
October 18, 2022 8:04 am

True, but I know a lot about German mentality, my father couldn’t speak English when he went into the first grade, was in WWII, died having lost his German. His father was an excellent auto mechanic, especially in diagnosis and could read and write in two languages. I am currently reading a few very old (1855-1878) German marine science papers, good as they come. I counted 7 universities from the US including The University of Texas Rio Grande Valley. There are real statisticians that teach education and social science majors, not sure how widespread or distributed. Every US university I knew of had a wide range of members from dumb to brilliant with diverse parentage. Read their last paragraph.

Geoff Sherrington
Reply to  H. D. Hoese
October 18, 2022 9:45 pm

To expand, I am suggesting that ignorance of science is a contributing factor in this paper’s findings. Some 80% of these researchers are or have been teachers. Bad teaching helps to lead to scientific ignorance.
I don’t imagine teachers would identify themselves as a variable affecting the outcome of their paper.
Likewise, I don’t imagine Germans would identify themselves as a variable either. Yet, as I have noted already, there are German traits that are stronger than in other countries, like being the centre of the poor science promoting climate activism and doing ignorant national experiments like Energiewende.
It is an error to exclude such factors because they are thought to be impolite. Geoff S

Jim Gorman
Reply to  Geoff Sherrington
October 19, 2022 6:31 am

I am reminded of my time in college. There were several things the different teachers basically imparted as common practice or actual fact. Without the ability to actually replicate a lot of things, they were accepted as fact and later regurgitated. Oh my, was that not exactly the correct thing to do! Question everything, including your teachers, they are not infallible.

I can’t believe climate science has reached the point where so much is being taught on faith, without evidence. Kinda like teaching that gravity is an attractive force that acts at a distance, totally ignoring Einstein’s theories.

M Courtney
October 17, 2022 11:52 pm

So there is a wide range of interpretations of the same data, derived from the best practices of the researchers.
It would be interesting to see the results grouped by nation to see if these biases are cultural or random.
And by gender too.

Leo Smith
October 18, 2022 1:57 am

One simple explanation of the classical Philosophical Problem Of Induction is that there is intrinsically a one-to-many relationship between results and possible causes, ranging from the mundane to the absurd and supernatural.

The practical effect of this is that there can be no certainty in science. Whatever noumenal* entities we propose to explain observed effects can never be held to certainly exist. Not gravity, not the Gods in Asgard, not quantum fields.

What class of noumena you decide are believable depends on what you currently accept as normal and – if not real – at least well established and moderately useful. E.g. one would prefer explanations based on gravity and physics over the Norse Gods et al. Not because the latter are not useful, but because they are simply not part of one’s current worldview.

Oh, and in passing, modification of that worldview is the purpose of dialectical Marxism, so that one e.g. replaces simply being not very talented with the idea of heroic injured victimhood caused by other people doing bad stuff to you. Thus dissent, anger and hatred replace any desire for hard work, humility or self-improvement.

It is sad that academics have to do a massive study to reveal something any philosophy graduate and many other people who have bothered to read the subject could have told them.

*Kant’s word for that which is not phenomenal, but lies behind the world of phenomena, as it were, as an explanation of phenomena.

Pariah Dog
Reply to  Leo Smith
October 18, 2022 4:58 am

“Whatever noumenal* entities we propose to explain observed effects, can never be held to certainly exist. Not gravity, not the Gods in Asgard, not quantum fields.”

Not quite. These entities, to use your word, most certainly exist, because if they didn’t the observable effect would not be observed. Getting to the most accurate description of said entities is what science is all about, and while the theory of gravity or quantum fields may be incomplete, there is at least some evidence to support them.

Unlike ancient Norse gods.

Geoff Sherrington
Reply to  Pariah Dog
October 18, 2022 1:51 pm

PD,
A good response, to which I would add agreement that in science, nothing can ever be proved to be exactly known or the exact truth. That is one reason why uncertainty estimates are brothers of measurements.
However, we slide rule warriors have done some rather accurate jobs threading some thin needles, like navigating space vehicles onto comets or nudging asteroid orbits.
We have a value of pi to a million places or more, but we do not know how to test if this is the right answer. Sooner or later, practicality trumps aspiration for perfection.
Geoff S

Ed Zuiderwijk
October 18, 2022 2:11 am

‘Will not solve’ should be ‘will solve’?

Joel O'Bryan
October 18, 2022 2:47 am

This merely demonstrates that social science is not really science.
This wouldn’t happen in a real science like chemistry or physics.
And climate science is not a science either, but mostly opinionated guesses made to look like science.
Nobel laureate physicist Richard Feynman said exactly that over 45 years ago about the social sciences.

David Dibbell
October 18, 2022 5:43 am

Interesting article, Kip. Thank you. Much to think about, concerning the central finding.

But wait.

From the Breznau, et al paper, in Implications, p.15 of the pdf:

“Third, countering a defeatist view of the scientific enterprise, this study helps us appreciate the knowledge accumulated in areas where scientists do converge on expert consensus – such as human impact on the global climate or a notable increase in political polarization in the United States over the past decades.”

LOL.

Dave Fair
Reply to  Kip Hansen
October 18, 2022 12:21 pm

Yeah, Kip, it’s amazing the number of commenters that continue to mischaracterize the paper. It looks like that comes from various pet peeves.

Geoff Sherrington
Reply to  Kip Hansen
October 18, 2022 2:01 pm

Kip,
Maybe it is better to attribute that line to simple ignorance of the harm from throw-away lines. Few intelligent scientists would agree that there is convergence of expert consensus on climate, and few hard scientists would even regard that as a matter worth the worry. What matters is the result of the research, with its uncertainty. Nothing else, like consensus, matters much. Many oft-mentioned advances in science are demolitions of the prevailing consensus.
I read the paper, but I do not expect congrats. Any here who have commented without reading the paper are part of the ignorance problem. They deserve to be chastised. Geoff S

Geoff Sherrington
Reply to  Kip Hansen
October 18, 2022 9:50 pm

Kip,
Mine too. Sometimes that involves writing that has potential to upset. Realism trumps emotion in hard science. Cheers. Geoff S

Gunga Din
October 18, 2022 6:29 am

Wasn’t there a reference or two in the Climategate emails about doing things for “The Cause”?
Doesn’t sound like CliSi is unbiased research or even a hard science.

Paul Hurley (aka PaulH)
October 18, 2022 7:25 am

William Briggs has a good write-up about these findings:

All Those Warnings About Models Are True: Researchers Given Same Data Come To Huge Number Of Conflicting Findings

1. All models only say what they are told to say.

2. Science models are nothing but a list of premises, tacit and explicit, describing the uncertainty of some observable.

Steven M Mosher
October 18, 2022 9:41 am

kip

Every time someone in our community, the science skeptic or Realists® community, speaks out about uncertainty and how it affects peer-reviewed scientific results, they are immediately accused to being Science Deniers or of trying to undermine the entire field of Science. 

nope nope nope

Every time someone in our community, the science skeptic or irRealists® community, claims with certainty that

  1. it’s not warming
  2. there is no global temperature
  3. CO2 can’t possibly cause warming
  4. man’s emissions amount to nothing

that is when you DENY known science with CERTAINTY, we sometimes, not every time, call you deniers.

you could say…

  1. it might be warming
  2. i don’t understand global temperature
  3. CO2 could possibly cause warming
  4. man’s emissions could be dangerous

but you don’t; you are certain.

certain you are right.

Jim Gorman
Reply to  Steven M Mosher
October 18, 2022 10:35 am

Tell you what, show us the EVIDENCE THAT GLOBAL TEMPERATURE MEANS ANYTHING.

If the Earth’s global temp is increasing, then radiation from the earth to space should be decreasing; i.e., the trapped energy causing the heat (temperature) increase is energy that is retained. Let us know when that has been proven with EVIDENCE and not models.

Carlo, Monte
Reply to  Jim Gorman
October 18, 2022 4:18 pm

mosh don’t do evidence, only drive-bys in black SUVs.

Tim Gorman
Reply to  Steven M Mosher
October 18, 2022 11:03 am
  1. No one says it isn’t warming. They say that the *amount* of warming is hidden within the measurement uncertainties. You can’t even tell what temperatures are warming using a global average – is it because of Tmax increase? Tmin increase? A little of both?
  2. There *is* no global temperature. The global temperature is an average with such a wide uncertainty interval that it is actually impossible to quantify it. The satellite records are a good *metric* but even they don’t determine a global temperature with small enough uncertainty to discern actual changes.
  3. Very few people say CO2 can’t cause warming. What they say is that the amount of warming it causes is indistinguishable because of measurement uncertainty.
  4. Happer et al. have shown that additional CO2 has a diminishing return at best. No one that I’ve read has refuted that.

All you’ve done here is create a series of strawman arguments you can argue against. None of them hold water when reviewed with a modicum of critical thinking. Stop putting words in people’s mouths.

Clyde Spencer
Reply to  Tim Gorman
October 18, 2022 2:20 pm

I made the case that the Empirical Rule in statistics allows one to estimate the standard deviation of a population sample from the range. (https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/) Based on that, the standard deviation of the global temperature should be several tens of degrees. Yet, the typical person supporting the claim that many readings allow an increase in precision of the global mean temperature claims that we can calculate it to a precision of +/-0.005 deg C. That does not square with the estimate of several tens of degrees C for even a 2 sigma uncertainty. Yet, no one explained why the estimate is at least three orders of magnitude larger than the claimed precision obtained from averaging. They stick to their claim of the Central Limit Theorem justifying at least an order of magnitude greater precision for the average than the original temperature measurements.

Tim Gorman
Reply to  Clyde Spencer
October 18, 2022 2:54 pm

For those that don’t remember, the Empirical Rule basically says that most of the values in a distribution will be within 3 sigmas of the mean. That is very nearly the total range of values. For earth that range is about 150F, so the 3 sigma value would be about 75F. If you want to use 1 sigma as the uncertainty it would be 75/3 F = 25F. One can quibble about the coverage factor and such, but it still gives an uncertainty estimate that is several orders of magnitude greater than the differences the clisci alarmists attempt to hang their hats on.

I had never heard of the Empirical Rule till Clyde posted about it. I’m still amazed at the tidbits you throw out Clyde!
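To see the rule in action, here is a minimal Python sketch of the range-based estimate described above, dividing the full range by 6 on the usual assumption that ±3 sigma covers nearly all of a roughly normal distribution. The data and numbers below are invented for illustration, not taken from any temperature dataset.

```python
import numpy as np

def sd_from_range(values):
    # Empirical Rule: nearly all of a roughly normal distribution lies
    # within +/-3 sigma, so the full range spans about 6 sigma.
    return (np.max(values) - np.min(values)) / 6.0

# Hypothetical readings with a known spread of 25, for illustration only.
rng = np.random.default_rng(0)
temps = rng.normal(loc=57.0, scale=25.0, size=1_000)

print(f"range-based sigma estimate: {sd_from_range(temps):.1f}")
print(f"direct sample sigma:        {np.std(temps, ddof=1):.1f}")
```

With a sample of this size the two numbers agree to within roughly ten percent; the point is only that the range pins down the order of magnitude of sigma.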

Clyde Spencer
Reply to  Tim Gorman
October 18, 2022 5:56 pm

You might be even more amazed at the tidbits that are still in my refrigerator! 🙂

Jim Gorman
Reply to  Clyde Spencer
October 18, 2022 7:05 pm

They claim the random variables (station records/averages) are samples. Yet they then divide by √N to get an SEM. They don’t even realize the SD of the sample-means distribution IS THE SEM. Worse, they then think that defines the precision of the mean rather than the INTERVAL where the mean may lie.
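That distinction is easy to demonstrate with a short simulation: the SD of a distribution of sample means is the SEM, and it describes where the mean may lie, not the precision of individual values. Everything in the sketch below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented "readings" with a wide spread.
population = rng.normal(loc=15.0, scale=10.0, size=100_000)

n = 100  # readings per sample
# Build the distribution of sample means directly.
means = [rng.choice(population, n).mean() for _ in range(5_000)]

print(f"SD of individual readings:    {population.std(ddof=1):.2f}")
print(f"SD of the sample means (SEM): {np.std(means, ddof=1):.2f}")
print(f"SD / sqrt(n) for comparison:  {population.std(ddof=1) / np.sqrt(n):.2f}")
```

The last two printed numbers agree: dividing by √N and measuring the spread of the sample means give the same quantity, which is the point being made above.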

Gunga Din
Reply to  Steven M Mosher
October 18, 2022 3:17 pm

“DENY known science”
Why didn’t you just say “settled science” instead of “known science”?
Isn’t “settled science” what you really meant?
Or maybe you meant “political science”?
That’s where the “Green” comes from today.

Dave Fair
October 18, 2022 9:51 am

[N.B. I posted this before reading all of the comments.]

Only one team, right up front just looking at the data, said the data was insufficient to reach a conclusion. Through the process only about 13% of the teams ultimately concluded that the data was insufficient. About 87% said “What the hell, we’ll just forge ahead, damn rigorous data analysis.” Paleoclimatology and its darling, Michael Mann, come to mind first and foremost.

The results will be valuable if the government-funders require grant recipients, at a minimum, to preregister their studies and clearly show their data collection, data analyses, model selections and each and every decision made along the way. The funding entities should also fund independent parallel studies with the same objectives, having no connections to the original grant recipients nor their institutions and having not seen the other study. Additionally, because this is all publicly funded science, all work products and study decisionmaking should be posted online such that scientists and citizen-scientists can replicate (or not) studies on a massive, world-wide scale. No more pal review.

While the study is admirable, they just had to throw in climate change and U.S. politics at the end.

Steven M Mosher
October 18, 2022 9:56 am

kip

If you are interested in why scientists don’t agree, even on simple questions, then you absolutely must read this paper, right now.  Pre-print .pdf is here.
If it doesn’t change your understanding of the difficulties of doing good honest science, you probably need a brain transplant. …. Or at least a new advanced critical thinking skills course.

“This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases.”

most discussions? focus on systematic bias?

yes, including STRUCTURAL uncertainty: the bias that occurs because of analyst choices.

we deal with this all the time.

example: HadCRUT versus GISS versus BEST, all different methods/analyst choices.

example: RSS versus UAH.

skeptics only want to look at UAH and ignore RSS,

but RSS actually quantifies its structural uncertainty. UAH does not.

so i take it only skeptics with good brain transplants will look at RSS instead of UAH.
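The dataset comparisons listed here are, in effect, a crude structural-uncertainty estimate: run the same question through independently constructed datasets and look at the spread of answers. A hedged Python sketch of that idea, with invented trend values and generic dataset names:

```python
import numpy as np

# Hypothetical decadal trends (deg C/decade) from independently
# constructed datasets -- invented values, generic names.
trends = {"dataset_A": 0.17, "dataset_B": 0.19,
          "dataset_C": 0.13, "dataset_D": 0.21}

values = np.array(list(trends.values()))
print(f"central estimate:       {values.mean():.3f}")
# Spread across construction choices, over and above each
# dataset's own published error bars.
print(f"structural spread (SD): {values.std(ddof=1):.3f}")
```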

Gunga Din
Reply to  Kip Hansen
October 18, 2022 3:23 pm

HEY!
“Sure, there are silly teenagers here spouting nonsense in comments — just as bad and ill-informed as the Junior Climate Warriors commenting at ClimateCentral and the like.”
I’m not a teenager! 😎
(That was meant as a joke for those who didn’t realize.)

Clyde Spencer
Reply to  Kip Hansen
October 18, 2022 6:03 pm

Something to consider that plays into this are the unstated (and usually unexamined) assumptions that guide the researchers.

Gunga Din
Reply to  Clyde Spencer
October 19, 2022 1:33 pm

I think that the fact that there are too often unstated assumptions of the researchers is the point.
(In CliSci, not a lot of “Green” goes to those who don’t toe the green line.)

Jim Gorman
October 18, 2022 12:10 pm

I’ve read this study twice. My largest takeaway is that each and every decision made in analyzing data can send you off into a different conclusion, some of which simply must be incorrect. When I think of climate science, several things come to mind.

1. Are temp databases consistent in homogenizing, infilling, and weighting when creating their datasets?

2. Are adjustments to data ever legitimate for the sole purpose of “creating” LONG records?

3. Is station data considered a population or samples? Different inferences are made from each.

4. Are the station averages IID samples?

5. Why are variances and/or standard deviations NEVER quoted for global average temperatures? Are they not available?

Steven M Mosher
October 18, 2022 1:44 pm

In Climate Science, we see the mis-guided belief that more processing – averaging, anomalies, kriging, smoothing, etc. — reduces uncertainty. The opposite is true: more processing increases uncertainties. Climate science does not even acknowledge the simplest type of uncertainty – original measurement uncertainty – but rather wishes it away.

  1. citation, please.

there is NO documented belief that more processing reduces uncertainty.

  1. we average because a global average requires interpolation or spatial averaging.
  2. anomalies: all temperature is an anomaly.
  3. smoothing gives you estimates with uncertainty.

you don’t understand spatial stats, or structural uncertainty, or anything about uncertainty.
i’m certain of that.

Tim Gorman
Reply to  Steven M Mosher
October 18, 2022 2:39 pm

“citation, please”

You *have* to be kidding, right? The citations have been provided over and over at wust. For instance:

-1. John Taylor, “Introduction to Error Analysis”, Page 61, Eq. 3.18

let q = x/u, then ẟq/q = sqrt[ (ẟx/x)^2 + (ẟu/u)^2 ]

Since in an average u is a constant then ẟu = 0 and the uncertainty becomes

ẟq/q = ẟx/x

If x is a sum of measurements, x1 + x2 + x3 … + xn, then ẟx = sqrt[ ẟx1^2 + ẟx2^2 + … ẟxn^2]

-2. JCGM 100:2008

Eq. 1, Pg 20:

let Y = f(x1,x2,…,xn)

then, Eq 10, Pg 31

u_c^2(y) = Σ(∂f/∂x_i)^2 u^2(x_i) from i=1 to i = N

If x_N is the number of measurements then it is a constant and, once again, u^2(x_N) = 0 and you are left with the total uncertainty being the root-sum-square of the measurement uncertainties.

How many more citations would you like?

As has been pointed out over and over this leads to the uncertainty in a daily mid-range temperature being

u_midrange = sqrt[ u_tmax^2 + u_tmin^2 ]. If the individual temperature measurements have an uncertainty of +/- 0.5C then the daily mid-range temperature has an uncertainty of +/- 0.7C.

If you combine 30 of these daily mid-range values into a monthly average the total uncertainty becomes u_monthly = sqrt[ 30 * u_midrange^2] = u_midrange * sqrt(30) = +/- 3.8C.

Your uncertainty range keeps on growing as you average the monthly values to get an annual average and grows even more when you try to find a 10yr, 20yr, or 30yr annual average.
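As a concrete check, here is the arithmetic above in a few lines of Python: root-sum-square of the two daily readings, then root-sum-square across 30 days with no division by N. Whether that last step is the right propagation for a monthly *average* is exactly what is disputed in this thread; the sketch only reproduces the comment’s stated numbers under its assumed ±0.5C per-reading uncertainty.

```python
import math

u_single = 0.5  # assumed per-reading uncertainty, +/- C

# Daily mid-range (Tmax + Tmin)/2: root-sum-square of the two readings.
u_midrange = math.sqrt(u_single**2 + u_single**2)
print(f"u_midrange: +/- {u_midrange:.2f} C")  # ~0.71, rounded to 0.7 above

# Thirty daily values, propagated as in the comment:
# root-sum-square with no division by N (the contested step).
u_monthly = math.sqrt(30) * u_midrange
print(f"u_monthly:  +/- {u_monthly:.1f} C")   # ~3.9 (3.8 if 0.7 is used)
```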

we average because a global average requires interpolation or spatial averaging

Every time you average independent, random measurements of different things, each having uncertainty, the total uncertainty is at least the root-sum-square of the individual measurement uncertainties. This is most of the problem with the GAT. It becomes unusable and not fit for purpose because of the uncertainty associated with it.

“anomalies: all temperature is an anomaly.”

Anomalies do *NOT* reduce uncertainty. Anomalies are calculated using values that have uncertainty. A mid-range temperature has an uncertainty. The average subtracted from it has an uncertainty. It doesn’t matter if you add or subtract values with uncertainty, the total uncertainty is an addition. T_midrange – Tavg = T_anomaly. Then u_Tanomaly = sqrt[ (u_Tmidrange)^2 + (u_Tavg)^2 ]

UNCERTAINTY GROWS!

If you calculate 30 anomalies and then average them to get a monthly average anomaly the uncertainty GROWS EVEN MORE because you have added the uncertainty of the multi-year average to each daily uncertainty. Your uncertainty would be less if you just calculated the average T_midrange value and subtracted just that value from the multi-year average value. But the uncertainty would still grow in either case.

“smoothing gives you estimates with uncertainty.”


Smoothing doesn’t lessen uncertainty. You are still doing addition, subtraction, division, and multiplication to do the smoothing. The uncertainty grows with each processing step you take!

BOTTOM LINE: You can’t reduce the uncertainty of independent, random measurements of different things with processing. You can only increase uncertainty with processing of that data. Just as Kip said.

Tim Gorman
Reply to  Kip Hansen
October 20, 2022 8:02 am

done

Gunga Din
Reply to  Steven M Mosher
October 18, 2022 3:40 pm

Mosh,
I have a number of tables of the recorded (from NOAA via NWS) record highs and lows for my little spot on the globe. I started collecting some in 2007. (I used the WayBackMachine to get the oldest I could find, from 2002.)
Long story short, about 10% of the record highs and lows have been “adjusted”. Not broken: old records changed, with new “record” highs having lower values than the old ones, and vice versa.
(2012 had lots of “adjustments” between the April list and the June list!)
Whoever pushed the button on the blender, why should I trust them?
If the measured and recorded DATA wasn’t sound at the time … what do we have?
The new values are theoretically true?
A computer model is more trustworthy than an eyeball looking at the thermometer 50 to 80 years ago?

Geoff Sherrington
Reply to  Gunga Din
October 18, 2022 4:28 pm

Gunga,
Only last evening I saw a step-like time series change in 2012 for about a dozen Australian stations, in work from Chris Gillham at his waclimate blog looking at data during changes from LIG (liquid-in-glass) to electronic thermometry.
Are you aware of any conference or international event that might have led to this? I am not. The magnitude is about 0.1 to 0.3 deg C in Tmax for that step; it varies with station. Too big to ignore. Geoff S
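A step of that size is straightforward to screen for. Below is a toy single-breakpoint scan in Python: slide a candidate break through the series and take the largest before/after mean shift. It is a sketch with invented data, not the method actually used at the waclimate blog.

```python
import numpy as np

def step_at(series, k):
    # Mean shift if a single step is assumed at index k.
    return series[k:].mean() - series[:k].mean()

# Invented monthly Tmax anomalies with a deliberate +0.2 C step,
# roughly the magnitude described above.
rng = np.random.default_rng(2)
series = rng.normal(0.0, 0.3, 240)
series[120:] += 0.2

# Scan candidate breakpoints, keeping a margin at each end.
shifts = [abs(step_at(series, k)) for k in range(24, 216)]
k_best = 24 + int(np.argmax(shifts))
print(f"largest apparent step: {step_at(series, k_best):+.2f} C at index {k_best}")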

Gunga Din
Reply to  Geoff Sherrington
October 19, 2022 6:36 am

No, I’m not aware of any conferences or international event.
I did find this:
https://wattsupwiththat.com/2012/09/26/nasa-giss-caught-changing-past-data-again-violates-data-quality-act/
And I think that was James Hansen’s last or next to last year as director of NASA GISS.

Jim Gorman
Reply to  Gunga Din
October 19, 2022 6:54 am

Did you see what Mosher had to say on that thread?

“Since the past is an estimate …” and “… we don’t know the temperature of the past.” Apparently sheets from the past containing temperature recordings are unreadable, i.e., we don’t know what they were, so it is estimated.

I suspect he was talking about unmeasured locations, but that is a deflection as usual. Anthony was discussing recorded temps. Nothing new under the sun, same old, same old!

Gunga Din
Reply to  Jim Gorman
October 19, 2022 11:57 am

I didn’t look at the comments. My Google search was trying to find out what had changed at GISS in 2012.
I had a vague recollection that around that time James Hansen had been replaced by Gavin Schmidt.
Also that the program (program language?) used had been changed. That link is the best I found to sum it up.
I’m just “Mr. Layman”. But even a layman could notice back in 2007 with Al Gore promoting his “Inconvenient Truth” doctoredmentary that “something ain’t right here”.
That’s when I first copy/pasted the NWS record highs and lows for the day into Excel. Sort by date. Most of the record highs for the day then were before 1950 and most of the record lows were after 1950.
(I didn’t “find” WUWT till 2012.)

Gunga Din
Reply to  Jim Gorman
October 20, 2022 5:47 am

I just looked at the comments and found that I had put up a list of the changes made between the April 2012 list of record highs and the 2007 list of record highs.
(The formatting wasn’t preserved in the copy/paste.)

Clyde Spencer
Reply to  Steven M Mosher
October 18, 2022 6:22 pm

we average because a global average requires interpolation or spatial averaging

It should be intuitively obvious that the greater the distance an interpolated or extrapolated point is from a measured point, the greater the unreliability. Thus, many, if not most, gridding algorithms weight the points closest to the point to be interpolated more heavily than the distant ones.
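As a concrete illustration of that weighting idea, here is a minimal inverse-distance-weighting sketch in Python. The coordinates, temperatures and power parameter are invented for illustration; real gridding schemes are considerably more elaborate.

```python
import numpy as np

def idw(xy_known, values, xy_target, power=2.0):
    # Inverse-distance weighting: nearer stations get larger weights.
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    if np.any(d == 0):            # target sits exactly on a station
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

# Invented station coordinates (km) and temperatures (C).
stations = np.array([[0.0, 0.0], [50.0, 10.0], [200.0, 150.0]])
temps = np.array([14.2, 15.1, 11.7])

# The estimate at (20, 5) leans heavily on the two nearby stations.
print(f"interpolated: {idw(stations, temps, np.array([20.0, 5.0])):.2f} C")
```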

This is why you saying “Nope, nope, nope” as your ‘contribution’ to a discussion has little impact on the reader.

anomalies: all temperature is an anomaly.

All except the Kelvin scale. So, that means not all.

smoothing gives you estimates with uncertainty.

Another of your fragments that doesn’t explain anything.

Jim Gorman
Reply to  Steven M Mosher
October 18, 2022 6:28 pm

Mosher,

What is the Standard Deviation of the distribution used in computing the average?

Steven M Mosher
October 18, 2022 2:01 pm

kip

“It appears that the hope that the many-analysts/many-analysis-teams approaches would help resolve some of the tricky scientific questions of the day has been dashed”

dashed?

really?

you know, i went through your article and thought i would just comment on all the unqualified claims you made, especially those with no evidence.

heres the thing.

if you go looking for multiple interpretations of the same data you will find them. this is why theory is always underdetermined, this is why Q exists, why 9-11 truthers exist,
why biden can deny there is a recession.



Carlo, Monte
Reply to  Steven M Mosher
October 18, 2022 4:13 pm

mosh is a reality denier.

Jim Gorman
Reply to  Kip Hansen
October 18, 2022 5:37 pm

+100

Steven M Mosher
October 18, 2022 2:08 pm

kip pretends this study breaks ground!!!!!

This new study adds in another layer – the uncertainty caused by the multitude of tiny decisions made by researchers when analyzing a research question. 

look, we investigate this all the time:

https://eapsweb.mit.edu/news/2021/quantifying-parameter-and-structural-uncertainty-climate-modeling#:~:text=The%20structural%20uncertainty%20comes%20from,small%2Dscale%20processes%20entirely%20correctly.

https://journals.ametsoc.org/view/journals/bams/86/10/bams-86-10-1437.xml

Historically, meteorological observations have been made for operational forecasting rather than long-term monitoring purposes, so that there have been numerous changes in instrumentation and procedures. Hence to create climate quality datasets requires the identification, estimation, and removal of many nonclimatic biases from the historical data. Construction of a number of new tropospheric temperature climate datasets has highlighted previously unrecognized uncertainty in multidecadal temperature trends aloft. The choice of dataset can even change the sign of upper-air trends relative to those reported at the surface. So structural uncertainty introduced unintentionally through dataset construction choices is important 

sceptics want to IGNORE structural uncertainty

https://twitter.com/climateofgavin/status/880790799532871680

https://www.remss.com/missions/ssmi/uncertainty/

you won’t find this in Roy Spencer’s work

Geoff Sherrington
Reply to  Steven M Mosher
October 18, 2022 4:39 pm

Steven,
By your logic, homogenisation is not needed. The daily arithmetic average of many observations in a region will include a distributional scatter that shows in the uncertainty. The individual observations are valid and should not be altered. Abundant methods exist for using them without changing them, as I am sure your statistical learning agrees. You do not have to ignore structural uncertainty, you simply need to know how to process it, even if it gives an uncertainty answer big enough to challenge your preconceptions. Geoff S

Steven M Mosher
October 18, 2022 2:34 pm

kip

this paper is pretty interesting

let me share my experience.

in reviewing skeptical work on the temperature series i always take note of the decisions they make

  1. which dataset? sceptics always choose the smallest dataset. they filter it and sift it to pull records they like.

without exception, they never test the use of alternative data, like: does our result hold up using RSS or JMA or AIRS?

roll tape

https://wattsupwiththat.com/2022/10/08/tokyo-mean-september-temperatures-have-seen-no-warming-in-34-years-jma-data-show/

  1. one dataset
  2. one metric
  3. one time period
  4. one month!!!!!
  5. no uncertainties

you see, you don’t care about uncertainties or scientific purity, or you would have been all over that chart!!!

no skeptic here saw fit to ask about uncertainties?
why?

you liked the answer.

roll tape

https://wattsupwiththat.com/2022/10/05/the-new-pause-lengthens-to-8-years/

As always, the Pause is calculated as the longest period for which the least-squares linear-regression trend up to the most recent month for which the UAH global mean surface temperature anomaly is available is zero.

  1. ONE dataset: UAH.
  2. a dataset with no published uncertainties.
  3. ONE method for calculation of the pause.
  4. no investigation of structural uncertainties!!

pot, kettle, black much?
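For what it is worth, the definition quoted above is easy to state in code: the longest trailing span whose least-squares trend is not above zero. Here is a sketch of that definition in Python, with invented data; it is not the original author’s calculation.

```python
import numpy as np

def pause_length(anomalies):
    # Longest span ending at the latest month whose least-squares
    # linear trend is <= 0, per the quoted definition.
    n = len(anomalies)
    for start in range(n - 2):
        y = anomalies[start:]
        slope = np.polyfit(np.arange(len(y)), y, 1)[0]
        if slope <= 0:
            return len(y)   # earliest qualifying start = longest span
    return 0

# Invented anomalies: a warming ramp followed by a flat, noisy tail.
rng = np.random.default_rng(3)
series = np.concatenate([np.linspace(0.0, 0.4, 120),
                         rng.normal(0.4, 0.1, 96)])
print(f"pause length: {pause_length(series)} months")
```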

Carlo, Monte
Reply to  Steven M Mosher
October 18, 2022 4:13 pm

desperate much?

Tim Gorman
Reply to  Steven M Mosher
October 18, 2022 4:38 pm

What are you whining about? If the monthly uncertainty is +/- 3.8C then you will find measurements on that graph that exceed it. The range of values is from about 22C to 26C, or a 4C range – which exceeds the typical uncertainty interval.

There were only like 50 total comments on that thread. Are you expecting everyone to post on every thread? I didn’t even read it! It wasn’t addressing a GLOBAL average in any case, only a local average. The biggest takeaway is that there is an obvious natural variation from year to year at that location. It appears that well-mixed, global CO2 is not a significant control knob for that location.

It would be helpful if uncertainty bars were shown on the graph. It would be much simpler to tell if the temperature measurements are meaningful. Frankly, I can’t even tell from the site how the monthly averages were derived. That alone is an issue affecting uncertainty.

Geoff Sherrington
Reply to  Steven M Mosher
October 18, 2022 4:49 pm

Steven,
So you cannot define me as a skeptic, since “sceptics always choose the smallest dataset, they filter it”. Steven, read the essay Tom Berger and I wrote in WUWT 5 days ago. Do any of your criticisms of skeptics apply to our skeptical essay? I think not. Geoff S

Carlo, Monte
Reply to  Geoff Sherrington
October 18, 2022 5:58 pm

I lost count of how many strawmen he erected here.

Jim Gorman
Reply to  Steven M Mosher
October 18, 2022 5:33 pm

Mosher,

https://wattsupwiththat.com/2022/10/08/tokyo-mean-september-temperatures-have-seen-no-warming-in-34-years-jma-data-show/

You missed the whole point. I suspect on purpose.

If there is a location that has no warming, yet the average is supposed to be +1.5 degrees, guess what? You need a location that has a 3 degree rise! TELL US WHERE THAT IS!

You and others hide behind averages. An average implies that there are data both above and below the mean. That is what a standard deviation is for. Tell us where the cooler and warmer locations actually are! Tell us what the Variance or Standard Deviation of the Global Average Temperature is! Not the anomaly, the actual temperature. Have you even calculated the combined variance for the GAT? Has anyone?

How many papers and pronouncements do you need to be shown that claim everywhere is warming at or above the GAT anomaly? Have you ever listed locations that are both above and below the average, and what the values are?

You castigate sceptics for pointing out anomalous data. That is not a cogent argument at all. You need to answer why those are not anomalous. If you would publish a variance for the AVERAGES you would go a long way to resolving the issues.

Clyde Spencer
Reply to  Jim Gorman
October 18, 2022 6:51 pm

I’m still waiting for one of the ‘professionals’ to show the trends for the Köppen climate classes instead of a single number for the whole world.

Tim Gorman
Reply to  Clyde Spencer
October 19, 2022 6:04 am

I’d even settle for a breakdown by latitude bands!

Jim Gorman
Reply to  Steven M Mosher
October 18, 2022 6:22 pm

I read this study. Here is a pertinent section from it.

Like it or not, this paper bolsters a lot of what we sceptics have been saying about uncertainty, even on this site. The paper basically says there is NO transfer function that will allow changing historical temperature records while maintaining any level of certainty.

Holy crap, how many people here have said that homogenizing, infilling, and adjusting raise uncertainty. This is exactly what this paper says is the probable outcome.

Geoff Sherrington
Reply to  Jim Gorman
October 18, 2022 10:12 pm

Jim Gorman,
Too tiresome to check for specifics but they tested a couple of hundred variables.
Expertise was one. They report that even highly appreciated experts arrived at different findings. My analysis would say that those highly credentialed were not competent, as claimed. All but one of them must have failed this mini exam, possibly all.
I would suggest another variable: Have you as a participant ever read the Guide to the Expression of Uncertainty in Measurement? Yes to the left of me, no to the right of me while we count. Geoff S

Jim Gorman
Reply to  Geoff Sherrington
October 19, 2022 6:15 am

Geoff,

Do you mean “clowns to the left of me, jokers to the right”?

Also it seems I forgot to add the quote from the paper!

https://journals.ametsoc.org/view/journals/bams/86/10/bams-86-10-1437.xml

Clyde Spencer
Reply to  Steven M Mosher
October 18, 2022 6:37 pm

sceptics always

they never test …

You are unjustifiably using your Broad Brush. Upstream you asked for citations. How about you provide citations?

I’m not going to take the time to chase it down, but I think I remember reading the estimated uncertainties for the UAH data.

October 18, 2022 3:32 pm

Uncertainty is scientific, certainty is religious. Climate Alarmism illustrates this perfectly.

October 18, 2022 4:46 pm

Just loved it!
Truth to power, with a thermonuclear garnish.
Nice job

cerescokid
October 19, 2022 6:43 am

An enjoyable read. The study suggests it’s important that epistemic humility be a part of the process. I think everyone can agree with that attribute. Except Mosher. That always seems to be missing in his arsenal.

Not Chicken Little
October 19, 2022 12:22 pm

There is a fourth “C” not listed but which so many supposedly scientific studies and “scientists” lack – “Common Sense”…

Pflashgordon
October 19, 2022 1:19 pm

Kip, while I see validity to the arguments and findings, I suspect that a major factor is that the research problem tackled in this comparative test is basically a sociopolitical, psychological/humanities-type problem: trying to decipher people’s “feelings.” Pseudo-science from the outset. One would be hard pressed to believe that the data are representative of … what? The scale is far too large and uncontrolled as well. Too many degrees of freedom. Humans are complicated, aren’t they?

There is still plenty of room for disagreement with any research, but starting from the other end of the spectrum, they should try a carefully executed, narrow scope experimental design. Use a small-scale research project, controlling for more of the variables and using a predetermined statistical method and well-defined hypothesis. For example, employ a “simple” field plot study. (Not so simple as it might at first appear if one has ever done it).

Beginning small, from there I expect that as one increases the scale and complexity, rapid divergence among research teams would be revealed. By the time they are done, one would be forced to conclude that global-scale, multidimensional, multivariate and time-dependent inferences are just a shot in the dark. If “climate” researchers were honest, they would conclude (indeed already should be concluding) that there is no need for alarm, but with a tiny hint of doubt … “what if we are wrong?” Since in many of their eyes the consequences of being wrong could be very serious, they default to the side of extreme caution (translation: they tune their models toward high-side bounding estimates that overstate the likely impacts), then do not caution others as they draw inappropriate conclusions from what they are reading. That may have been reasonable 40 years ago, but ongoing measurement and research are showing equilibrium climate sensitivity is likely toward the low end of the range and that their models greatly exaggerate projected warming and the anthropogenic contribution compared to measured changes (even using flawed, biased “data sets”). Meanwhile, the rates of change are slow and steady and the magnitude is within the range of natural variability, meaning that this is NOT an emergency.

As a side note, I repeatedly read “research” summaries from various universities posted on WUWT, usually with ridiculous results and conclusions. Especially for those that are “impact of climate change on ___”, they are invariably amateurish, high-school-science-fair quality. No controls, no experimental design, no hypothesis, no predetermined test statistic … Simply bad “science” by weak principal investigators trying to get some grad students their degrees and more publications, while feeding like pigs from the climate funding trough.

October 19, 2022 5:59 pm

The so-called research question is hopelessly vague. This explains the results of the study.
