Guest Essay by Kip Hansen — 18 October 2022
Every time someone in our community, the science skeptic or Realists® community, speaks out about uncertainty and how it affects peer-reviewed scientific results, they are immediately accused of being Science Deniers or of trying to undermine the entire field of Science.
I have written again and again here about how the results of the majority of studies in climate science vastly underestimate the uncertainty of their results. Let me state this as clearly as possible: Any finding that does not honestly include a frank discussion of the uncertainties involved in the study, beginning with the uncertainties of the raw data and then all the way through the uncertainties added by each step of data processing, is not worth the digital ink used to publish it.
A new major multiple-research-group study, accepted and forthcoming in the Proceedings of the National Academy of Sciences, is set to shake up the research world. This paper, for once, is not written by John P.A. Ioannidis, of “Why Most Published Research Findings Are False” fame.
The paper is: “Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Idiosyncratic Uncertainty”. [ or as .pdf here ].
This is good science. This is how science should be done. And this is how science should be published.
First, who wrote this paper?
Nate Breznau et many many al. Breznau is at the University of Bremen. The co-author list runs to 165 researchers from 94 different academic institutions. The significance of this is that this is not the work of a single person or a single disgruntled research group.
What did they do?
The research question is this: “Will different researchers converge on similar findings when analyzing the same data?”
They did this:
“Seventy-three independent research teams used identical cross-country survey data to test an established social science hypothesis: that more immigration will reduce public support for government provision of social policies.”
What did they find?
“Instead of convergence, teams’ numerical results varied greatly, ranging from large negative to large positive effects of immigration on public support.”
Another way to look at this is to look at the actual numerical results produced by the various groups, asking the same question, using identical data:

The discussion section starts with the following:
“Discussion: Results from our controlled research design in a large-scale crowdsourced research effort involving 73 teams demonstrate that analyzing the same hypothesis with the same data can lead to substantial differences in statistical estimates and substantive conclusions. In fact, no two teams arrived at the same set of numerical results or took the same major decisions during data analysis.”
Want to know more?
If you really want to know why researchers who are asking the same question using the same data arrive at wildly different, and conflicting, answers, you will really have to read the paper.
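By way of illustration only — this is an invented toy of my own, not one of the paper's 73 analyses — here is one mechanism by which identical data can yield opposite conclusions: two defensible regression specifications, differing only in whether they control for a grouping variable, can flip the sign of the estimated effect. All numbers below are made up.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented data: two country groups with different welfare baselines.
n = 500
group = rng.integers(0, 2, n)                    # 0 = low-welfare, 1 = high-welfare countries
immigration = rng.normal(5 + 5 * group, 1.0, n)  # group 1 happens to have more immigration
# Within each group, support FALLS with immigration (true effect = -0.8),
# but group 1 has a much higher support baseline.
support = 2 + 10 * group - 0.8 * (immigration - (5 + 5 * group)) + rng.normal(0, 1.0, n)

# "Team 1": simple bivariate regression, no controls -> large POSITIVE slope
b_pooled = np.polyfit(immigration, support, 1)[0]

# "Team 2": control for group membership -> NEGATIVE slope, near -0.8
X = np.column_stack([immigration, group, np.ones(n)])
b_within = np.linalg.lstsq(X, support, rcond=None)[0][0]

print(b_pooled, b_within)   # opposite signs from the same data
```

Neither specification is "wrong" in any mechanically checkable sense; the two hypothetical teams simply made different, defensible modeling choices.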
How does this relate to The Many-Analysts Approach?
Last June, I wrote about an approach to scientific questions named The Many-Analysts Approach.
The Many-Analysts Approach was touted as:
“We argue that the current mode of scientific publication — which settles for a single analysis — entrenches ‘model myopia’, a limited consideration of statistical assumptions. That leads to overconfidence and poor predictions. …. To gauge the robustness of their conclusions, researchers should subject the data to multiple analyses; ideally, these would be carried out by one or more independent teams.”
This new paper, being discussed today, has this to say:
“Even highly skilled scientists motivated to come to accurate results varied tremendously in what they found when provided with the same data and hypothesis to test. The standard presentation and consumption of scientific results did not disclose the totality of research decisions in the research process. Our conclusion is that we have tapped into a hidden universe of idiosyncratic researcher variability.”
And, that means, for you and me, that neither the many-analysts approach nor the many-analysis-teams approach will [correction — deleting the word not] solve the Real World™ problem presented by the inherent uncertainties of the modern scientific research process – “many-analysts/teams” will use slightly differing approaches, different statistical techniques and slightly different versions of the available data. The teams make hundreds of tiny assumptions, mostly considering each as “best practices”. And because of these tiny differences, each team arrives at a perfectly defensible result, sure to pass peer-review, but each team arrives at different, even conflicting, answers to the same question asked of the same data.
This is the exact problem we see in CliSci every day. We see this problem in Covid stats, nutritional science, epidemiology of all types and many other fields. This is a separate problem from the differing biases affecting politically- and ideologically-sensitive subjects, the pressures in academia to find results in line with current consensuses in one’s field and the creeping disease of pal-review.
In Climate Science, we see the misguided belief that more processing – averaging, anomalies, kriging, smoothing, etc. — reduces uncertainty. The opposite is true: more processing increases uncertainties. Climate science does not even acknowledge the simplest type of uncertainty – original measurement uncertainty – but rather wishes it away.
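To make the arithmetic concrete, here is a minimal sketch with invented numbers (not any particular temperature product): averaging shrinks only the random component of measurement error, while a shared systematic component passes through untouched, no matter how many readings are averaged.

```python
import numpy as np

rng = np.random.default_rng(42)

true_value = 15.0      # the unknown true temperature, degrees C
n = 1000               # number of readings averaged together
random_sigma = 0.5     # per-reading random error
systematic = 0.3       # shared bias: calibration drift, siting, etc.

readings = true_value + rng.normal(0.0, random_sigma, n) + systematic
error_of_mean = readings.mean() - true_value

print(error_of_mean)               # ~0.3: the shared bias survives the averaging
print(random_sigma / np.sqrt(n))   # ~0.016: only the random part shrinks as 1/sqrt(n)
```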
Another approach sure to be suggested is that the divergent findings should now be subjected to averaging, or finding the mean — a sort of consensus — of the multitude of findings. The image of results shows this approach as the circle with 57.7% of the weighted distribution. This idea is no more valid than the averaging of chaotic model results as is done in Climate Science — in other words, worthless.
Pielke Jr. suggests in a recent presentation and follow-up Q&A with the National Association of Scholars that getting the best real experts together in a room and hashing these controversies out is probably the best approach. Pielke Jr. is an acknowledged fan of the approach used by the IPCC – but only so long as their findings are untouched by politicians. Despite that, I tend to agree that getting the best and most honest (no-dog-in-this-fight) scientists in a field, along with specialists in statistics and evaluation of programmatic mathematics, all in one virtual room with orders to review and hash out the biggest differences in findings might produce improved results.
Don’t Ask Me
I am not an active researcher. I don’t have an off-the-cuff solution to the “Three C’s” — the fact that the world is 1) Complicated, 2) Complex, and 3) Chaotic. Those three add to one another to create the uncertainty that is native to every problem. This new study adds in another layer – the uncertainty caused by the multitude of tiny decisions made by researchers when analyzing a research question.
It appears that the hope that the many-analysts/many-analysis-teams approaches would help resolve some of the tricky scientific questions of the day has been dashed. It also appears that when research teams that claim to be independent arrive at answers in suspiciously close agreement, we ought to be wary, not reassured.
# # # # #
Author’s Comment:
If you are interested in why scientists don’t agree, even on simple questions, then you absolutely must read this paper, right now. Pre-print .pdf is here.
If it doesn’t change your understanding of the difficulties of doing good honest science, you probably need a brain transplant. …. Or at least a new advanced critical thinking skills course.
As always, don’t take my word for any of this. Read the paper, and maybe go back and read my earlier piece on Many Analysts.
Good science isn’t easy. And as we ask harder and harder questions, it is not going to get any easier.
The easiest thing in the world is to make up new hypotheses that seem reasonable or to make pie-in-the-sky predictions for futures far beyond our own lifetimes. Popular Science magazine made a business-plan of that sort of thing. Today’s “theoretical physics” seems to make a game of it – who can come up with the craziest-yet-believable idea about “how things really are”.
Thanks for reading.
# # # # #
Absolutely, Kip! But in fact it goes even deeper than this: these traits result from climate science practitioners having not even Clue #1 about what measurement uncertainty is, and isn’t.
Averaging models that are all wrong is just the average of errors. If two wrongs do not make a right then multiple wrongs surely make things worse.
wrong.
lets suppose you have 100 weather models.
25 show the hurricane landing in the panhandle of florida
25 show fort myers
50 show tampa.
you have to preposition repair trucks — desantis did this
WHERE?
https://www.firstcoastnews.com/article/weather/hurricane/what-are-spaghetti-plots/77-842923b7-6ac8-4e00-ab5d-3d83b19e3fd6
if past experience says
“the average of models gives you the best prediction”
what do you use?
uniformity principle says what
Gee. That is disappointing.
One would expect, of all people, that Mosher would understand the difference between a mathematical conclusion and a “best guess of where we situate the trucks that we will almost certainly end up moving again anyway…”
Mosher ==> Gads, you realize that Hurricane Landing models converge as time goes by……besides, you don’t position repair trucks where the hurricane is expected to hit — the most important thing is NOT to position them where the most damage will be. For hurricanes, “past experience” does not say
“the average of models gives you the best prediction”…until nearly the last minute…maybe hours 36/24 or less.
The National Hurricane Center already shows where the “average” of models predicts….but 96 to 72 hours out, they are NEVER right. (maybe by accident once). The path and landfall point shift and change. Accuracy at 72 hours can be 200 miles, decreasing through time to 50 miles and 25…..
“The average of models gives you the best prediction”
Bull puckey!
Would you fly to Mars when all the trajectories were calculated by averaging multiple “models” together to get the one in use?
How about flying in a plane where three models, none of which are individually correct, are averaged to find the flight envelope parameters? Would you volunteer to be the test pilot on the initial flight?
That whole assumption is scientifically preposterous! Only someone with no responsibility or accountability would propose that as an adequate solution to a problem.
Think about what you are saying and its effect on people! Do you think waiting until 2 – 3 hours before landfall is enough time to get to the lumberyard, buy plywood, nail it up, pack your family and belongings, and hit the road? Why do you think people all up and down the coast were preparing days before landfall? Because NOT ONE MODEL nor their average could pinpoint where it was going to go. Uniformity principle my butt!
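For the narrow arithmetic point that opened this sub-thread, a minimal sketch with invented model outputs: by linearity of the mean, the error of an ensemble average is exactly the average of the individual errors. Offsetting errors cancel; a bias shared by every model survives the averaging untouched.

```python
import numpy as np

truth = 1.0
models = np.array([1.8, 2.1, 1.5, 2.4, 1.9])   # invented predictions, all biased high

print(models.mean() - truth)      # 0.94: error of the ensemble average
print((models - truth).mean())    # 0.94: average of the individual errors -- identical
```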
I may sound like a broken record, but when are you going to quote some variance computation values for the Global Average Temperature data to go along with the averages? Are they too hard to calculate?
Two wrongs don’t make a right, but three rights do make a left…
One of the most revealing moments of my undergraduate studies was determining the magnitude of error in one of my labs. As many of you know, I had to take the error in each measurement and then propagate it through the calculations, multiplying where the measurements were multiplied. I did not like the result! I could do better than that! But it was reality. Putting error bars on the graphs of other labs was a visual example. I don’t think these climate scientists like error bars….
oeman ==> CliSci, and I hate to say it, fakes the error bars even when they show any. They MUST know that when averaging results, you do not divide the error by the number of measurements along with the total.
“They MUST know that when averaging results, you do not divide the error by the number of measurements along with the total.”
How would most of them come to know this? They are *never* taught it today in any undergraduate or graduate course. They can only learn it through direct experience, and how many of them have that direct experience on which to base their judgement about results?
*I* only learned it in my EE and Physics labs. And even then it had to be pointed out by the lab teachers how to discern it. When you put a circuit together using 10% tolerance components it’s easy to get results that don’t match calculations. If you are never forced to go back and figure out why it happens you’ll never learn about uncertainty!
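For readers who never took such a lab, here is a minimal sketch of the propagation exercise described above, with invented electrical measurements: for a product of independently measured quantities, the relative uncertainties combine in quadrature, so the derived result is never relatively tighter than its worst input.

```python
import math

# Invented lab measurement: power P = V * I from two independent readings
V, u_V = 12.0, 0.3   # volts +/- absolute uncertainty (2.5% relative)
I, u_I = 2.0, 0.1    # amps  +/- absolute uncertainty (5.0% relative)

P = V * I
# For a product of independent quantities, relative uncertainties add in quadrature:
rel_u = math.sqrt((u_V / V) ** 2 + (u_I / I) ** 2)

print(f"P = {P:.1f} W +/- {P * rel_u:.1f} W ({100 * rel_u:.1f}% relative)")
# -> P = 24.0 W +/- 1.3 W (5.6% relative): worse than either input alone
```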
tim, youve proven over and over again you dont know what a spatial average is, how its estimated and how total uncertainty is estimate
Clown.
How do you spatially average temperatures? Temperatures depend on all kinds of factors including humidity, terrain, geography, elevation, wind, land use under the measurement station, etc. These can vary greatly in just a few miles. Unless all of the various factors can be quantified and included in any function trying to relate the temperature between two locations then no amount of spatial averaging can be anything except totally uncertain!
You’ve been asked this before and refused to address the issue. How do you spatially average the temperatures shown on the attached weather map for 6:30pm, 10-18-22? Temperatures vary as much as 8F over a distance of 20 miles.
The best you could do would be a straight interpolation for a point between locations and there is simply no guarantee that such a procedure would provide an accurate value at all!
I understand spatial averaging just fine. It can work well for things that are not variable over time, such as for underground deposits of minerals. It doesn’t work for crap for something like surface temperatures where so many different variable factors are in play.
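Purely for illustration, here is the “straight interpolation” case in its simplest common form, inverse-distance weighting, with invented station readings echoing the 8 F / 20 miles example above. The mechanics are trivial; the point is that nothing in the formula knows about the terrain, elevation, humidity, or land use between the stations.

```python
import numpy as np

# Invented stations: (x_km, y_km, temp_F); 8 F difference over ~20 miles
stations = np.array([
    [ 0.0,  0.0, 71.0],
    [32.0,  0.0, 63.0],   # roughly 20 miles east
    [16.0, 24.0, 67.0],
])

def idw(point, stations, power=2):
    """Inverse-distance-weighted temperature estimate at `point`."""
    d = np.hypot(stations[:, 0] - point[0], stations[:, 1] - point[1])
    w = 1.0 / d ** power
    return np.sum(w * stations[:, 2]) / np.sum(w)

# A smooth blend of the three readings -- but nothing here accounts for the
# ridge, lake breeze, or land-use change that may sit between the stations:
print(idw((16.0, 8.0), stations))
```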
do you have a specific case of libel to make or do you just insult everyone equally
That’s one of the reasons I believe Dr. Frank’s comments on error. This is part of his work in helping scientists design experiments to minimize measurement errors. Many, many scientists and mathematicians have no direct practical physical experience in using measurements to build things. Master machinists, auto mechanics, and carpenters learn and use metrology concepts every day but don’t know it. Ask a finish carpenter why he spent $1500 or more on a top-of-the-line miter saw. And hundreds on a miter gauge. He’ll tell you to go look at my joints. How many scientists and mathematicians use a tool like this every day?
https://www.amazon.com/Incra-MITER1000SE-Miter-Special-Telescoping/dp/B0007UQ2EQ/ref=asc_df_B0007UQ2EQ?tag=bingshoppinga-20&linkCode=df0&hvadid=79920803409825&hvnetw=o&hvqmt=e&hvbmt=be&hvdev=m&hvlocint=&hvlocphy=&hvtargid=pla-4583520382642971&psc=1
Or this one?
https://www.festoolcanada.com/products/sawing/sliding-compound-miter-saws/575306—ks-120-reb-usa#Overview
if frank were right he wouldnt bump his ass when he jumps
Are you this nasty in RL?
I’m sure he loves you too.
go through past posts here and note the lack of error bars
find 1 here
https://wattsupwiththat.com/2022/10/16/solar-sensitivity/
pot kettle black
I teach law graduates court practice and procedure (the skill of advocacy). Last Tuesday each had to argue in class that their client would likely suffer ‘extreme hardship’ based on a given set of facts. I stopped one student when they identified one reasonably expected event giving rise to another reasonably expected event which (they argued) would likely cause extreme hardship.
I explained that if the first on its own likely caused extreme hardship then they might have an argument. But 2 uncertainties combined could not logically cause extreme hardship (much lowered possibility of that consequence). The student immediately understood and reshaped their argument.
The principles in logic and uncertainties when reaching conclusions apply in the practice of law as well as science. (well at least in the competent practice of law!)
these traits result from climate science practitioners having not even Clue #1 about what measurement uncertainty is
another certainty espoused by skeptics.
i wish i had a guage block for every sceptic who was certain about measurement uncertainty.
Another content-free mosh drive-by.
guage
noun
Common misspelling of gauge.
I’m impressed that you have a clue what gauge blocks even are. Tell us what YOU have used them for! Or, did you just look up what they are?
They do have a purpose when measuring rotating objects. Any idea what that might be?
Climate scientists don’t understand statistics. In fact most scientists don’t understand statistics. I’ve personally had to correct pretty basic statistical errors made by eminent professors.
This is one of the biggest problems with the peer-review system, because the peer reviewers don’t understand statistics either!!!
This is a huge issue in climate science.
McIntyre of Climate Audit often mentioned that EVERY scientific paper using “numbers” needed a statistician coauthor.
Do ANY climate “scientists” follow that recommendation? NO.
The study and data given to the teams concerned a question involving people. This is the reason for so many different conclusions. People cannot be put into boxes.
Steve ==> The analyst-teams that did the study reported on here did not interact with people, only with numerical data. They didn’t put any people anywhere — no less in boxes.
The question involved data about different peoples, both the immigrants, and the receiving nations. Thus no common response should be expected. All the different peoples involved could not resolve to one result, or box.
But then, that’s the problem.
With the same data we still don’t know, or can reliably forecast, what happens to the economy if immigration is increased.
At least reliably from the same data.
Data is data. Nobody is being put into a box.
That individuals can choose to interpret the data in front of them differently and hence come to different results is the whole point of the experiment.
I suppose the question is whether personal biases are influencing the conclusions that they come to.
If personal biases were in play, then it can be argued the same applies to “Climate studies”.
One last try:
The question actually put to the teams of analysts was whether:
“greater immigration reduces support for social policies among the public.”
Wouldn’t the data be different for Ukraine immigration into the US than for US immigration into India?
Just a random example to point out that the answer would be different for every case studied, so the data probably wouldn’t support any particular conclusion overall.
Of course, the question that follows is “How can some analyst teams conclude yes, while others conclude no? Shouldn’t every team conclude “Maybe”?
If that was your point, I agree. What I don’t understand is the objections made to my point that there probably isn’t a resolvable yes or no answer to the original question that was put to the analyst teams.
I think the point is that by giving every team the exact same data, it didn’t matter whether it was about Ukraine immigration into the US or US immigration into India. It was just data. There is an expectation in much of science that the same data really should lead to the same results. The scientific reality is that “we” get these results and “they” get those results, so “they” are wrong.
I don’t know what the data here is but it seems very unlikely that it is in any way similar to the data from observing galaxies far far away or running a fluid and gas mixture through a complex pipe arrangement. If it is instead about people’s responses to events involving other people and political processes, rather different kinds of mental processes are likely to be going on inside the heads of anyone trying to analyze the data.
Certainly there can be multiple biases about what to expect or how to approach for both kinds of data but the world views of those trying to make sense of the data is likely to be a much greater factor in the later case.
With hard data about inanimate physical processes, the analysts would have to have considerable background knowledge about the field to even begin an analysis. This would likely direct their reasoning in rigorous ways, correctly or not.
With data about people’s reactions to social and political events, almost everyone is likely to have their own opinion to begin with and will tend to relate the question to a wide variety of their own ideas about persons, societies, governments, and many other social organizations.
Which I never tried to counter. My point has always been that the data should have been inconclusive due to the uniqueness of each case, so no team should have had grounds for a solid yes or no. My position actually supports the position of others here that something is wrong with the way several of those teams were doing their analysis. (57.7 % of the teams agreed with me)
So, why the downvotes, I ask.
Why the downvotes? Because data is data. It really doesn’t matter if the data was made up out of whole cloth. You would expect that similar data would provide similar study outputs. When the outputs vary so greatly, how do you assign a weighting to each output to determine which output is the “true value” and which ones are not?
Kip’s main point, and the point of the study, was to show that uncertainty doesn’t necessarily decrease when different analysis tools and processes produce different results. Yet there is a growing assumption among many in the scientific community that study outputs are 100% certain – e.g. climate model outputs.
For climate studies the uncertainty associated with any single study should be *at least* the spread of all model outputs since there is no way to identify the actual “true value” when so many of them vary so much from actual observations.
Perhaps I’m being misunderstood because my extremely poor internet has caused me to make overly concise posts. My point never was that there wasn’t a problem with the conclusions the teams produced. Nor was I missing the point about lack of uncertainty consideration. My point was that any team producing either a yes or a no, was forcing a conclusion that wasn’t supported by the data. Both sides were wrong because “All the different peoples involved could not resolve to one result, or box”. (Actually, the majority of the teams came to the same conclusion as I did.) The point I was making was that too often, conclusions are totally unwarranted by the evidence (tree rings?).
But the data doesn’t answer the question. The question concerns the future, thus the scientists have to make assumptions about when in the future and how much greater immigration.
The “scientists” make assumptions, THEN analyze the data to fit their assumptions. So you get a wide range of “answers” due to the wide range of assumptions.
You know, just like climate “scientists”.
You have identified the problem with any social “science” analysis – but missed the point of this study.
The researchers all had the SAME data set. The wildly different results are not from differences in the data, but from the differences in the analyses.
This is the problem with ANY multivariate analysis. Depending on how you slice, aggregate, etc., you can come up with no significant correlations (as most of the researchers here did), spuriously significant correlations, or you MIGHT find a real correlation. Which one is correct cannot be determined.
Climate “science,” as practiced by the “consensus,” is an example of the problem that doesn’t involve data from people, but physical measurements. They find a correlation between just two variables – CO2 level and temperature. Which, as posts here have shown over and over again, is a correlation that does exist – but is insignificant when more variables are considered.
Please read my response to Tim Gorman, just above.
I agree, Steve. Human data are not “just data.” Research teams know that to be a fact. I will practically guarantee that they did not simply use the raw numbers, state a simple hypothesis, and select a test statistic. Given this nasty problem and questionable data set, they would immediately try to tease out some means to control for the many variables. That would involve everything from “what was asked?”, “to whom was it asked?”, to expert opinion to citing other research findings on related topics or variables. The scale is too large, with too many uncontrolled degrees of freedom.
This is not a basic high school statistics problem from the text book.
Mark,
As you so rightly point out, data is data. The differences shown here are a result of the process of analyzing that data, not how the data was initially gathered or what the population was that generated the data.
If you will, each analysis represents a model. Each model is different in its choice of the tools to use in analyzing the data and in the order in which the tools are used.
It’s almost like a Monte Carlo study in which initial conditions are varied to see the effects on the output. Only in this case it’s not the initial conditions that are varied but the analysis process itself.
This is so similar to the “model” approach in the clisci field it is amazing. Kip really did his research and critical thinking on this subject. He is to be commended for this.
The clisci models vary all over the map and are mostly wrong when validated against actual observation data. As someone else pointed out, using the average of the ensemble is just finding the average error of the ensemble members! If the models are wrong the average will be wrong as well!
Somewhere in one of the threads on wuwt long ago, someone pointed out that many of the climate models give different results if the order of calculation inside the models is changed. That’s exactly like what this study is highlighting. When the order of processing changes the results, then which order of processing is correct? What is the uncertainty associated with each result? It’s like measuring the same thing using a different tool each time. There is no guarantee that measurement error will cancel in such a situation to allow a determination of a “true value”. Some level of uncertainty will remain and is probably closely related to the variance of the data set.
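A toy illustration (mine, not from the study) of the order-of-processing point: two equally “best practice” pipelines — trim outliers before fitting a trend, or fit first and trim by residuals — give different answers on identical invented data. The gap is small in this toy, but each such fork is one of the hundreds of unexamined choices the study describes.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(100, dtype=float)
y = 0.2 * x + rng.normal(0, 1, 100)   # invented series with a known trend
y[[10, 40, 80]] += 8.0                # a few spikes in the record

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

# Pipeline A: drop "outliers" more than 2 sigma from the global mean, then fit
keep_a = np.abs(y - y.mean()) < 2 * y.std()
print(slope(x[keep_a], y[keep_a]))

# Pipeline B: fit first, drop points more than 2 sigma from the fit, refit
resid = y - np.polyval(np.polyfit(x, y, 1), x)
keep_b = np.abs(resid) < 2 * resid.std()
print(slope(x[keep_b], y[keep_b]))
# Both orderings are defensible "best practice"; the two pipelines trim
# different points and so report different trend estimates.
```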
Individuals can, indeed, be unpredictable. However, with humans being social herd animals, our group behaviour tends to be a little bit more predictable, on average – fewer extremes in behavioural patterns and fewer individual outliers. These research groups are looking at group behaviour and psychology, not individuals.
I would rather see this type of test run on something physical. Then, the test will be about which teams get true results. Then we might be able to evaluate techniques and get people to realize uncertainty matters.
Steve ==> Read the study — the people and question are not that important. One question, one data set. What the data set represents is not important.
In the case of humans, individual differences do matter, in both the subject bodies, and the analytical teams. The data set may not even contain any commonalities to be harvested.
No not really. There is a world of difference between group behaviour and the behaviour of a lone individual.
Cultures are individualistic in that each culture responds differently to interactions with other cultures. Which is what this survey tried to predict. The particular cultures involved can determine whether the immigrants assimilate or form enclaves. The data on compatibility between cultures is as varied as compatibility between individuals. And just as unpredictable.
Climate “science” is just that type of test. With the same results.
define .. true
Comment withdrawn
Hi Mark. I always enjoy your responses, so I would like to know – Are you withdrawing your 7:07 comment, or some other comment?
These are spectacular results and need to be considered, but as the subject is a social science hypothesis with a fair degree of emotion, could it not be that at least some of the results reflect inbuilt unconscious biases towards one end or the other? To oversimplify, left-leaning researchers might already favour acceptance of immigration, right leaners may favour the opposite. Certainly the uncertainty in, say, historical temperature data is one that should be examined, but again, so-called deniers will favour some results over those uncertainties, and alarmists others?
Howard ==> The research team looked at things that might have affected the results from the various teams….that is all in the details of the study. Including pre-existing biases on the question itself.
+42X42^42! People, read (and attempt to understand) the study before commenting.
That is possible; however, the fact that different groups of researchers would be subject to this kind of bias, even unconscious bias, is the whole purpose of this study.
Too many people claim that scientists are completely free of any kind of bias when reviewing data.
Yet, how often do we hear the description of a scientist as being a “disinterested observer?”
Is this a Freudian Slip? 🙂
Thanks, Kip, most fascinating.
w.
w. ==> Yeah, I thought so too. The original team, and all the contributing teams, did a terrific job.
Certainly is, Willis.
But is this an exception to what I suspect is the general rule, that the number of authors of a paper is inversely proportional to the value of the paper?
Hmmmmm
You can sort of see it as 73 papers each with a modest number of authors.
Martin and Mike ==> If one looks at the original paper, you see that 3 guys designed the overall project and enlisted a lot of teams around the world, with most teams consisting of more than one researcher. Everyone who contributed from all the teams was given co-author status.
So yes, 73 team analyses rolled into one investigation of the results of those analyses. But everyone gets credit for contributing.
Social sciences aside, what strikes me is that credible researchers are given the same data and the same hypothesis to work with and do not reach a consensus. To me that invalidates the 97% consensus claptrap. The 97% consensus on global warming is every bit as believable as the Soviet leaders winning election by a 99% margin. Neither is believable.
It’s never clearly defined what exactly the 97% agree on. I believe the agreement is only that the increase of carbon dioxide causes some global warming. Therefore, as a luke-warmist, I and all other luke-warmists are members of the 97%. Only those expecting absolutely no warming from additional greenhouse gases are in the 3%. Thus, we luke-warmists who expect minor beneficial warming this century, are in the same group with those worried that warming is an existential threat. The major media present global warming as a serious problem, with 97% consensus, but we luke-warmists never signed on to the “serious problem” part.
Yes, like many things in CliSci summary and reporting, it is the INTERPRETATION of the results by someone else that most of the public see.
Examples are BHO’s statement that the Doran and Zimmerman study proved 97% of scientists said Global Warming was dangerous. The study said no such thing.
Of course the IPCC does the same misinformation/misinterpretation with their Summary for Policy Makers.
What you say is true; I would also be included in the 97% using their criteria. That is partly my point: they cherry-picked which studies to include and cloud what it means to agree. Agree to what? We don’t know. This study provided the same data and the same hypothesis to all participants and the results don’t come close to 97% agreement. I don’t believe CO2 is the control knob for earth’s climate and I don’t believe earth’s climate will reach a tipping point. Both those things must be true for the global warming alarmists’ claims to be true.
” this study helps us appreciate the knowledge accumulated in areas where scientists do converge on expert consensus – such as human impact on the global climate” p.10
It’s a peculiar line. For in the physical sciences there does tend to be convergence on a certain paradigm, and then that paradigm shifts from time to time, especially on complex subjects. A cherry-picked example that may not be appropriate in the context of the article.
Alternatively, it could be viewed to suggest the convergence of consensus on such a complex subject as climate is an anomalous outlier, one which appears to be an unnatural occurrence in the context of the study.
Remember Dr. Roger Pielke, Jr. asserted that politics must be kept out of any study of alternative studies to be valid. The entire UN IPCC CliSciFi effort is nothing but politics, and blatantly so.
Part of the problem is in the mathematics used. If everyone used the same tools in the same fashion and order, the results should have been similar. The fact that a broad range of results occurred tells me the math used varied considerably. Why is that? I really don’t know, but that is what should be investigated.
A simple tool of sampling is ensuring that samples are IID — independent and identically distributed. How many do that, rather than simply inputting the data and letting the software do its summary thing?
Jim ==> Part of the lesson here is that there are many many ways to evaluate a data set — even the exact same data set being asked the exact same question.
Of course the maths and statistical approaches are different — each team has used what it thought to be the best way to accomplish the task. There are a lot of right ways to do it.
The designers of the experiment purposefully chose a more “real world” question — one that cannot be done with a simple single formula (like a Newtonian physics question in High School).
Every question in CliSci, for example, is far more complex than the one used in this experiment.
This site discusses the many ways of sampling. https://www.scribbr.com/methodology/sampling-methods/#:~:text=There%20are%20two%20types%20of%20sampling%20methods%3A%201,other%20criteria%2C%20allowing%20you%20to%20easily%20collect%20data.
One can even make a decision whether you are dealing with a population or a sample.
So, yes, there are differences in how the data was analyzed. I guess my point was that if there are different results, then different choices of assumptions were made. If these assumptions were properly documented and math methods appropriately applied, the different results would be explained.
Too much of science ignores the assumptions behind the analysis especially when it comes to statistics.
Jim ==> One of the points made by the authors of this study is that many of these decisions — many of the choices — are made without the analysts realizing that they are making choices. They are following what they think are standard procedures and best practices, not cognizant that each of these is a choice and that there are other valid choices as well.
It is those choices, among all of the possible valid choices, that seem to result in the divergence instead of convergence of the results.
I am sure that is true. Too many learn a given method from a professor or mentor and never learn the assumptions that go along with that method! Many never learn there are all kinds of tests and assumptions that apply for using them. One big one is about stationary or non-stationary time series.
I did some analysis of July and August temps in six cities scattered across the U.S. several years ago. The Standard Deviations I was seeing in the different summaries were in the 2 – 4 °C range. That solidified my opinion that reporting temps to the hundredths or thousandths of a degree is simply insane. The confidence levels just preclude that precision. And that doesn’t even include measurement uncertainty.
I’ll be posting some of my findings in Geoff’s last thread.
FWIW: I graduated college with a degree in physics and math. Pretty much everything was clear to me, with one exception: data analysis that we did for our physics labs.
What wasn’t clear? How measurements could vary between trials? That is the very basis for the study of uncertainty in metrology!
Splitting hairs a bit more – I for one think humans probably do have some impact on climates, just for different reasons than normally discussed. So the example is unclear to me.
JCM ==> There is not, and has never been to my knowledge, any real question as to whether or not humans can affect the climate — certainly local climate, thus cumulative, planetary climate.
The paranoid egoist part of me feels like they stuck it in just so so-called climate skeptics couldn’t get much mileage out of the article, but that’s probably stretching reality.
It looks like a throwaway line to ensure publication
Otherwise climate science might also suffer from uncertainties. Can’t publish npc findings.
Rather than get “the best real experts together in a room and hashing these controversies”, it would be more honest for climate scientists to admit that their models and forecasts are so riddled with uncertainties and assumptions that no sensible policy advice can be produced.
But then, the more outspoken public figures like Mann prefer to sue rather than engage in public debate, and the likes of griff dive-bomb with anecdotal data they are unwilling to defend, and Mosher drives by and lobs fragmented sentences without capitalization or substantive support for his opinion. Stokes offers up sophistry, usually of the “look, a squirrel!” variety. So how can there be dialogue unless the alarmists are willing to engage on a level playing field?
I think ‘science skeptic’ is a misleading phrase. Skepticism is not of science itself, but of unsound, overly broad, unjustified and simply false claims and conclusions made under the banner of science. All scientific claims initially should be met with skepticism until verified repeatedly, understood theoretically, and are successful in predicting.
I suspect that you would get agreement on your opinion from most self-described skeptics, but few alarmists.
“Let me state this as clearly as possible: Any finding that does not honestly include a frank discussion of the uncertainties involved in the study, beginning with the uncertainties of the raw data and then all the way through the uncertainties added by each step of data processing, is not worth the digital ink used to publish it.”
See Pat Frank’s frank discussion of uncertainties in GCMs
Please note the double-negative in the following statement from the article which I believe says the opposite of what the author intended or at least makes it very unclear.
“And, that means, for you and I, that neither the many-analysts approach or the many-analysis-teams approach will not solve the Real World™ problem”
Huh?
With great certainty I state that the “will not solve” should be “will solve”.
S Browne ==> Thank you — good close reading is a skill that evades many. In my case, too many edits and failure to enlist my personal professional editor for this piece. She would have caught that immediately.
Correcting the line — it should be, as Andy states below, “will solve”.
Proper English would have used “neither” and “nor” together in the sentence. Rewritten it should have been: ““And, that means, for you and I, that neither the many-analysts approach nor the many-analysis-teams approach will solve…” Clearly showing the “not” in the final portion of the sentence is not needed.
dbidwell ==> Gads! I have researched the nasty nasty Either Or / Neither Nor point dozens of times — and have not come to any firm conclusion. The English Mavens do not agree on the point.
My error was not carefully editing the sentence after making several changes to the paragraph.
Writing is Hard — along with Science.
Kip, I understood the meaning very clearly; I don’t read with the objective of pointing out the irrelevant.
This from near the end of the article:
“I don’t have an off-the-cuff solution to the “Three C’s” — the fact that the world is 1) Complicated, 2) Complex, and 3) Chaotic.”
What, pray, is the difference between ‘complicated’ and ‘complex’? I think most people would consider those words to be synonymous, as does the thesaurus and dictionary.
A many proof-reader approach would help before publishing articles.
According to this site there is a shade of difference:
“Complex is used to refer to the level of components in a system. If a problem is complex, it means that it has many components. Complexity does not evoke difficulty.
On the other hand, complicated refers to a high level of difficulty. If a problem is complicated, there might be or might not be many parts but it will certainly take a lot of hard work to solve”.
I guess it may depend on the context.
Complex vs. complicated.
It’s subtle but I would read the 2 words as having different meanings as well. Complex as in many different processes interacting with one another as part of the whole, and complicated as in difficult to perceive or understand, hard to grasp the individual parts or processes.
“Good science isn’t easy.”
Amen to that!
“…to test an established social science hypothesis: that more immigration will reduce public support for government provision of social policies.”
So this is about social “science,” and not about observations of physical phenomena? It seems like a mathematical treatment of the feelings (from surveys) of one mythical entity (the public) about the relationship between another undefined social entity (government) and an unquantifiable function (social policy).
Perhaps this study says more about sociology as a science than it says about the scientific method?
If one presents the same data set to disparate groups of physicists or chemists should we expect the same sort of result?
Climate Science, if ever it is defined well enough to warrant the capitalization, may be perfectly scientific — right up until it predicts the unknowable future with arrogant certainty, attempts to influence for political or economic gain, or impose arbitrary behavior on the unanointed.
dk_, you are making the point I was thinking when I made my 3rd post up above. Good to see somebody has the same thoughts.
Steve, see Geoff Sherrington comment, below. I think we may be converging at different angles.
Perhaps I’m confused about the working definitions of science versus social engineering, but running a bunch of pseudo-statistical calculations around “data” from questionnaires and surveys seems a little more like advertising and polling to me: more suitable for evaluating propaganda. In the context of social engineering it is almost inevitable that different groups will produce biased results, and that results curve above looks somehow familiar to me from that arena, not science.
It is possible that the article, exposing fuzzy-science bias, is a wake-up for the better-natured social scientist/engineering crowd, but I’m not holding my breath. On past performance, many or most will ignore it and/or discredit the authors, publisher, and study.
It is possible that, real world, “more immigration” has had some effect on “public support for government provision of social policies” and that therefore there is a correct answer. However, what that answer actually is will, as daily evidence displays, depend markedly on who is expressing the answer.
Or is the answer predicted by who is defining the terms? The article seems to me to show bias whenever the terms are arbitrary. Can any amount or degree of sophisticated maths be applied to opinion and achieve a scientific result?
This study is very applicable to science. Just review Stephen McIntyre and Ross McKitrick’s take-down of Michael Mann’s MBH 98&99 studies. It was data processing errors, misuse of statistics and modeling errors that brought them down.
I think we agree that emotional manipulation of information removes any result from the realm of science and puts any claims into the realm of politics and social engineering. IMO McIntyre/McKitrick is a demonstration of that, just as Mann is a continuing horrid example of anti-science charlatanism.
If we agree that the study demonstrates that social science isn’t necessarily science, we are really on the same page. My words, if interpreted as meaning that the article has nothing to do with science, may be poorly chosen.
Social science isn’t. It is mental masturbation dressed up with sciency-sounding words. It reflects the ideology and politics of its practitioners. A lot of it is just made up out of whole cloth — no real data collection, analysis, nor modeling.
Agreed, D.F. Key words: a lot. But not all. I think that there are good scientists and sometimes good practice in psychology, sociology, anthropology, and economics. But these fields are easily turned into anti-science political platforms for deeply biased opinions to be presented at the same or greater value than the sciences.
Kip, you have stepped into something that has been bothering me for some time. This blog is populated by what are probably the brightest and best educated commenters in the blogosphere. I’m of the opinion that there is a significant fraction of graduate engineers and geologists, most of whom graduated before the educational system was seriously corrupted by administrators more concerned about growing their student population than about maintaining standards.
Yet, it is common that not only can the alarmists and skeptics not agree, but even the skeptics can’t agree on interpretations and methodology. I can somewhat rationalize the tension between the alarmists and skeptics because of some of the things that T. C. Chamberlin warned about in his paper The Method of Multiple Working Hypotheses. But I’m dismayed at the inability of skeptics to present a more unified front.
I don’t know what the answer is, or if there even is one. I think that more careful attention to definitions would help. However, you have just provided evidence that suggests that modern science is broken.
There are many questions where the right answer is “No one knows”. People with college degrees, especially advanced degrees, are very reluctant to say: “I don’t know” or “no one knows”.
Every time I read a science article, I ask myself if the author has convinced me that he (or she) knows what he is talking about. One clue is that the author analyzes the data he used and explains how it could be inaccurate or not useful for his conclusion. And why alternative sources of data were not used.
I want a science author to be skeptical about himself! Tell us where he could have gone wrong. Everyone makes mistakes.
I have an advanced degree, but I am an expert in “We don’t know that”! My first step to becoming an expert in “We don’t know that” is to immediately assume predictions of the future will be wrong. As a result, much of modern climate science scaremongering is ignored.
Every single prediction of environmental doom since the 1960s has been wrong, yet we still read new scary predictions every week. In fact, CAGW is nothing more than a prediction. It is not reality. And Nut Zero is based on the prediction of CAGW. This world needs a lot more people who are willing to say: “We don’t know that”.
Richard ==> Plus 10 on “We Don’t Really Know!”
“We Don’t Really Know!” doesn’t get the grants from governments’ need for studies to support their pre-determined policies. Ideology and toadyism really matter in those arenas.
As I recall, Feynman once said (or wrote) that in every scientific paper the author should include, besides his own arguments for his conclusions, the best arguments AGAINST his conclusions. Myself, I have never seen any author do this. If anyone actually did this in climate science, then we should see a long list of (conscious) assumptions used and how one could argue against each one. Feynman, you gave it your best, but others are not up to your standard.
What I believe I find at WUWT is that multiple processes from varying people in different fields lead to almost the same results.
There have been two or three ways I’ve seen that arrive at an ECS between 1.6 and 2.0, instead of the large uncertainty from CMIP6 reports.
Willis and RickW both show that there are limiting factors on the temperature in the tropics because of the properties of water.
Many people explain how energy and heat move through the Earth’s atmosphere and oceans, which shows why the higher latitudes show a higher increase in average temperature.
All these examples come to very similar conclusions using different methods.
Being skeptical is a very important part of science. And if so many people here can provide work showing how the models are not correct based on physics, or math equations, or just compiling data and showing it to us, it really is a unified front against the climate scaremongers; it is just a very wide front attacking from many different positions.
Clyde ==> Not my evidence, but a very convincing study about the Many-Analyst-Teams Approach and how it does not lead to converging results. Does it mean that modern science is broken? I don’t think so, but I do think it means that we need to re-examine the practice of scattered individual scientists spending money by the bucket-full to produce results that may not reflect the overall reality of the real world.
In advanced physics, high level physicists often get together and plan out a research approach that they think is the best way to find an answer to some outstanding question. They argue about it, they fight and squabble and eventually, for the sake of advancement of physics, find a common path that they hope will lead to an answer. Doesn’t always work, but then they admit it and try again.
“Does it mean that modern science is broken? I don’t think so, “
It is broken in the sense that far too many “believers” simply ignore uncertainty of their results. They just blithely go down the road believing they have found the “true and only answer”.
It’s my opinion that far too many academics today have no “real world” experience of metrology. All measurements have two parts, the “stated value” and a “measurement uncertainty interval”. If you’ve never been caught up by the measurement uncertainty issue in the real world it is far too easy to dismiss it and just analyze the stated values – especially since that is exactly what every single basic college level statistics textbook I have purchased actually teaches. None of them provide data sets with “stated value +/- measurement uncertainty” examples. It’s all “stated value” only.
It’s not until you get caught with waves in the ceiling drywall because of measurement uncertainties or you wind up ruining a crankshaft because you ignored measurement uncertainty in ordering the rod bearings that you *really* understand that measurement uncertainty actually exists and has real world consequences that can’t be ignored.
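A minimal sketch of the textbook gap described above, with invented readings and one explicit assumption (stated in the code): that the instrument’s error is systematic. Under that assumption, the statistics-of-stated-values calculation reports a far tighter interval than the instrument itself can justify.

```python
import numpy as np

# Invented readings from one instrument: stated value +/- measurement uncertainty
stated = np.array([71.2, 70.8, 71.5, 70.9])   # degrees F
u_instrument = 0.5   # per-reading uncertainty, ASSUMED systematic for this sketch

mean = stated.mean()
sem = stated.std(ddof=1) / np.sqrt(len(stated))   # the "stated values only" answer

print(f"{mean:.2f} +/- {sem:.2f} F   (statistics of the stated values alone)")
print(f"{mean:.2f} +/- {u_instrument:.2f} F   (the systematic part never averages away)")
```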
Many climate “studies” are predictions of doom with no data
There are no data for the future climate
Predictions of environmental doom have been 100% wrong since the 1960s
So I have 100% certainty that the next prediction of doom will be wrong too.
Error bars, Richard, error bars
🤣
Ah yes; doom +/- doom!
That’s better Richard
Roger Pielke Jr’s “a recent presentation and follow-up Q&A” Kip mentioned above is very informative & well worth watching, as he explains all of the nuances & details of the data he presents.
Isn’t this the reason scientists are supposed to publish the full method and data, so others can follow their reasoning and check the results?
(Obviously not climate seancetists, with their refusal to hand over data in case McIntyre found something wrong with it.)
It’s one of the reasons, yes – another is to check that there are no glaring errors in the data or analysis.
Redge ==> YES and sort of….if you want to know if Joe’s study was valid, you must do exactly what Joe did — every stinking detail, same chemicals from the same supplier and the same batch of those chemicals, same temperatures, same flasks, same test tube size, everything. Then the same analysis techniques. But then you will only know if Joe made any mistakes…not if his results are valid overall.
The study under discussion today shows that 73 teams did serious well-done analyses and came up with different results — both effect size and effect sign.
Each of these studies would replicate if the replication used exactly the same procedure.
You see?
The other element is beware of extrapolating from the exact conditions of the experiment.
Arrhenius came to some conclusions under a strict set of conditions.
But our atmosphere is not a closed system, the variables are not independent etc.
So Svante gave us a clue, but not the answer.
In many studies it’s almost impossible to fully document all variables let alone duplicate them. In virology studies the lab animals should have the exact same genome if full duplication is to be expected. That’s almost an impossibility. Thus the need for addressing possible uncertainties in the results on at least a subjective basis. If this were adequately done then at least part of the “replication problems” of today would be solved.
What the results show is that if you repeat a study enough times you will eventually get the answer you are looking for. At which point you throw away the other studies and publish the “correct” result.
Super computers allow climate models to automate this process.
And the admitted practice of adjusting parameters to get an ECS that “looks about right.”
Kip, we did manage to settle on scientific understandings that allow predictability in the hard sciences to the degree that we have built a marvelous intricate technological civilization.
Social sciences were almost totally corrupted by the left ~ 70 yrs ago and today it is even worse. The terms themselves were defined incorporating political bias. (‘Ills of society because of corporate greed’ sort of thing).
I wish they hadn’t chosen a loaded topic like the reaction of people to accelerated immigration. With the battery of computer statistical tools available, the actual practice of p-hacking treated as almost acceptable, and ‘feelings’ about how outcomes should come out, it’s no surprise the range of conclusions is all over the map.
Medical research is full of bias, but at least the world does have good quality care available and enormous problems have been solved. It would have been better to choose a non-controversial medical science topic and not identify by name the condition or the medication used, just in case Trump happened to mention it.
In the social science of anthropogenic global warming, I have often observed that the findings of a study actually best support a conclusion different from that of the researcher.
Yes, just compare the UN IPCC reports’ SPMs to the bodies of the report chapters.
Dave ==> Perfect — the science sections are usually pretty good (sometimes a bit biased, but not bad).
Gary ==> It’s not that they chose a loaded subject — they chose a complex, “not easy” data set to be analyzed.
You can look at the data base yourself — they are just numbers. But intentionally, not easy.
The analogy I used to explain this to a family member is that the laws of the power of a lever can be derived by high school students in a high school physics lab — coming fairly close to the correct formulas. That is easy science — solved long ago.
Global Sea Level rise (or not?) — complex, complicated and chaotic.
(But, yes, you are absolutely right about the politicization of science, the journals, the associations, etc.)
I have argued to a reputable scientific organisation that it should never seek to have an authoritative voice on science, because scientific knowledge changes with the publication of every new scientific paper. I was not shouted down. Actually, the reaction was rather quiet.
Kip,
Thank you for showing this paper. I strongly agree with your comments such as “In Climate Science, we see the misguided belief that more processing – averaging, anomalies, kriging, smoothing, etc. — reduces uncertainty. The opposite is true: more processing increases uncertainties. Climate science does not even acknowledge the simplest type of uncertainty – original measurement uncertainty – but rather wishes it away.” And “Any finding that does not honestly include a frank discussion of the uncertainties involved in the study, beginning with the uncertainties of the raw data and then all the way through the uncertainties added by each step of data processing, is not worth the digital ink used to publish it.”
However, I disagree with your contention that “This is a separate problem from the differing biases affecting politically- and ideologically-sensitive subjects, the pressures in academia to find results in line with current consensuses in one’s field and the creeping disease of pal-review.”
Houston, we have one very big dominant problem. The people you are analysing are everyday hacks of limited intelligence who do not know about the finer points of proper hard science. There are no significant separate problems; there is this one big problem that overshadows all else. These hacks do not care about the various requirements for proper expression of uncertainty. How many times have you seen a climate paper with no references at all to errors and uncertainty? Close to 100%? Maybe 97%, to use a sciency number? These hacks accept pal review and current consensus because they are too lazy and ignorant to do otherwise.
Here is a quick comment or two on this paper, which is reluctant to find a cause of ignorance.
This paper is very much a German effort. Most authors from abroad have German names, e.g. the sole one from Australia is Max Grömping, USA has Jonathan Mijs and Amie Bostic and more, UK has Eike Mark Rinke, Kaspar Burger and others like Roxanne Connelly which is arguably non-Germanic (sarc). Germany, with places like Potsdam Institute with Hans Joachim Schellnhuber is central to the movement to exclude hydrocarbon fuels. Most major scientific journals are owned or controlled by Germans. The dreaded World Economic Forum has a German grand-daddy.
These observations might well mean nothing. However, a paper with a strong Germanic content of authors, using the same data to independently test the same prominent social science hypothesis, raises thoughts about the ability of Germans to be original creators of concepts as opposed to willing slaves of the orders from above. Are these authors drawn from one group of aligned thinkers, or is the authorship a loose group of fiercely independent researchers aiming dominantly for truths versus group beliefs?
I cannot fully answer this question, but I think it should be asked. The authors conclude inter alia “These results call for epistemic humility and clarity in reporting scientific findings.”
The paper, page 3, notes of its working teams “46% had a background in sociology, 25% in political science and the rest in economics, communication, interdisciplinary or methods-focused degree backgrounds. Eighty-three percent had experience teaching courses on data analysis and 70% had published at least one article or chapter on the substantive topic of the study or the usage of a relevant method.” I see no mention of specialism in hard science or statistics.
On page 4, “To remove potentially biasing incentives, all researchers from teams that completed the study were ensured co-authorship on the final paper regardless of their results.” This confirms thoughts of stupidity. Which genuine, skilled researcher would enlist in a study under any precondition or promise of recognition, before the quality of the final paper was known? This move to reduce bias used a bias. Ignorance assisted the ignorant?
Page 5. “Therefore, we standardized the teams’ results for each coefficient for stock and flow of immigration post hoc.” These teams participated on the promise of recognition and then stood by in obedience while their results were altered by the authors?
Sorry, I cannot go further. This paper is social junk, far away from the realities of hard science.
Footnote:
Five days ago, Tom Berger and I showed on WUWT a great deal of hard science analysis of some weaknesses with the primary temperature data used by the Australian BOM to present the public with a hypothesis that global warming is an existential crisis. It was part three of three parts for which I earlier invited BOM to participate. Crickets.
Geoff S
Germanic thinkers include mathematicians such as Leibniz, Euler, Weierstrass, and scientists such as Einstein, Hertz, Helmholtz, as well as philosophical thinkers such as Kant, Wittgenstein and Hegel, and political thinkers such as Marx and Hitler. Those dealing in real science have made great advances for humanity, and even demonstrated the limits of such an approach through Gödel’s Theorem. The influence of the less scientific thinkers has been rather less benign.
Yeah but things were different during the Enlightenment! List your favorite German scientists and philosophers of the last 5 decades!
Geoff ==> I am not sure I understand what you are saying — it seems to be an anti-German rant. Is that what you intended?
They are testing a hypothesis about multiple analyses … thus they use teams of data analysts.
Kip,
Far from a rant. Germanism is a possible unstated exogenous variable. Overall German thought is different to that in many other countries, as displayed by Potsdam, Energiewende, immigration policy to name just some.
My hypothesis is that overall standards of authors in climate research are lower than other sectors of science. Ignorance is rife. You should not extend the findings of an ignorant paper to less ignorant sectors of science. You should ask for a retraction.
I gave some examples of how bad it was before I felt like a chunder and gave up.
It is not science, it is social babble, Geoff S.
True, but I know a lot about the German mentality: my father couldn’t speak English when he went into the first grade, served in WWII, and died having lost his German. His father was an excellent auto mechanic, especially in diagnosis, and could read and write in two languages. I am currently reading a few very old (1855-1878) German marine science papers, as good as they come. I counted 7 universities from the US, including The University of Texas Rio Grande Valley. There are real statisticians who teach education and social science majors, though I am not sure how widespread or distributed that is. Every US university I knew of had a wide range of members, from dumb to brilliant, with diverse parentage. Read their last paragraph.
To expand, I am suggesting that ignorance of science is a contributing factor in this paper’s findings. Some 80% of these researchers are or have been teachers. Bad teaching helps to lead to scientific ignorance.
I don’t imagine teachers would identify themselves as a variable affecting the outcome of their paper.
Likewise, I don’t imagine Germans would identify themselves as a variable either. Yet, as I have noted already, there are German traits that are stronger than in other countries, like being the centre of the poor-science-promoting climate activism and doing ignorant national experiments like Energiewende.
It is an error to exclude such factors because they are thought to be impolite. Geoff S
I am reminded of my time in college. There were several things the different teachers basically imparted as common practice or actual fact. Without the ability to actually replicate a lot of things, they were accepted as fact and later regurgitated. Oh my, was that not exactly the correct thing to do! Question everything, including your teachers, they are not infallible.
I can’t believe climate science has reached the point where so much is being taught only upon faith without evidence. Kinda like teaching that gravity is an attractive force that acts at a distance totally ignoring Einstein’s theories.
So there is a wide range of interpretations of the same data, derived from the best practices of the researchers.
It would be interesting to see the results grouped by nation to see if these biases are cultural or random.
And by gender too.
One simple explanation of the classical Philosophical Problem Of Induction is that there is intrinsically a one-to-many relationship between results and possible causes, ranging from the mundane to the absurd and supernatural.
The practical effect of this is that there can be no certainty in science. Whatever noumenal* entities we propose to explain observed effects, can never be held to certainly exist. Not gravity, not the Gods in Asgard, not quantum fields.
What class of noumena you decide are believable depends on what you currently accept as normal and – if not real – at least well established and moderately useful. E.g. one would prefer explanations based on gravity and physics to the Norse Gods et al. Not because the Norse Gods are not useful, but because they are simply not part of one’s current worldview.
Oh, and in passing, modification of that worldview is the purpose of dialectical Marxism, so that one replaces, e.g., simply being not very talented with the idea of heroic injured victimhood caused by other people doing bad stuff to you. Thus dissent, anger and hatred replace any desire for hard work, humility or self-improvement.
It is sad that academics have to do a massive study to reveal something any philosophy graduate and many other people who have bothered to read the subject could have told them.
*Kant’s word for that which is not phenomenal, but lies behind the world of phenomena, as it were, as an explanation of phenomena.
“Whatever noumenal* entities we propose to explain observed effects, can never be held to certainly exist. Not gravity, not the Gods in Asgard, not quantum fields.”
Not quite. These entities, to use your word, most certainly exist, because if they didn’t the observable effect would not be observed. Getting to the most accurate description of said entities is what science is all about, and while the theory of gravity or quantum fields may be incomplete, there is/are at least some evidence to support them.
Unlike ancient Norse gods.
PD,
A good response, to which I would add agreement that in science, nothing can ever be proved to be exactly known or the exact truth. That is one reason why uncertainty estimates are brothers of measurements.
However, we slide rule warriors have done some rather accurate jobs threading some thin needles, like navigating space vehicles onto comets or nudging asteroid orbits.
We have a value of pi to a million places or more, but we do not know how to test if this is the right answer. Sooner or later, practicality trumps aspiration for perfection.
Geoff S
‘Will not solve’ should be ‘will solve’?
Ed ==> Yes, precisely, corrected in the essay. I love good readers!
merely demonstrates that social science is not really science.
this wouldn’t happen in a real science like chemistry or physics.
and climate science is not a science either, but mostly opinionated guesses made to look like science.
Nobel laureate physicist Richard Feynman said exactly that over 45 years ago about social sciences.
Joel ==> This is about analysis….not the topic of the data being analyzed.
Interesting article, Kip. Thank you. Much to think about, concerning the central finding.
But wait.
From the Breznau, et al paper, in Implications, p.15 of the pdf:
“Third, countering a defeatist view of the scientific enterprise, this study helps us appreciate the knowledge accumulated in areas where scientists do converge on expert consensus – such as human impact on the global climate or a notable increase in political polarization in the United States over the past decades.”
LOL.
David ==> Everyone has to throw the dog a bone…..and even the most skeptical climate skeptics, such as myself and Anthony Watts, acknowledge that humans and human civilization have impacted, and will continue to impact, the climate.
Congratulations on actually reading the paper! Well done….so very few do.
Yeah, Kip, it’s amazing the number of commenters that continue to mischaracterize the paper. It looks like that comes from various pet peeves.
Kip,
Maybe it is better to attribute that line to simple ignorance of the harm from throw-away lines. Few intelligent scientists would agree that there is convergence of expert consensus on climate, and few hard scientists would even regard that as a matter worth the worry. What matters is the result of the research, with its uncertainty. Nothing else, like consensus, matters much. Many oft-mentioned advances in science are demolitions of the prevailing consensus.
I read the paper, but I do not expect congrats. Any here who have commented without reading the paper are part of the ignorance problem. They deserve to be chastised. Geoff S
Geoff ==> You are right, of course. But people are people and we are not all the same…my effort is to educate.
Kip,
Mine too. Sometimes that involves writing that has potential to upset. Realism trumps emotion in hard science. Cheers. Geoff S
Wasn’t there a reference or two in the Climategate emails about doing things for “The Cause”?
Doesn’t sound like CliSi is unbiased research or even a hard science.
William Briggs has a good write-up about these findings:
All Those Warnings About Models Are True: Researchers Given Same Data Come To Huge Number Of Conflicting Findings
Paul ==> Briggs is, by nature and training, a statistician and gives his statistician’s take on the study. He is right about models.
However, the import and significance of this study for the broader field of scientific research goes far beyond the impotence of models.
kip
“Every time someone in our community, the science skeptic or Realists® community, speaks out about uncertainty and how it affects peer-reviewed scientific results, they are immediately accused to being Science Deniers or of trying to undermine the entire field of Science.
nope nope nope
Every time someone in our community, the science skeptic or irRealists® community, claims with certainty that…
that is when you DENY known science with CERTAINTY, we sometimes, not every time, call you deniers.
you could say….
but you dont you are certain
certain you are right.
Mosher ==> You are talking to the wrong guy. Your tar brush is far too wide and misapplied here today. In other words, you are doing what you are complaining about. You take offense at an accusation that does not apply to you personally — though it is true in a broader sense, and not just in CliSci.
In my own case, you either didn’t read (or you have forgotten):
https://wattsupwiththat.com/2018/08/25/why-i-dont-deny-confessions-of-a-climate-skeptic-part-1/
https://wattsupwiththat.com/2018/08/27/why-i-dont-deny-confessions-of-a-climate-skeptic-part-2/
For myself, I can assure you that I do not say 1, 3 or 4. Just the opposite. I do have serious doubts about the numerical results claimed for “GAST”, but freely and vigorously acknowledge that, if nothing else, it has pleasantly warmed a little since the end of the Little Ice Age.
Why don’t you actually read and discuss the main topic of the essay? I’d be interested in your rational thoughts.
Tell you what, show us the EVIDENCE THAT GLOBAL TEMPERATURE MEANS ANYTHING.
If the Earth’s global temp is increasing then radiation from the earth to space should be decreasing, i.e., trapped energy causing heat (temperature) increase is heat that is retained. Let us know when that has been proven with EVIDENCE and not models.
mosh don’t do evidence, only drive-bys in black SUVs.
All you’ve done here is create a series of strawman arguments you can argue against. None of them hold water when reviewed with a modicum of critical thinking. Stop putting words in people’s mouths.
I made the case that the Empirical Rule in statistics allows one to estimate the standard deviation of a population sample from the range. (https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/) Based on that, the standard deviation of the global temperature should be several tens of degrees. Yet the typical person supporting the claim that many readings allow an increase in the precision of the global mean temperature claims that we can calculate it to a precision of +/-0.005 deg C. That does not square with the estimate of several tens of degrees C for even a 2 sigma uncertainty. Yet no one has explained why the estimate is at least three orders of magnitude larger than the claimed precision obtained from averaging. They stick to their claim of the Central Limit Theorem justifying at least an order of magnitude greater precision for the average than the original temperature measurements.
For those that don’t remember, the Empirical Rule basically says that most of the values in a distribution will be within 3 sigmas of the mean. That is very nearly the total range of values. For Earth that range is about +/- 150F, so the 3 sigma value would be about 75F. If you want to use 1 sigma as the uncertainty it would be 75/3 F = 25F. One can quibble about the coverage factor and such, but it still gives an uncertainty estimate that is several orders of magnitude greater than the differences the clisci alarmists attempt to hang their hats on.
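For anyone who wants to check that arithmetic, here is a minimal sketch in Python. The extremes and the 75F figure are the ones assumed in the comment above, not measured values:

# Empirical Rule (range rule): ~99.7% of a distribution lies within
# 3 sigma of the mean, so sigma can be estimated as range / 6.
full_range_f = 150 - (-150)            # assumed global extremes, +/-150F
sigma_from_range = full_range_f / 6    # ~50F, using the whole range
sigma_conservative = 75 / 3            # ~25F, using the comment's 3-sigma figure of 75F
print(sigma_from_range, sigma_conservative)
# Tens of degrees F either way: orders of magnitude above a claimed
# +/-0.005 C precision for the global mean.

Either reading of the range gives "several tens of degrees", which is the point being argued.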
I had never heard of the Empirical Rule till Clyde posted about it. I’m still amazed at the tidbits you throw out Clyde!
You might be even more amazed at the tidbits that are still in my refrigerator! 🙂
They claim the random variables (station records/averages) are samples. Yet they then divide by √N to get an SEM. They don’t even realize the SD of the sampling distribution IS THE SEM. Worse, they then think that defines the precision of the mean rather than the INTERVAL where the mean may lie.
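A quick simulation makes the textbook relationship concrete: the SD of the distribution of sample means is the SEM, and it describes where the mean may lie, not the precision of any single reading. A minimal sketch, with sigma borrowed from the range-rule estimate above:

# Draw many samples; the SD of their means is the SEM (~ sigma/sqrt(N)).
import numpy as np

rng = np.random.default_rng(42)
sigma, n_per_sample, n_samples = 25.0, 100, 10_000   # sigma assumed, per the range rule above

means = rng.normal(loc=15.0, scale=sigma,
                   size=(n_samples, n_per_sample)).mean(axis=1)

print(f"SD of sample means: {means.std(ddof=1):.2f}")              # empirical SEM
print(f"sigma/sqrt(N):      {sigma / np.sqrt(n_per_sample):.2f}")  # ~2.5, they agree

Note what the ~2.5 figure is: the spread of the interval in which the mean lies, not a statement that any thermometer reading became 10 times more precise.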
“DENY known science”
Why didn’t you just say “settled science” instead of “known science”?
Isn’t “settled science” what you really meant?
Or maybe you meant “political science”?
That’s where the “Green” comes from today.
[N.B. I posted this before reading all of the comments.]
Only one team, right up front just looking at the data, said the data was insufficient to reach a conclusion. Through the process only about 13% of the teams ultimately concluded that the data was insufficient. About 87% said “What the hell, we’ll just forge ahead, damn rigorous data analysis.” Paleoclimatology and its darling, Michael Mann, come to mind first and foremost.
The results will be valuable if the government funders require grant recipients, at a minimum, to preregister their studies and clearly show their data collection, data analyses, model selections and each and every decision made along the way. The funding entities should also fund independent parallel studies with the same objectives, having no connections to the original grant recipients or their institutions and having not seen the other study. Additionally, because this is all publicly funded science, all work products and study decision-making should be posted online such that scientists and citizen-scientists can replicate (or not) studies on a massive, world-wide scale. No more pal review.
While the study is admirable, they just had to throw in climate change and U.S. politics at the end.
Dave ==> Lamentable, but from the authors’ point of view, self-protective. They will, nonetheless, be attacked as if they had not said so.
kip
If you are interested in why scientists don’t agree, even on simple questions, then you absolutely must read this paper, right now. Pre-print .pdf is here.
If it doesn’t change your understanding of the difficulties of doing good honest science, you probably need a brain transplant. …. Or at least a new advanced critical thinking skills course.
“This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases.”
most discussions? focus on systematic bias?
yes including STRUCTURAL uncertainty. the bias that occurs because of analyst choices
we deal with this all the time.
example: hadcrut versus giss versus BEST all different methods/analyst choices
example RSS versus UAH
skeptics only want to look at UAH and ignore RSS
but RSS actually quantifies its structural uncertainty. UAH does not.
so i take it only skeptics with good brain transplants will look at RSS instead of UAH
Mosher ==> This essay is not about RSS or UAH or BEST or GAST at all, is it?
It is about the finding that small decisions about how to carry out an analysis to ask a particular question of a data set can change, even reverse, findings, when done by well-meaning serious acknowledged unbiased expert teams.
In this comment, you mix, without attribution, quotes from my essay and quotes from the authors of the paper under consideration. I cannot speak for the authors.
Your war on your beloved imaginary enemy “The Skeptic” is misguided.
Sure, there are silly teenagers here spouting nonsense in comments — just as bad and ill-informed as the Junior Climate Warriors commenting at ClimateCentral and the like.
You should concentrate on what the authors here say, and ignore the “sillies” — I do.
HEY!
“Sure, there are silly teenagers here spouting nonsense in comments — just as bad and ill-informed as the Junior Climate Warriors commenting at ClimateCentral and the like.”
I’m not a teenager! 😎
(That was meant as a joke for those who didn’t realize.)
Phew…. was almost fooled….
Something to consider that plays into this are the unstated (and usually unexamined) assumptions that guide the researchers.
I think the fact that there are too often unstated assumptions by the researchers is exactly the point.
(In CliSci, not a lot of “Green” goes to those who don’t toe the green line.)
I’ve read this study twice. My largest takeaway is that each and every decision made in analyzing data can send you off toward a different conclusion, some of which simply must be incorrect. When I think of climate science, several things come to mind.
1. Are temp databases consistent in homogenizing, infilling, and weighting when creating their datasets?
2. Are adjustments to data ever legitimate for the sole purpose of “creating” LONG records?
3. Is station data considered a population or samples? Different inferences are made from each.
4. Are the station averages IID samples?
5. Why are variances and/or standard deviations NEVER quoted for global average temperatures? Are they not available?
In Climate Science, we see the misguided belief that more processing – averaging, anomalies, kriging, smoothing, etc. — reduces uncertainty. The opposite is true: more processing increases uncertainties. Climate science does not even acknowledge the simplest type of uncertainty – original measurement uncertainty – but rather wishes it away.
there is NO documented belief that more processing reduces uncertainty.
you dont understand spatial stats, or structural uncertainty or anything about uncertainty.
im certain of that
You *have* to be kidding, right? The citations have been provided over and over at WUWT. For instance:
1. John Taylor, “Introduction to Error Analysis”, Page 61, Eq. 3.18
let q = x/u, then ẟq/q = sqrt[ (ẟx/x)^2 + (ẟu/u)^2 ]
Since in an average u is a constant then ẟu = 0 and the uncertainty becomes
ẟq/q = ẟx/x
If x is a sum of measurements, x1 + x2 + x3 … + xn, then ẟx = sqrt[ ẟx1^2 + ẟx2^2 + … ẟxn^2]
2. JCGM 100:2008
Eq. 1, Pg 20:
let Y = f(x1,x2,…,xn)
then, Eq 10, Pg 31
u_c^2(y) = Σ(∂f/∂x_i)^2 u^2(x_i) from i=1 to i = N
If x_N is the number of measurements then it is a constant and, once again, u^2(x_N) = 0 and you are left with the total uncertainty being the root-sum-square of the measurement uncertainties.
How many more citations would you like?
As has been pointed out over and over this leads to the uncertainty in a daily mid-range temperature being
u_midrange = sqrt[ u_tmax^2 + u_tmin^2 ]. If the individual temperature measurements have an uncertainty of +/- 0.5C then the daily mid-range temperature has an uncertainty of +/- 0.7C.
If you combine 30 of these daily mid-range values into a monthly average the total uncertainty becomes u_monthly = sqrt[ 30 * u_midrange^2] = u_midrange * sqrt(30) = +/- 3.8C.
Your uncertainty range keeps on growing as you average the monthly values to get an annual average and grows even more when you try to find a 10yr, 20yr, or 30yr annual average.
Every time you average independent, random, measurements of different things each having uncertainty the total uncertainty is at least the root-sum-square of the individual measurement uncertainties. This is most of the problem with the GAT. It becomes unusable and not fit for purpose because of the uncertainty associated with it.
Anomalies do *NOT* reduce uncertainty. Anomalies are calculated using values that have uncertainty. A mid-range temperature has an uncertainty. The average subtracted from it has an uncertainty. It doesn’t matter if you add or subtract values with uncertainty, the total uncertainty is an addition. T_midrange – Tavg = T_anomaly. Then u_Tanomaly = sqrt[ (u_Tmidrange)^2 + (u_Tavg)^2 ].
UNCERTAINTY GROWS!
If you calculate 30 anomalies and then average them to get a monthly average anomaly the uncertainty GROWS EVEN MORE because you have added the uncertainty of the multi-year average to each daily uncertainty. Your uncertainty would be less if you just calculated the average T_midrange value and subtracted just that value from the multi-year average value. But the uncertainty would still grow in either case.
Smoothing doesn’t lessen uncertainty. You are still doing addition, subtraction, division, and multiplication to do the smoothing. The uncertainty grows with each processing step you take!
BOTTOM LINE: You can’t reduce the uncertainty of independent, random measurements of different things with processing. You can only increase uncertainty with processing of that data. Just as Kip said.
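For readers following along at home, here is a minimal Python sketch of the root-sum-square propagation laid out above (per the Taylor and JCGM formulas quoted, applied to sums of independent terms). It reproduces the +/-0.7C and +/-3.8C figures exactly as the comment computes them; whether that propagation is the right model for an average is, of course, the very point in dispute in this thread.

import math

def rss(uncertainties):
    """Combined uncertainty of a sum of independent terms: root-sum-square."""
    return math.sqrt(sum(u**2 for u in uncertainties))

u_tmax = u_tmin = 0.5                  # assumed +/-0.5C per reading
u_midrange = rss([u_tmax, u_tmin])     # ~0.71C for the daily mid-range
u_month = rss([u_midrange] * 30)       # ~3.87C combined over 30 daily values

print(f"u_midrange = +/-{u_midrange:.2f} C")
print(f"u_month    = +/-{u_month:.2f} C")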
Tim ==> Please send your email address to me at my first name at i4.net.
Tim ==> Can you please send your email address again? Internet Pixies disappeared it….
done
Mosh,
I have a number of tables of the recorded (from NOAA via NWS) record highs and lows for my little spot on the globe. I started collecting some in 2007. (I used the WayBackMachine to get the oldest I could find, from 2002.)
Long story short, about 10% of the record highs and lows have been “adjusted”. Not newly broken: old records were changed, with new “record” highs lower than the old values and vice versa.
(2012 had lots of “adjustments” between the April list and the June list!)
Whoever pushed the button on the blender, why should I trust them?
If the measured and recorded DATA wasn’t sound at the time … what do we have?
The new values are theoretically true?
A computer model is more trustworthy than an eyeball looking at the thermometer 50 to 80 years ago?
Gunga,
Only last evening I saw a step-like time series change in 2012 for about a dozen Australian stations. Work from Chris Gillham at his waclimate blog looking at data during changes from LIG to electronic thermometry.
Are you aware of any conference or international event that might have led to this? I am not. The magnitude is about 0.1 to 0.3 deg C Tmax in that step, varies with station. Too big to ignore. Geoff S
No, I’m not aware of any conferences or international event.
I did find this:
https://wattsupwiththat.com/2012/09/26/nasa-giss-caught-changing-past-data-again-violates-data-quality-act/
And I think that was James Hansen’s last or next to last year as director of NASA GISS.
Did you see what Mosher had to say on that thread?
“Since the past is an estimate …” and “… we don’t know the temperature of the past.” Apparently the sheets from the past containing temperature recordings are unreadable, i.e., we don’t know what they were, so they are estimated.
I suspect he was talking about unmeasured locations, but that is a deflection as usual. Anthony was discussing recorded temps. Nothing new under the sun, same old, same old!
I didn’t look at the comments. My Google search was trying to find out what had changed at GISS in 2012.
I had a vague recollection that around that time James Hansen had been replaced by Gavin Schmidt.
Also that the program (program language?) used had been changed. That link is the best I found to sum it up.
I’m just “Mr. Layman”. But even a layman could notice back in 2007 with Al Gore promoting his “Inconvenient Truth” doctoredmentary that “something ain’t right here”.
That’s when I first copy/pasted the NWS record highs and lows for the day into Excel and sorted by date. Most of the record highs for the day back then were before 1950 and most of the record lows were after 1950.
(I didn’t “find” WUWT till 2012.)
I just looked at the comments and found that I had put up a list of the changes between the April 2012 list of record highs and the 2007 list.
(The formatting wasn’t preserved in the copy/paste.)
It should be intuitively obvious that the greater the distance an interpolated or extrapolated point is from a measured point, the greater the unreliability. Thus, many, if not most, gridding algorithms weight the points closest to the point to be interpolated more heavily than the distant ones.
This is why your saying “Nope, nope, nope” as your ‘contribution’ to a discussion has little impact on the reader.
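To make the distance-weighting point concrete, here is a minimal inverse-distance-weighting sketch in Python; the station coordinates and values are invented for illustration, and real gridding algorithms are considerably more elaborate.

import math

def idw(x, y, stations, power=2):
    """Inverse-distance-weighted estimate at (x, y) from (sx, sy, value) triples."""
    num = den = 0.0
    for sx, sy, value in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return value          # exactly on a station: use its reading
        w = 1.0 / d**power        # closer stations get larger weights
        num += w * value
        den += w
    return num / den

stations = [(0, 0, 14.0), (1, 0, 15.2), (5, 5, 11.5)]  # hypothetical (x, y, temp)
print(idw(0.5, 0.1, stations))  # dominated by the two nearby stations

The farther the target point sits from any station, the more nearly equal the weights become, which is exactly why distant interpolations are less reliable.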
All except the Kelvin scale. So, that means not all.
Another of your fragments that doesn’t explain anything.
Mosher,
What is the Standard Deviation of the distribution used in computing the average?
kip
“It appears that the hope that the many-analysts/many-analysis-teams approaches would help resolve some of the tricky scientific questions of the day has been dashed”
dashed?
really?
you know i went through your article and thought i would just comment on all the unqualified claims you made, especially those with no evidence.
here’s the thing.
if you go looking for multiple interpretations of the same data you will find them. this is why
theory is always underdetermined. this is why Q exists, why 9-11 truthers exist,
why biden can deny there is a recession.
mosh is a reality denier.
Mosher ==> Are you ever going to comment on the study at hand — or just make snarky, questionable, personal attacks based on your favorite talking points?
The study is not about interpretation of the data….it is about analysis and analysis methods and what happens to results when decisions are made in the analysis approach.
You’ve slipped your gears and wandered off into your political fantasies.
+100
kip pretends this study breaks ground!!!!!
This new study adds in another layer – the uncertainty caused by the multitude of tiny decisions made by researchers when analyzing a research question.
look we investigate this all the time
https://eapsweb.mit.edu/news/2021/quantifying-parameter-and-structural-uncertainty-climate-modeling#:~:text=The%20structural%20uncertainty%20comes%20from,small%2Dscale%20processes%20entirely%20correctly.
https://journals.ametsoc.org/view/journals/bams/86/10/bams-86-10-1437.xml
Historically, meteorological observations have been made for operational forecasting rather than long-term monitoring purposes, so that there have been numerous changes in instrumentation and procedures. Hence to create climate quality datasets requires the identification, estimation, and removal of many nonclimatic biases from the historical data. Construction of a number of new tropospheric temperature climate datasets has highlighted previously unrecognized uncertainty in multidecadal temperature trends aloft. The choice of dataset can even change the sign of upper-air trends relative to those reported at the surface. So structural uncertainty introduced unintentionally through dataset construction choices is important
sceptics want to IGNORE structural uncertainty
https://twitter.com/climateofgavin/status/880790799532871680
https://www.remss.com/missions/ssmi/uncertainty/
you wont find this in roy spencers work
Steven,
By your logic, homogenisation is not needed. The daily arithmetic average of many observations in a region will include a distributional scatter that shows in the uncertainty. The individual observations are valid and should not be altered. Abundant methods exist for using them without changing them, as I am sure your statistical learning agrees. You do not have to ignore structural uncertainty, you simply need to know how to process it, even if it gives an uncertainty answer big enough to challenge your preconceptions. Geoff S
kip
this paper is pretty interesting
let me share my experience.
in reviewing skeptical work on the temperature series i always take note of the decisions they make
and how they sift the data to pull records they like
without exception they never test the use of alternative data. like does our result hold up using RSS or JMA or AIRS?
roll tape
https://wattsupwiththat.com/2022/10/08/tokyo-mean-september-temperatures-have-seen-no-warming-in-34-years-jma-data-show/
you see you dont care about uncertainties or scientific purity or you would have been all over that chart!!!
no skeptic here saw fit to ask about uncertainties?
why?
you liked the answer.
roll tape
https://wattsupwiththat.com/2022/10/05/the-new-pause-lengthens-to-8-years/
As always, the Pause is calculated as the longest period for which the least-squares linear-regression trend up to the most recent month for which the UAH global mean surface temperature anomaly is available is zero.
1. ONE dataset: UAH
2. a dataset with no published uncertainties
pot kettle black much
desperate much?
What are you whining about? If the monthly uncertainty is +/- 3.8C then you will find measurements on that graph that exceed it. The range of values is from about 26C to 22C or a 4C range – which exceeds the typical uncertainty interval.
There were only like 50 total comments on that thread. Are you expecting everyone to post on every thread? I didn’t even read it! It wasn’t addressing a GLOBAL average in any case, only a local average. The biggest takeaway is that there is an obvious natural variation from year to year at that location. It appears that well-mixed, global CO2 is not a significant control knob for that location.
It would be helpful if uncertainty bars were shown on the graph. It would be much simpler to tell if the temperature measurements are meaningful. Frankly, I can’t even tell from the site how the monthly averages were derived. That alone is an issue affecting uncertainty.
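As a sketch of what such a plot might look like, assuming the +/-3.8C monthly figure argued upthread and an invented series (Python/matplotlib):

import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1989, 2023)
# hypothetical September means; a real plot would load the actual JMA record
sept_mean = 24 + np.random.default_rng(1).normal(0, 0.8, years.size)
u_month = 3.8          # monthly uncertainty claimed upthread (assumed)

plt.errorbar(years, sept_mean, yerr=u_month, fmt="o", capsize=3)
plt.ylabel("September mean (C)")
plt.title("Hypothetical series with +/-3.8 C uncertainty bars")
plt.show()

With bars that size, any trend of a few tenths of a degree would visibly disappear inside the uncertainty, which is why showing them matters.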
Steven,
So you cannot define me as a skeptic, since by your account skeptics always choose the smallest data set and filter it. Steven, read the essay Tom Berger and I wrote on WUWT 5 days ago. Do any of your criticisms of skeptics apply to our skeptical essay? I think not. Geoff S
I lost count of how many strawmen he erected here.
Mosh ==> You are still fighting your imaginary enemy “The Skeptic” …. and you have misunderstood the paper discussed here.
Mosher,
https://wattsupwiththat.com/2022/10/08/tokyo-mean-september-temperatures-have-seen-no-warming-in-34-years-jma-data-show/
You missed the whole point. I suspect on purpose.
If there is a location that has no warming, yet the average is supposed to be +1.5 degrees, guess what? You need a location that has a 3 degree rise! TELL US WHERE THAT IS!
You and others hide behind averages. An average implies that there are data both above and below the mean. That is what a standard deviation is for. Tell us where the cooler and warmer locations actually are! Tell us the Variance or Standard Deviation of the Global Average Temperature! Not the anomaly, the actual temperature. Have you even calculated the combined variance for the GAT? Has anyone?
How many papers and pronouncements have we been shown claiming that everywhere is warming at or above the GAT anomaly? Have you ever listed locations that are both above and below the average, and what the values are?
You castigate sceptics for pointing out anomalous data. That is not a cogent argument at all. You need to answer why those are not anomalous. If you would publish a variance for the AVERAGES you would go a long way to resolving the issues.
I’m still waiting for one of the ‘professionals’ to show the trends for the Köppen climate classes instead of a single number for the whole world.
I’d even settle for a breakdown by latitude bands!
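In the spirit of that suggestion, here is a minimal sketch of a per-band trend breakdown in Python; the band series are synthetic, just to show the shape of the analysis one would like to see:

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2021)
# hypothetical anomaly series per latitude band: (label, assumed slope in C/decade)
bands = {"60N-90N": 0.30, "30N-60N": 0.20, "30S-30N": 0.10, "90S-30S": 0.12}

for label, true_slope in bands.items():
    series = true_slope / 10 * (years - years[0]) + rng.normal(0, 0.15, years.size)
    fit_slope = np.polyfit(years, series, 1)[0] * 10   # least-squares trend, C/decade
    print(f"{label}: {fit_slope:+.2f} C/decade")

A real version would feed in actual station or gridded series per band; the point is that the per-band trends, and their spread, carry information a single global number hides.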
I read this study. Here is a pertinent section from it.
Like it or not, this paper bolsters a lot of what we sceptics have been saying about uncertainty, even on this site. The paper basically says there is NO transfer function that will allow changing historical temperature records while maintaining any level of certainty.
Holy crap, how many people here have said that homogenizing, infilling, and adjusting raise uncertainty? This is exactly what this paper says is the probable outcome.
Jim Gorman,
Too tiresome to check for specifics but they tested a couple of hundred variables.
Expertise was one. They report that even highly appreciated experts arrived at different findings. My analysis would say that those highly credentialed were not competent, as claimed. All but one of them must have failed this mini exam, possibly all.
I would suggest another variable: Have you as a participant ever read the Guide To Uncertainty in Modelling? Yes to the left of me, no to the right of me while we count. Geoff S
Geoff,
Do you mean “clowns to the left of me, jokers to the right”?
Also it seems I forgot to add the quote from the paper!
https://journals.ametsoc.org/view/journals/bams/86/10/bams-86-10-1437.xml
You are unjustifiably using your Broad Brush. Upstream you asked for citations. How about you providing citations?
I’m not going to take the time to chase it down, but I think I remember reading the estimated uncertainties for the UAH data.
Uncertainty is scientific, certainty is religious. Climate Alarmism illustrates this perfectly.
Just loved it!
Truth to power, with a thermonuclear garnish.
Nice job
Peter ==> Thank you.
An enjoyable read. The study suggests it’s important that epistemic humility be a part of the process. I think everyone can agree with that attribute. Except Mosher. That always seems to be missing in his arsenal.
There is a fourth “C” not listed but which so many supposedly scientific studies and “scientists” lack – “Common Sense”…
Kip, while I see validity to the arguments and findings, I suspect that a major factor is that the research problem tackled in this comparative test is basically a sociopolitical, psychological/humanities-type problem: trying to decipher people’s “feelings.” Pseudo-science from the outset. One would be hard pressed to believe that any of the data are representative of … what? The scale is far too large and uncontrolled as well. Too many degrees of freedom. Humans are complicated, aren’t they?
There is still plenty of room for disagreement with any research, but starting from the other end of the spectrum, they should try a carefully executed, narrow scope experimental design. Use a small-scale research project, controlling for more of the variables and using a predetermined statistical method and well-defined hypothesis. For example, employ a “simple” field plot study. (Not so simple as it might at first appear if one has ever done it).
Beginning small, I expect that as one increases the scale and complexity, rapid divergence among research teams would be revealed. By the time they are done, one would be forced to conclude that global-scale, multidimensional, multivariate and time-dependent inferences are just a shot in the dark. If “climate” researchers were to be honest, they would conclude (indeed, already should be concluding) that there is no need for alarm, but with a tiny hint of doubt … “what if we are wrong?” Since in many of their eyes the consequences of being wrong could be very serious, they err on the side of extreme caution (translation: they tune their models toward high-side bounding estimates that overstate the likely impacts), then do not caution others as they draw inappropriate conclusions from what they are reading. That may have been reasonable 40 years ago, but ongoing measurement and research are showing equilibrium climate sensitivity is likely toward the low end of the range and that their models greatly exaggerate projected warming and the anthropogenic contribution compared to measured changes (even using flawed, biased “data sets”). Meanwhile, the rates of change are slow and steady and the magnitude is within the range of natural variability, meaning that this is NOT an emergency.
As a side note, I repeatedly read “research” summaries from various universities posted on WUWT, usually with ridiculous results and conclusions. Especially for those that are “impact of climate change on ___”, they are invariably amateurish, high-school-science-fair quality. No controls, no experimental design, no hypothesis, no predetermined test statistic … simply bad “science” by weak principal investigators trying to get some grad students their degrees and more publications, while feeding like pigs from the climate funding trough.
The so-called research question is hopelessly vague. This explains the results of the study.
David ==> But, you see, that is quite intentional! The study designers wanted a very Real World question – the kind of question (in the social sciences) quite likely to be asked by a Prime Minister or a President or a Congress.
It is a question being asked by Congressmen and Senators in the United States: “Will more immigration reduce public support for government provision of social policies?”
It is very like the questions being asked in CliSci: What has caused the warming since the Little Ice Age? How much of that warming has been anthropogenic? Will suppressing the use of fossil fuels harm or improve world/national economies?