'Robust' analysis isn't what it's cracked up to be: Top 10 ways to save science from its statistical self

In the wake of what Willis recently highlighted from Nassim Taleb, that “In fact errors are so convex that the contribution of a single additional variable could increase the total error more than the previous one,” I thought it relevant to share this evisceration of the over-reliance on statistical techniques in science, especially since our global surface temperature record is entirely a statistical construct.

Excerpts from the Science News article by Tom Siegfried:

Science is heroic. It fuels the economy, it feeds the world, it fights disease. Sure, it enables some unsavory stuff as well — knowledge confers power for bad as well as good — but on the whole, science deserves credit for providing the foundation underlying modern civilization’s comforts and conveniences.

But for all its heroic accomplishments, science has a tragic flaw: It does not always live up to the image it has created of itself. Science supposedly stands for allegiance to reason, logical rigor and the search for truth free from the dogmas of authority. Yet science in practice is largely subservient to journal-editor authority, riddled with dogma and oblivious to the logical lapses in its primary method of investigation: statistical analysis of experimental data for testing hypotheses. As a result, scientific studies are not as reliable as they pretend to be. Dogmatic devotion to traditional statistical methods is an Achilles heel that science resists acknowledging, thereby endangering its hero status in society.

More emphatically, an analysis of 100 results published in psychology journals shows that most of them evaporated when the same study was conducted again, as a news report in the journal Nature recently recounted. And then there’s the fiasco about changing attitudes toward gay marriage, reported in a (now retracted) paper apparently based on fabricated data.

But fraud is not the most prominent problem. More often, innocent factors can conspire to make a scientific finding difficult to reproduce, as my colleague Tina Hesman Saey recently documented in Science News. And even apart from those practical problems, statistical shortcomings guarantee that many findings will turn out to be bogus. As I’ve mentioned on many occasions, the standard statistical methods for evaluating evidence are usually misused, almost always misinterpreted and are not very informative even when they are used and interpreted correctly.

Nobody in the scientific world has articulated these issues more insightfully than psychologist Gerd Gigerenzer of the Max Planck Institute for Human Development in Berlin. In a recent paper written with Julian Marewski of the University of Lausanne, Gigerenzer delves into some of the reasons for this lamentable situation.

Above all else, their analysis suggests, the problems persist because the quest for “statistical significance” is mindless. “Determining significance has become a surrogate for good research,” Gigerenzer and Marewski write in the February issue of Journal of Management. Among multiple scientific communities, “statistical significance” has become an idol, worshiped as the path to truth. “Advocated as the only game in town, it is practiced in a compulsive, mechanical way — without judging whether it makes sense or not.”

Commonly, statistical significance is judged by computing a P value, the probability that the observed results (or results more extreme) would be obtained if no difference truly existed between the factors tested (such as a drug versus a placebo for treating a disease). But there are other approaches. Often researchers will compute confidence intervals — ranges much like the margin of error in public opinion polls. In some cases more sophisticated statistical testing may be applied. One school of statistical thought prefers the Bayesian approach, the standard method’s longtime rival.
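To make those two quantities concrete, here is a minimal sketch in Python (assuming NumPy and SciPy are available; the simulated “drug” and “placebo” samples and the normal-approximation interval are illustrative inventions, not taken from any study):

```python
# Minimal sketch: a P value and a confidence interval for the same
# comparison, computed from simulated data (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
drug = rng.normal(loc=1.0, scale=5.0, size=50)     # hypothetical treatment outcomes
placebo = rng.normal(loc=0.0, scale=5.0, size=50)  # hypothetical placebo outcomes

# P value: probability of a difference at least this extreme
# if no true difference existed between the groups.
t_stat, p_value = stats.ttest_ind(drug, placebo)
print(f"P value: {p_value:.3f}")

# Approximate 95% confidence interval for the mean difference,
# the "margin of error" style summary mentioned above.
diff = drug.mean() - placebo.mean()
se = np.sqrt(drug.var(ddof=1) / len(drug) + placebo.var(ddof=1) / len(placebo))
print(f"difference: {diff:.2f}, 95% CI: ({diff - 1.96 * se:.2f}, {diff + 1.96 * se:.2f})")
```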

Why don’t scientists do something about these problems? Contrary motivations! In one of the few popular books that grasp these statistical issues insightfully, physicist-turned-statistician Alex Reinhart points out that there are few rewards for scientists who resist the current statistical system.

“Unfortunate incentive structures … pressure scientists to rapidly publish small studies with slapdash statistical methods,” Reinhart writes in Statistics Done Wrong. “Promotions, tenure, raises, and job offers are all dependent on having a long list of publications in prestigious journals, so there is a strong incentive to publish promising results as soon as possible.”

And publishing papers requires playing the games refereed by journal editors.

“Journal editors attempt to judge which papers will have the greatest impact and interest and consequently those with the most surprising, controversial, or novel results,” Reinhart points out. “This is a recipe for truth inflation.”

Scientific publishing is therefore riddled with wrongness.

Read all of part 1 here

WORTHLESS: A P value is the probability of recording a result as large as, or more extreme than, the observed data if there is in fact no real effect. P values are not a reliable measure of evidence.

Excerpts from Part 2:

Statistics is to science as steroids are to baseball. Addictive poison. But at least baseball has attempted to remedy the problem. Science remains mostly in denial.

True, not all uses of statistics in science are evil, just as steroids are sometimes appropriate medicines. But one particular use of statistics — testing null hypotheses — deserves the same fate with science as Pete Rose got with baseball. Banishment.

Numerous experts have identified statistical testing of null hypotheses — the staple of scientific methodology — as a prime culprit in rendering many research findings irreproducible and, perhaps more often than not, erroneous. Many factors contribute to this abysmal situation. In the life sciences, for instance, problems with biological agents and reference materials are a major source of irreproducible results, a new report in PLOS Biology shows. But troubles with “data analysis and reporting” are also cited. As statistician Victoria Stodden recently documented, a variety of statistical issues lead to irreproducibility. And many of those issues center on null hypothesis testing. Rather than furthering scientific knowledge, null hypothesis testing virtually guarantees frequent faulty conclusions.

10. Ban P values

9. Emphasize estimation

8. Rethink confidence intervals

7. Improve meta-analyses

6. Create a Journal of Statistical Shame

5. Better guidelines for scientists and journal editors

4. Require preregistration of study designs

3. Promote better textbooks

2. Alter the incentive structure

1. Rethink media coverage of science

Read the reasoning behind the list in part 2 here


I would add one more to that top 10 list:

0. Ban the use of the word “robust” in science papers.

Given what we’ve just read here and from Nassim Taleb, and since climate science in particular seems to love that word, I think its use in many climate science papers is nothing more than a projection of the authors’ egos, not a supportable statement of statistical confidence.

One other point: one paragraph in part one from Tom Siegfried said this:

For science is still, in the long run, the superior strategy for establishing sound knowledge about nature. Over time, accumulating scientific evidence generally sorts out the sane from the inane. (In other words, climate science deniers and vaccine evaders aren’t justified by statistical snafus in individual studies.) Nevertheless, too many individual papers in peer-reviewed journals are no more reliable than public opinion polls before British elections.

That ugly label about climate skeptics mars an otherwise excellent article about science. It also suggests Mr. Siegfried hasn’t really looked into the issue with the same questioning (i.e., skepticism) that he applied to the abuse of statistics.

Should Mr. Siegfried read this, I’ll point out that many climate skeptics became climate skeptics once we started examining some of the shoddy statistical methods that were used, or outright invented, in climate science papers. The questionable statistical work of Dr. Michael Mann alone (coupled with the unquestioning media hype) has created legions of climate skeptics. Perhaps Mr. Siegfried should spend some time looking at the statistical critiques done by Stephen McIntyre, and tell us how things like a single tree sample, upside-down data or pre-screening data beget “robust” climate science before he uses the label “climate deniers” again.


227 Comments
July 12, 2015 4:00 pm

According to IPCC AR5, industrialized mankind’s share of the increase in atmospheric CO2 between 1750 and 2011 is somewhere between 4% and 196%, i.e. the IPCC hasn’t got a clue. The IPCC “adjusted” the assumptions, estimates and WAGs until they got the desired mean.

Mindert Eiting
July 12, 2015 5:12 pm

This discussion has been running for more than thirty years already. This may suggest that some problems are badly understood, one of them being the flaw of the Fisher procedure. Here the null hypothesis postulates zero (e.g. for the difference between two means). Zero means exactly 0, and therefore zero is not 0.00000001. The p-value depends on sample size: the greater the sample size, the smaller the p-value. Therefore, in a sufficiently large sample you will get a p-value sufficiently small (a highly significant result) even if the real difference between the two means is 0.00000001. Mathematics therefore tells us that to get a highly significant result you only need money for a large sample, so that you can reject a null hypothesis that was trivially false from the onset. The best alternative is the likelihood ratio test (also noted here), and especially the rather unknown sequential likelihood ratio test for two intervals. It is a matter of education in statistics to get these used in practice.
Nevertheless, I think that the biggest problem is not checking the assumptions made in the statistical procedure. What assumption is always there? Random sampling. This means that the conclusion from a statistical test is two-fold: the postulated value is not true and/or the sample is not random. But you never read the latter, which may point to a dirty secret. So the conclusion that temperatures increased ‘significantly’ over a certain period means that the amount of data is sufficiently large to earn the predicate ‘significant’ and (1) the slope of the linear regression is not exactly zero, and/or (2) the data may not be a random sample from a well-defined population (i.e. the data were selected, filtered, tortured, dependent, etc.).
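Eiting’s sample-size point can be demonstrated in a few lines. A sketch (assuming Python with NumPy and SciPy; the “true difference” of 0.01 is an arbitrary stand-in for his 0.00000001):

```python
# With a tiny but nonzero true difference, a "highly significant"
# P value can be bought simply by enlarging the sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_diff = 0.01  # trivially small real difference between group means
for n in (100, 10_000, 1_000_000):
    a = rng.normal(true_diff, 1.0, size=n)
    b = rng.normal(0.0, 1.0, size=n)
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>9,}: p = {p:.3g}")
# Typically, p is unremarkable at n = 100, while the same trivial
# effect becomes "highly significant" by n = 1,000,000.
```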

July 12, 2015 6:56 pm

Just in case anyone is getting a faulty “take home” message here: the problem isn’t that statistics doesn’t work, it is that it has been used wrongly.
Let’s say you have a theory, you make just one prediction, and it pans out at the 95% confidence level. There is only a 5% chance that you are mistaken. That is a real chance, though! The world would be foolish to wreck economies, starve the poor, destroy wilderness with bird- and bat-killing wind farms, etc., just on your 95% chance of being right. If the experiment can be repeated independently, and again pans out, you now have a 99.75% chance of being right – and it gets better the more you repeat the experiment independently and successfully.
But most of these “95%” results suffer from two afflictions (either one is enough to wreck the result, but most announcements I have seen suffer from both flaws):
1) There wasn’t just one single prediction being tested. They went out hunting for a significant result. Try just 14 tests where there is no connection between the supposed cause and the effect, and you have a greater than 50% chance of finding at least one 95%-significant correlation – where no correlation exists (the arithmetic is sketched after this comment). This is the basis of the extremely funny cartoon urederra posted above.
2) Even when just one posited connection is being tested, it is inherently irreproducible. Typically this happens in historical surveys – people who ate X also got cancer – that sort of thing. We can’t find another planet and check the correlation there as a second check – we used our data once and for all in the original survey.
It isn’t hard to see that climate “science” (can’t stop laughing!) is riddled with these errors. Good statisticians understand them – bad scientists ignore them – bad editors and journal publishers overlook them – because personal advancement is the goal, not science, not truth, not the welfare of other people and wildlife.
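Both of the numbers in the comment above are easy to check. A sketch (assuming Python with NumPy; note it takes the commenter’s reading of the P value at face value, which is itself one of the misinterpretations the article complains about):

```python
# Checking Ron House's arithmetic, taking his framing at face value.
import numpy as np

alpha = 0.05
# Two independent successful replications: 1 - 0.05**2 = 0.9975.
print(f"after two confirmations: {1 - alpha**2:.4f}")

# Point 1: run 14 independent tests of true-null hypotheses and the
# chance of at least one spurious "95% significant" hit exceeds 50%.
k = 14
print(f"analytic: 1 - (1 - alpha)**k = {1 - (1 - alpha) ** k:.3f}")  # about 0.512

# Monte Carlo check: under a true null, P values are uniform on [0, 1].
rng = np.random.default_rng(2)
p_values = rng.uniform(size=(100_000, k))
print(f"simulated: {(p_values < alpha).any(axis=1).mean():.3f}")
```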

RACookPE1978
Editor
Reply to  Ron House
July 12, 2015 7:46 pm

Ron House:
The problem is, the models (the religion of catastrology predictions over the past 18-1/2 years) have had only an 8% success record of being right!
After 1/6 of the prediction range in time, only 2 models out of 23 have been even close to the real world!

george e. smith
Reply to  Ron House
July 13, 2015 8:56 pm

So if your “prediction” has just two possibilities, either it happens, or it doesn’t happen.
Well, you may have to wait until the end of time to confirm that your prediction didn’t happen, whereas it might happen in the next attosecond, confirming your prediction.
So presumably, to be a sensible postulate to test, you must have predicted the event within some finite time window; otherwise your conjecture is just nonsense.
So OK, the time window comes and passes. Your conjecture is now moot; the window has elapsed.
And either the event happened or it didn’t. No other possibilities.
So now, in either case, what can you say about your statistics after the window has elapsed and one of those two possibilities eventuated?
I contend that the experiment neither proves nor disproves the validity of your 95% confidence level. It has told you nothing.
Statistics tells you nothing about an event that only happens (or doesn’t) once.
Now if your conjecture is that some event will happen in each of the next 100 time windows, with a 95% confidence level, then of course after those 100 intervals have come and gone, maybe you got 97 hits and 3 misses (see the sketch just below).
Now maybe it is meaningful to say that your statistical analysis was valid.
But clearly, the prediction of the happening or non-happening of just a single event is sheer balderdash.
Well, that’s my opinion.
If you buy just one lottery ticket, either you win or you don’t; the statistics are irrelevant. It is still really just a choice between two cases: a win, and a non-win.
Buy one ticket in a million different lotteries and the vast majority of them will lose.
G
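The contrast george e. smith draws between a single window and 100 windows is the difference between one Bernoulli draw and a binomial distribution. A sketch (assuming Python with SciPy; the 97-hits figure is his hypothetical):

```python
# One event is a single Bernoulli draw: an outcome of 0 or 1 hits can
# neither confirm nor refute a claimed 95% hit rate. One hundred
# windows give a binomial distribution you can actually test against.
from scipy.stats import binom

n, p = 100, 0.95
print(f"expected hits: {n * p:.0f}")
print(f"P(exactly 97 hits): {binom.pmf(97, n, p):.3f}")
print(f"P(97 or more hits): {1 - binom.cdf(96, n, p):.3f}")
# If 97 hits were wildly improbable under p = 0.95, the 95% confidence
# claim itself would be suspect; here it is an unremarkable outcome.
```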

July 12, 2015 8:55 pm

“Statistics are used much like a drunk uses a lamp-post:
for support, not illumination.”
Vin Scully

Editor
July 12, 2015 9:02 pm

Steven Mosher July 12, 2015 at 4:34 pm

“Damaging the lives of billions between now and 2100, killing millions each year for 85 years.
For nothing. To prevent no harm to no people, but to promulgate harm on the all.
Just so you (those who support the CAGW theories) can “feel good” about your religion of Gaia and death.”

You think that is settled science?
Saying the cost is high and that millions will die is just a form of economic catastrophism.
I suppose you have an econ MODEL to back up that claim… wa.
A proper skepticism would note that supposed damage from climate change is dependent on models.
Supposed damage from cutting CO2 is based on models.

Steven, the damage from artificially raising energy prices and from the World Bank denying loans for coal plants is already happening. We can go out and measure it. Your claim that the damage on the two sides is equally dependent on models ignores current, measurable, non-modeled real-world suffering happening right now.
This is the amazing thing to me, Mosh—the people willing to injure the poor today in hopes of imagined future cooling of a tenth of a degree or so in fifty years either claim to have the high moral ground or, like you, they claim the two sides are equal. They are not equal. One is present harm, the other is pie-in-the-sky promises of climate valhalla. Easy choice for me—if you want to fight CO2 go ahead, but doing it on the backs of the poor by taking any action that drives up energy prices is reprehensible.
w.

Reply to  Willis Eschenbach
July 13, 2015 12:05 am

It is touching how much concern the sceptics have for the poor.
/sarc
Actions taken to curb CO2 emissions will undoubtedly lead to somewhat lower GDP growth. This does not mean that the measures will make us, on average, poorer than we are today. The effect is that in the coming years we will, on average, be somewhat less wealthy than we could have become if we had done nothing to curb CO2.
However, how this GDP growth is distributed is another topic. Somewhat lower GDP growth does not necessarily have to hurt the poor.
/Jan

wayne
Reply to  Jan Kjetil Andersen
July 13, 2015 2:06 am

But it will hurt them, Jan, the poor, no way around it. I bet you are a well-off ba….rd and you will also have no problem justifying it in the end. Try living on less than $9000/yr and helping your kids/grandkids (some live on even less; I have for the last eight years), and see if utilities matter to you then, when it means less food, worse food, no medical, no travel, less heat, less cooling when needed. Be a real man, try it out yourself, see if you feel the same after a few years. Right now you are no different than what I wrote of below.

Reply to  Jan Kjetil Andersen
July 13, 2015 3:24 am

Wayne,
You can use the same argument for all kinds of emission controls. For instance, curbing the SO2 that causes acid rain also raises the cost of electricity. Should we let the power plants send their SO2 into the atmosphere so that electricity could be cheaper?
In addition, what about emission controls on cars? The catalytic converters used on car exhausts hurt the poor by making cars more expensive. Should we also abandon catalytic converters in solidarity with the poor?
Moreover, what about sewage treatment? That is also expensive.
All these emission controls make us a little less wealthy, but most of us think they are worth the money. I think there are other ways to compensate the poor so they are not hit by expensive emission controls for CO2.
/Jan

Reply to  Jan Kjetil Andersen
July 13, 2015 4:17 am

Jan Kjetil Andersen,
But we don’t exhale SO2, nor is it a basic requirement for most life on earth.
Plus, all of the emission equipment on cars is designed to reduce gasoline exhaust to mostly water and CO2.

Reply to  Jan Kjetil Andersen
July 13, 2015 8:30 am

Micro says:

But we don’t exhale SO2, nor is it a basic requirement for most life on earth.

I don’t think that is a very good argument Micro.
That a compound comes from humans and is essential for life does not mean that it cannot be harmful in elevated quantities.
After all, sewage comes from humans and contains nutrients that are essential for plant life, and it is harmful in elevated quantities, isn’t it?
/Jan

Reply to  Jan Kjetil Andersen
July 13, 2015 8:37 am

Jan Kjetil Andersen: It is touching how much concern the sceptics have for the poor.
/sarc

Everybody says that sarcastically; yet the fact remains that analysis of energy benefits and coal costs shows that restrictions on coal harm the wealth and health prospects of the poor. Analysis of restrictions on mercury and sulfates shows that, after the wealth benefits of burning coal have been achieved, additional health benefits can be achieved by restricting mercury and sulfates. No wealth and health benefits for restrictions on CO2 in under 50 years have been demonstrated, and longer term benefits are totally conjectural.
So you want to mock people who point out that restrictions on CO2 harm the poor. How exactly does your mockery help the poor you claim to care about?

RACookPE1978
Editor
Reply to  matthewrmarler
July 13, 2015 9:16 am

matthewrmarler
Your claim is dead wrong.
The statement can be made, of course, but it is based on false assumptions DESIGNED SPECIFICALLY to create the false conclusion you just repeated.
Federal hype, exaggeration and spin created (invented) to justify new regulations BY the bureaucrats (and their supporters in the media and other bureaucracies, such as yourself) who want to implement those regulations and enrich their power, budgets and influence, regardless of cost or benefit. Not scientific values based on real-world medical and economic sense.

Reply to  Jan Kjetil Andersen
July 14, 2015 5:56 am

Jan Kjetil Andersen:
If the action taken to reduce CO2 results in an undetectably small mitigation of climate change, are the costs still worth it?
Is there a proper pace of CO2 reductions today that can be shown to produce (at least) offsetting benefits in the future? At what point do the benefit/cost curves cross?
As long as we are talking about hypothetical harms that are contingent upon the precise manner and timing of implementation (for both CO2 emissions and regulations intended to reduce them), should you also consider that future technologies may produce better results with less economic loss?
In light of the questions above, have we truly answered the question “Can we afford to wait?”

Reply to  Jan Kjetil Andersen
July 14, 2015 1:00 pm

Opluso asks:

… should you also consider that future technologies may produce better results with less economic loss?
In light of the questions above, have we truly answered the question “Can we afford to wait?”

The next question is then: how long do we have to wait for those future technologies?
The answer to this last question will probably depend on whether we recognize reduced CO2 emissions as desirable, and if so, how much we are willing to pay for them.
If we don’t want to pay anything at all, there will be no incentive to develop these future technologies and then we may have to wait a very long time.
On the other hand, if we adopt a binding target to reduce the emissions by some quantity in let’s say 2030, there will be incentives to develop these technologies. Therefore I think it is time to act now by adopting binding targets.
/Jan

Reply to  Jan Kjetil Andersen
July 14, 2015 2:12 pm

If we don’t want to pay anything at all, there will be no incentive to develop these future technologies and then we may have to wait a very long time.
On the other hand, if we adopt a binding target to reduce the emissions by some quantity in let’s say 2030, there will be incentives to develop these technologies. Therefore I think it is time to act now by adopting binding targets.

I think we have 5, 10 or 20 years to better understand whether extra CO2 is a problem; we have also built hundreds of nuclear power plants, and we’ve been funding fusion for 50 years.
I don’t think solar and wind will ever support a modern world, but more time will allow better wind and solar to be developed. At least for right now there’s no evidence that doing something is critical, and if it were, we should build another 500-1000 nuclear power plants.
When we see the environmentalists protesting in favor of building nuclear, then I’ll believe they are truly worried about CO2; until then they are protesting modern society, and I’d love to see them go back to human-labor farming.

Reply to  Jan Kjetil Andersen
July 14, 2015 4:31 pm

JKA:

On the other hand, if we adopt a binding target to reduce the emissions by some quantity in let’s say 2030, there will be incentives to develop these technologies. Therefore I think it is time to act now by adopting binding targets.

Incentives need not be coupled to “binding targets”. In fact, technologies that improve existing systems are likely to pay for themselves (as many already do).
Binding targets are little more than a political fetish at this point.

wayne
Reply to  Willis Eschenbach
July 13, 2015 1:20 am

I think those are the best words you have ever spoken, Willis. Good, proper words that needed to be said to Mosher.
I just read recently that Jacques Cousteau estimated that killing 325 million a year is needed. Can you believe that? These people make me shudder. Right… preserve the planet for our children, while they have no problem even thinking of killing countless millions without a blink? Insane, evil, all of them.

AJB
Reply to  wayne
July 13, 2015 2:28 am

+10

Sleepalot
Reply to  wayne
July 13, 2015 4:44 am

That’s around 20 times the killing rate of WWII – which is generally considered to have been a bad thing.

Reply to  wayne
July 13, 2015 8:36 am

Wayne,
What do you mean by “all of them”? Do you think all non-sceptics are equal?
I agree that the oceanographer Jacques Cousteau had some very silly ideas about population control.
To do him some justice, the full quote, as given in a UNESCO interview in 1991, was:

. . . Should we eliminate suffering, diseases? The idea is beautiful, but perhaps not a benefit for the long term. We should not allow our dread of diseases to endanger the future of our species. This is a terrible thing to say. In order to stabilize world population, we must eliminate 350,000 people per day. It is a horrible thing to say, but it is just as bad not to say it.

http://www.abovetopsecret.com/forum/thread974511/pg1
It seems he thought that the only way to stop population growth is to increase mortality. Fortunately this is wrong.
The only viable way to stop population growth is to reduce fertility, and that comes as a natural result of lifting the world’s most backward nations up to a level with more education (especially for females), lower mortality and better health. Not the opposite.
/Jan

Reply to  wayne
July 14, 2015 5:18 pm

Jacques Cousteau would have wanted that 325 million to be all of a certain sort, no doubt. Eugenics had the same presumptuous arrogance attached to it, and we witnessed the murderous end-game predicted by those vehemently opposed to that vile creed.

Reply to  Willis Eschenbach
July 13, 2015 9:31 am

I think Ayn Rand’s best book was her first, We the Living. One of the most salient commonalities between traditional economic Marxism and the eKo-fascism we face is the sacrifice of the living for a supposed utopian future.
But the Marxists at least claimed their centrally enforced privation was for greater future productivity and quality of life. The eKo-fascists offer no such vision. Theirs is anti-life, from the molecule they demonize to the number remaining alive.

Reply to  Willis Eschenbach
July 14, 2015 4:23 pm

The world’s poor are getting less because of it, but it’s not on their backs—they don’t work for it. The taxpayer is getting scammed.

Khwarizmi
July 13, 2015 3:40 am

Jan,
It goes without saying that the people who spend the largest percentage of their income on energy are those who have the lowest incomes. So if you impose a system of mandatory indulgences in order to artificially increase the price of otherwise cheap energy, the most impact will be felt by the poor.
China lifted millions upon millions of its people out of poverty at a historically unprecedented rate over the last 2 decades, due mostly to actions designed to increase CO2 emissions.
Where do you suppose they would be today if they had adopted the opposite policy?

Reply to  Khwarizmi
July 13, 2015 10:26 am

Khwarizmi,
The development in China over the last 2-3 decades is highly welcome.
Admittedly, they have increased local pollution levels and CO2 emissions, but the good they have achieved by lifting so many people out of poverty is immensely more important. All these newly better-off people now contribute to lifting the economy in the rest of the world.
I think the increased pollution and CO2 is a temporary situation. As the Chinese get richer they will not accept breathing unhealthy smog, and CO2 emissions there are already starting to level out.
The best we can hope for is that India and other poor nations can achieve a similar development. That too will, for a time, cause more pollution and more CO2, until they eventually are wealthy enough to give priority to the environment.
But I think the richest countries in the world, like the US, most of Europe, Japan and Australia, can afford to curb both pollution and CO2 emissions now, and that this should be implemented in a way that spares the poor.
/Jan

Dudley Horscroft
Reply to  Jan Kjetil Andersen
July 14, 2015 7:36 am

Jan – 13/7 at 1036
“But I think the richest countries in the world, like the US, most of Europe, Japan and Australia, can afford to curb both pollution and CO2 emissions now, and that this should be implemented in a way that spares the poor.”
You are mixing things up that should not be mixed. In the richer countries there has been a massive reduction in pollution – remember the projections of New York being knee-deep in horse manure if the population of Manhattan kept on growing. But streetcars were invented, and then motor cars – and lo and behold, there is no horse manure. Remember London and the abolition of “smog”.
But CO2 is not a pollutant, and there is no reason to curb its production. By all means try to produce it more efficiently – it is a good fertilizer, and it is always worthwhile to reduce the resource costs of creating what is perhaps the “universal” fertilizer.
CO2 emissions and “pollution” should not normally be linked by “and”.

Reply to  Jan Kjetil Andersen
July 14, 2015 12:30 pm

Dudley
Many harmless substances are considered pollutants when they are found out of place or in excess quantities.
Horse manure is one of them; it too is a good fertilizer, yet it is considered a pollutant if you have too much of it.
I think that CO2 can also rightfully be considered a pollutant.
It comes as a byproduct of the same processes that produce ordinary pollutants like CO, SO2, NOx and particulates, and it is deadly in very high concentrations.
How harmful or harmless it is in lower concentrations is a big question. It is no less than what this blog and many others are about, so I don’t think we can finish a discussion about that here, but I do not think we can conclude with certainty that it is only harmless.
/Jan

Mervyn
July 13, 2015 7:17 am

The problem with science is that there is science that proves things (the good science) and science that does not prove things (the bad science). Climate change is all about politics relying on science that does not prove things.

Reply to  Mervyn
July 13, 2015 8:59 am

Proof?
That is for mathematics and liquor.
The best we can aim for in empirical science is falsification.
As Karl Popper has said: “A theory in the empirical sciences can never be proven, but it can be falsified, meaning that it can and should be scrutinized by decisive experiments.”
/Jan

Reply to  Jan Kjetil Andersen
July 13, 2015 9:52 am

Wow, quoting Michael Mann!
http://cosy.com/Science/QuantTrumpsQual.jpg
And distorting Popper’s point — which, btw, the CAGW crowd has spectacularly failed.

Reply to  Jan Kjetil Andersen
July 13, 2015 11:09 am

Bob
How do you think that I am distorting Poppers’s point?
It is generally accepted in science theory that proof is out of reach in the empirical sciences,
See Stanford: http://plato.stanford.edu/entries/popper/
And Wiki:
https://en.wikipedia.org/wiki/Scientific_evidence
https://en.wikipedia.org/wiki/Karl_Popper
/Jan

Reply to  Jan Kjetil Andersen
July 13, 2015 9:10 pm

First, it is clear that no amount of falsifying evidence can penetrate the warmist skull.
But more to the point, the precise computations upon which our world runs, including the predictions of when dawn will occur tomorrow at any location on the globe, are believed with existential certainty because they have survived centuries of potential falsification. They have been winnowed; they have been “scrutinized by decisive experiments” and have survived.

Reply to  Jan Kjetil Andersen
July 13, 2015 10:28 pm

Bob
If you take a class in the philosophy of science you will hear that many students protest with arguments similar to yours when they hear this for the first time. The professors have a hard time convincing the students that Popper, Hempel and the other gurus of the philosophy of science were not out of their minds.
However, given time for reflection, most of them realize that the philosophers of science are right after all. In a strict logical sense, you can never find evidence that gives a 100% absolute proof. There will always be a chance that you overlooked something and that new evidence will turn up.
Repetition increases the confidence in the result, but it is not proof.
If 1000 independent researchers have confirmed the result, you have high confidence, but in a strict sense it is not a proof. Nor will it be when the 2,000th or 3,000th researcher also confirms it.
/Jan

Reply to  Jan Kjetil Andersen
July 13, 2015 11:18 pm

Among the handful of professors from whom I learned the most, I feel Don Campbell edges out the others. He gave me my first job doing APL (calculating statistics of discontinuities in time series) and also funded the writing up of what would have been my PhD thesis after I had lost my tenure in grad school. It was largely through him that I learned of his friend Popper and their similar thoughts on what Campbell coined “evolutionary epistemology”.
My overall response to your pedantic point is “so what?”.
Even Popper could not argue that Newton’s equations were falsified by Einstein, simply that they were shown to be just a limiting case of a more general insight. And both Popper and Campbell would agree that Newton’s quantitative (mathematical) derivations of orbital motions from strikingly simple fundamental relationships are profound and precise over an astoundingly large domain, and have survived centuries of potential falsification over that domain, yet have not been falsified.
And if you want to bet me that orbital mechanics will be falsified by the sun failing to come up tomorrow, I’ll be happy to give you very, very long odds.
Michael Mann objects to math because it falsifies his claim to infame.

July 13, 2015 8:38 am

Nuts.
First, “results published in psychology journals” means that this isn’t about science; it’s about world-shaking theories derived from comparisons between the two halves of a sample of three.
Second, the penultimate author here is railing against the misuse of p-values. Duh. I believe the world champion exponent on this is Dr. Briggs (see http://wmbriggs.com/ ). Any of his articles on the subject will make the same point – but do so without significant (!) reliance on social “science” examples.

johann wundersamer
July 13, 2015 9:46 am

Oh well, Gigerenzer.
Made a lot of money on heuristics – what a lucky man he was.
Now the small nation of 10 million Greeks has blasted the world economy – according to 80 million Germans, Gigerenzer included.
Heuristics – find scapegoats.
____
SPIEGEL: How do you test whether a heuristic is actually any good?
Gigerenzer: Again, an example: one of the simplest rules of thumb is the so-called recognition heuristic. It rests on the principle that we are more inclined to trust names we know. We tested this on the stock market. We took passers-by in Munich and in Chicago and had them assemble stock portfolios based solely on which company names they had already heard of. And lo and behold: on average, the recognition portfolio, based on the mere name recognition of semi-ignorant people, made more money than professionally managed funds. I have never earned so much money in my life.
____
Objective, neutral science? Please correct me where I’m wrong. Hans


jon
July 13, 2015 7:37 pm

“Damaging the lives of billions between now and 2100, killing millions each year for 85 years.
For nothing. To prevent no harm to no people, but to promulgate harm on the all.
Just so you (those who support the CAGW theories) can “feel good” about your religion of Gaia and death.”
Oh come on, that’s just the old “Believe in God because if you’re wrong you’ll go to hell” crap recycled. Of course, since this is a religion we’re discussing rather than science, maybe that’s appropriate, but it is still ridiculous.
Using that logic all the AGW-ers should go with the Ice Age scenario because if they’re wrong the results would be horrendous.
How do you choose between the alternatives if we give up the scientific approach?

Gary Pearse
July 14, 2015 8:17 am

End-of-the-world doomsters (biologists like Ehrlich, economists like Malthus, climatologists saving the planet, etc.) have always been wrong. This is because they leave out the enormous and unfailing role of human ingenuity at problem-solving from their simplistic, linear and two-dimensional view of the world (someone said a ‘petri dish’ world). Biologists study the habits and ecology of animals and plants, count and analyze droppings, etc., but such study, although important and useful, gives them no expertise or insight whatsoever into what the future will bring. Economists, like meteorologists, have some short-term success in forecasting, and the world that has unfolded constantly takes them by surprise. Consensus climatologists are the worst because they have espoused a theory that anchors them and have spent most of their time fending off challenges to it, so that we have basically nothing new in 35 years of intensive study with a budget thousands of times larger than that of the Manhattan Project. All the world’s most pressing problems would have been solved if this money had gone toward such tasks.
We didn’t get buried in Malthus’s horse manure – hey, the poor horse suddenly all but disappeared with the discovery of petroleum and IC engines. Mass starvation and paucity of resources didn’t end civilization as we know it. Human ingenuity has all but wiped out famine and disease and delivered raw materials in abundance. We didn’t freeze to death in the dark by 2000 and we won’t burn up in 2100.
This and countless other doom scenarios NEVER came to pass. I don’t think it bold to say it is AXIOMATIC THAT EXTREME PREDICTIONS OF HUMAN-CAUSED DOOM CANNOT COME TRUE, because overpowering, dynamic ingenuity is absent as a force in the doomsters’ thinking. Unconstrained by this first-order principal component, their thoughts (and heartfelt concerns) soar through the roof of reality.
Let me add two more items to the ‘ten things’ to save science from its statistical self.
1) pass the forecast through the filter of the Axiom above. The forecast should not be one of doom. Nature is the keeper of doom scenarios and human ingenuity will even be able to deal with some of these.
2) explicitly invoke the Le Chatelier principle first. Le Châtelier’s principle states that if a dynamic equilibrium is disturbed by changing the conditions, the position of equilibrium shifts to counteract the change (in part). Wiki gives this definition: “Any change in status quo prompts an opposing reaction in the responding system.”
https://en.wikipedia.org/wiki/Le_Chatelier%27s_principle
This is a pure “governor”-like effect, à la Willis Eschenbach. It is even a predictor of Newton’s Laws of Motion, market behavior, and a broad range of things, including the initiation of human ingenuity. To use it, take the doom prediction and cut it in half because of human nature’s propensity for exaggeration and emotional inertia when people want to make a point. Then cut it at least in half again to account for the omitted Le Chatelier effect. Finally, if anything sticks up that needs attention, human ingenuity will grind it down to a small bump.

KhumoBraes
July 15, 2015 9:30 pm

Cook et al.’s dodgy stats are certainly what drove me to conclude: “It appears that more than 97% of climate scientists use stats incorrectly.”
