In the wake of what Willis recently pointed out from Nassim Taleb, that "In fact errors are so convex that the contribution of a single additional variable could increase the total error more than the previous one," I thought it relevant to share this evisceration of the over-reliance on statistical techniques in science, especially since our global surface temperature record is entirely a statistical construct.
Excerpts from the Science News article by Tom Siegfried:
Science is heroic. It fuels the economy, it feeds the world, it fights disease. Sure, it enables some unsavory stuff as well — knowledge confers power for bad as well as good — but on the whole, science deserves credit for providing the foundation underlying modern civilization’s comforts and conveniences.
But for all its heroic accomplishments, science has a tragic flaw: It does not always live up to the image it has created of itself. Science supposedly stands for allegiance to reason, logical rigor and the search for truth free from the dogmas of authority. Yet science in practice is largely subservient to journal-editor authority, riddled with dogma and oblivious to the logical lapses in its primary method of investigation: statistical analysis of experimental data for testing hypotheses. As a result, scientific studies are not as reliable as they pretend to be. Dogmatic devotion to traditional statistical methods is an Achilles heel that science resists acknowledging, thereby endangering its hero status in society.
…
More emphatically, an analysis of 100 results published in psychology journals shows that most of them evaporated when the same study was conducted again, as a news report in the journal Nature recently recounted. And then there’s the fiasco about changing attitudes toward gay marriage, reported in a (now retracted) paper apparently based on fabricated data.
But fraud is not the most prominent problem. More often, innocent factors can conspire to make a scientific finding difficult to reproduce, as my colleague Tina Hesman Saey recently documented in Science News. And even apart from those practical problems, statistical shortcomings guarantee that many findings will turn out to be bogus. As I’ve mentioned on many occasions, the standard statistical methods for evaluating evidence are usually misused, almost always misinterpreted and are not very informative even when they are used and interpreted correctly.
Nobody in the scientific world has articulated these issues more insightfully than psychologist Gerd Gigerenzer of the Max Planck Institute for Human Development in Berlin. In a recent paper written with Julian Marewski of the University of Lausanne, Gigerenzer delves into some of the reasons for this lamentable situation.
Above all else, their analysis suggests, the problems persist because the quest for "statistical significance" is mindless. "Determining significance has become a surrogate for good research," Gigerenzer and Marewski write in the February issue of Journal of Management. Among multiple scientific communities, "statistical significance" has become an idol, worshiped as the path to truth. "Advocated as the only game in town, it is practiced in a compulsive, mechanical way — without judging whether it makes sense or not."
Commonly, statistical significance is judged by computing a P value, the probability that the observed results (or results more extreme) would be obtained if no difference truly existed between the factors tested (such as a drug versus a placebo for treating a disease). But there are other approaches. Often researchers will compute confidence intervals — ranges much like the margin of error in public opinion polls. In some cases more sophisticated statistical testing may be applied. One school of statistical thought prefers the Bayesian approach, the standard method’s longtime rival.
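To make those two ideas concrete, here is a minimal sketch in Python. The drug/placebo counts are invented for illustration, and the two-proportion z-test shown is just one conventional way to get a P value:

```python
# Minimal sketch of a P value and a confidence interval, using made-up
# drug-vs-placebo recovery counts (60/100 vs 48/100). Requires scipy.
import numpy as np
from scipy import stats

drug_recover, drug_n = 60, 100
placebo_recover, placebo_n = 48, 100

# P value: probability of a difference at least this large if the drug
# truly did nothing (two-proportion z-test with a pooled standard error).
p1, p2 = drug_recover / drug_n, placebo_recover / placebo_n
pooled = (drug_recover + placebo_recover) / (drug_n + placebo_n)
se_pooled = np.sqrt(pooled * (1 - pooled) * (1 / drug_n + 1 / placebo_n))
z = (p1 - p2) / se_pooled
p_value = 2 * stats.norm.sf(abs(z))          # two-sided

# Confidence interval: a range for the true difference, analogous to a
# poll's margin of error (95%, unpooled standard error).
se = np.sqrt(p1 * (1 - p1) / drug_n + p2 * (1 - p2) / placebo_n)
ci = (p1 - p2 - 1.96 * se, p1 - p2 + 1.96 * se)

print(f"difference = {p1 - p2:.2f}, z = {z:.2f}, p = {p_value:.3f}")
print(f"95% CI for the difference: {ci[0]:.2f} to {ci[1]:.2f}")
```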
…
Why don’t scientists do something about these problems? Contrary motivations! In one of the few popular books that grasp these statistical issues insightfully, physicist-turned-statistician Alex Reinhart points out that there are few rewards for scientists who resist the current statistical system.
“Unfortunate incentive structures … pressure scientists to rapidly publish small studies with slapdash statistical methods,” Reinhart writes in Statistics Done Wrong. “Promotions, tenure, raises, and job offers are all dependent on having a long list of publications in prestigious journals, so there is a strong incentive to publish promising results as soon as possible.”
And publishing papers requires playing the games refereed by journal editors.
“Journal editors attempt to judge which papers will have the greatest impact and interest and consequently those with the most surprising, controversial, or novel results,” Reinhart points out. “This is a recipe for truth inflation.”
Scientific publishing is therefore riddled with wrongness.
Read all of part 1 here

Excerpts from Part 2:
Statistics is to science as steroids are to baseball. Addictive poison. But at least baseball has attempted to remedy the problem. Science remains mostly in denial.
True, not all uses of statistics in science are evil, just as steroids are sometimes appropriate medicines. But one particular use of statistics — testing null hypotheses — deserves the same fate with science as Pete Rose got with baseball. Banishment.
Numerous experts have identified statistical testing of null hypotheses — the staple of scientific methodology — as a prime culprit in rendering many research findings irreproducible and, perhaps more often than not, erroneous. Many factors contribute to this abysmal situation. In the life sciences, for instance, problems with biological agents and reference materials are a major source of irreproducible results, a new report in PLOS Biology shows. But troubles with “data analysis and reporting” are also cited. As statistician Victoria Stodden recently documented, a variety of statistical issues lead to irreproducibility. And many of those issues center on null hypothesis testing. Rather than furthering scientific knowledge, null hypothesis testing virtually guarantees frequent faulty conclusions.
10. Ban P values
9. Emphasize estimation
8. Rethink confidence intervals
7. Improve meta-analyses
6. Create a Journal of Statistical Shame
5. Better guidelines for scientists and journal editors
4. Require preregistration of study designs
3. Promote better textbooks
2. Alter the incentive structure
1. Rethink media coverage of science
Read the reasoning behind the list in part 2 here
I would add one more to that top 10 list:
0. Ban the use of the word “robust” in science papers.
Given what we've just read here and from Nassim Taleb, and given how fond climate science in particular is of that word, I think its appearance in many climate science papers is nothing more than a projection of the authors' egos, and not a supportable statement of statistical confidence.
One other point: one paragraph in part one from Tom Siegfried said this:
For science is still, in the long run, the superior strategy for establishing sound knowledge about nature. Over time, accumulating scientific evidence generally sorts out the sane from the inane. (In other words, climate science deniers and vaccine evaders aren’t justified by statistical snafus in individual studies.) Nevertheless, too many individual papers in peer-reviewed journals are no more reliable than public opinion polls before British elections.
That ugly label about climate skeptics mars an otherwise excellent article about science. It also suggests Mr. Siegfried hasn't really looked into the issue with the same questioning (i.e. skepticism) that he applied to the abuse of statistics.
Should Mr. Siegfried read this, I'll point out that many climate skeptics became climate skeptics once we started examining some of the shoddy statistical methods that were used, or outright invented, in climate science papers. The questionable statistical work of Dr. Michael Mann alone (coupled with the unquestioning media hype) has created legions of climate skeptics. Perhaps Mr. Siegfried should spend some time looking at the statistical critiques done by Stephen McIntyre, and tell us how things like a single tree sample, upside-down data, or pre-screened data beget "robust" climate science before he uses the label "climate deniers" again.
Roh, roh, rohbusted.
=============
“0. Ban the use of the word “robust” in science papers.”
Spot on. This article could simply have been entitled "Robust analysis isn't".
In fact, it would be a shame to ban it, since it is one of the best early indicators that the author is doing politics, not science. One thing you can be sure of when a scientist chooses to describe his results as "robust" is that they are anything but. Robust is a word for politicians and lawyers. Saying a paper is robust is about as convincing as a politician announcing a "robust enquiry".
If we wanted a No.0 for that list I’d say ban discussing “trends” in climate papers. Then they’d have to think a bit. It is also what most of the trivial waffle about significance relates to.
Siegfried rather blows his objectivity credentials by lumping climate deniers in with vaccine evaders (why not go the whole crap and cite some Lew-paper reference to moon landings while he's in there?).
Still, I’m sure he thinks his comparison is “robust”.
Hi Mike –
I don't disagree with either you or our host AW's suggested ban on the word "robust" from science dialog. But I will support "robust" as a meaningful word in engineering. For engineering, the terms "robust process" or "robust design" refer to the process or design that is least sensitive to process/design parameter variations compared to alternative processes or designs. Determining the robust process/design, however, depends on a combination of solid experimental data and a verified and validated model of the process or design. The robust operating point for either is determined by optimization methods or other systems engineering methods.
My only point is that where defined, as it is in engineering, the word “robust” is not a dirty word! It’s only “dirty” where it is used in vague, undefined ways that obfuscate or project “soundness” that exceeds reality.
Thanks
Dan
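For what it's worth, here is a toy sketch of that engineering sense of "robust": the operating point least sensitive to parameter scatter. The process model and the numbers below are entirely hypothetical, not any particular real design:

```python
# Toy illustration of a "robust" operating point: pick the setpoint whose
# output is least sensitive to uncontrolled parameter variation.
import numpy as np

rng = np.random.default_rng(0)

def process_output(setpoint, disturbance):
    # Hypothetical validated process model: the disturbance shifts the
    # effective setpoint away from its nominal value.
    return 10.0 - (setpoint - 3.0 + disturbance) ** 2

candidates = [2.0, 3.0, 4.5]                  # candidate operating points
disturbance = rng.normal(0.0, 0.2, 10_000)    # simulated parameter scatter

for sp in candidates:
    out = process_output(sp, disturbance)
    print(f"setpoint {sp}: mean output {out.mean():6.2f}, spread {out.std():.3f}")

# The robust choice is the setpoint with the smallest spread for an
# acceptable mean output; here that is the flat spot of the response at 3.0.
```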
Again, this is my thing, this time in my role as a hypothesis tester in the program Dieharder, a random number generator tester that relies heavily on p-values because, well, it is a hypothesis tester testing the null hypothesis “The current random number generator is a perfect random number generator”, which is what one is trying to falsify with a test.
I’ve also read Statistics Done Wrong and am very sympathetic to its thesis. But even p-values are useful if one recognizes the importance of George Marsaglia’s remark in the Diehard documentation: “p happens”.
"p happens" means that one simply should not take a p of 0.05 terribly seriously. That's a 1 in 20 shot. This sort of thing happens all of the time. Who would play Russian roulette with a gun with 20 chambers? Only a very stupid person, or one where winning came with huge rewards.
The big question then is when one SHOULD take a small p-value seriously.
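A quick simulation makes the "p happens" point concrete: when the null hypothesis is exactly true, a p below 0.05 still turns up about once in every 20 tests. This is only a sketch; two equal-mean normal samples and a t-test stand in for whatever test is actually being run:

```python
# "p happens": under a true null, p < 0.05 still appears about 1 time in 20.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hits = 0
n_tests = 10_000
for _ in range(n_tests):
    a = rng.normal(0.0, 1.0, 30)
    b = rng.normal(0.0, 1.0, 30)          # drawn from the same population as a
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        hits += 1

print(f"fraction of 'significant' results under a true null: {hits / n_tests:.3f}")
# Expect roughly 0.05: one Russian-roulette chamber in twenty.
```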
When it's replicated by a number of independent measurements (trials). This means it cannot happen in a casino, and unfortunately this cannot be the case in climate science as long as anyone is adjusting any data. It's not just that the adjustments are biased, but the fact that they are dependent on a "body of knowledge" (sic) that by its very nature lacks the characteristics of independent observation and interpretation.
Having had a chance to look at Taleb's draft version (without the benefit of a good cold lager), I believe I see his point and agree with it (still have not worked through the formal math, but I see where it's going). In certain types of models it is very possible for the error processes to overwhelm the substantive information; the inclusion of additional variables just compounds the problem. This is certainly consistent with my field (telecommunications), where I work with a lot of nominal variables and find that if we can isolate the main effects correctly then it's been a good day; anything more than that is just a crap-shoot.
Similarly, the word COULD sees no limits on its use in the climate change hustle.
A p-value isn’t a probability. It assumes some distribution of an unobservable parameter. In the frequentist world it is not permitted to assign probabilities to unobservables.
But even if it could be a probability, simply because it's about an unobservable it doesn't say much about the model other than that it might have good parameter values. IOW: P(parameter | model, data) says nothing about the model other than that it has nice parameters. Saying that the model must represent anything in the real world because the parameters are the best is a lot like saying some car will perform well because it was made with high-quality screws and has a great paint job.
What’s really needed is P(model | parameters, data). The only way to get this is to see how well the model predicts. Nothing else will do.
Wish I could edit.
simply because it’s about an observable
Should have been
simply because it’s about an unobservable
[Fixed. -w.]
Thanks Willis
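Here is a minimal sketch of the point above about P(model | parameters, data): both candidate models below get their "best" parameters from the fit, but only out-of-sample prediction separates them. The toy data and toy models are assumptions purely for illustration:

```python
# Judge a model by how it predicts data it never saw, not by how nice its
# fitted parameters look. The "true" process is quadratic; the candidates
# are a straight line and a quadratic, both fit to a training split.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 4, 80)
y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(0, 1.0, x.size)

train, test = slice(0, 40), slice(40, 80)    # simple holdout split

for degree in (1, 2):
    coeffs = np.polyfit(x[train], y[train], degree)     # "best" parameters
    in_sample = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)
    out_sample = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
    print(f"degree {degree}: in-sample MSE {in_sample:6.2f}, "
          f"out-of-sample MSE {out_sample:8.2f}")

# Both fits have optimal parameters given their form; only the
# out-of-sample error reveals which model resembles the real process.
```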
Well I like what famous New Zealand scientist Ernest (Lord) Rutherford had to say about statistics.
” If you have to use statistics, you should have done a better experiment. ”
I would go even further (strictly my opinion mind you):
If you are using statistics, you are NOT doing science. You simply are practicing a specific branch of mathematics, albeit a quite rigorous and exact branch of mathematics. No uncertainty exists in the output of statistical calculations. They are always done on a given data set of exact numbers (listed in the data set), so the end result of performing any specific statistical mathematics algorithm is always an exact result.
As for what that result might mean; well that is pure speculation on the part of the statistician.
NO physical system is responsive to or even aware of the present value (now) of any statistical computation; it can only respond to the current value of each and every one of the pertinent physical parameters, and can act only on those values. For example, any statistical average value of some variable almost certainly occurred some time in the past, and the physical system can have no knowledge or memory of what that value is or when it might have occurred.
So attaching a meaning to some statistical calculation is simply in the eye of the beholder. It is all fiction, and it can predict no future event. At best it might suggest how surprised an observer will be when he eventually learns what a future event is, at the time it becomes a now event.
So as I said, if you are doing statistics, you are not doing science. No experiment can be devised to test the output of your quite fictional calculation. (ALL of mathematics is fictional. Nothing in mathematics exists anywhere in the physical universe.)
g >> G
I think it was George Box (statistician) who said that all models are wrong, but some models are more useful than others. Sure math doesn’t exist in reality, but approximations (i.e., models) are useful, as are estimates of uncertainty.
Simple problem.
I have a product. It is priced at 50 bucks.
At $50, 80% of the people who click on the link buy it.
So, with 100 clicks I get 80 sales and $4,000 in revenue.
I do A/B testing.
For half the people coming to the site I price it at $55.00, and 40 of 50 people buy it.
Do I have enough evidence to raise the price? Should I?
In practical problems it is always easy to determine the Should: when should I bet with 75%, 80%, or 99% confidence? Because one can calculate the cost and benefit of being right or wrong.
In science, what is the cost of being wrong?
Steve Mosher
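For what it's worth, the arithmetic behind that A/B example (using the numbers given: 80 of 100 at $50, 40 of 50 at $55) can be sketched as below; a two-proportion z-test is just one conventional way to frame the "evidence" question:

```python
# A/B pricing arithmetic from the comment above: conversion evidence vs
# revenue per click.
import numpy as np
from scipy import stats

conv_a, n_a, price_a = 80, 100, 50.0
conv_b, n_b, price_b = 40, 50, 55.0

p_a, p_b = conv_a / n_a, conv_b / n_b
pooled = (conv_a + conv_b) / (n_a + n_b)
se = np.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * stats.norm.sf(abs(z))

print(f"conversion: {p_a:.0%} vs {p_b:.0%}, p = {p_value:.2f}")
print(f"revenue per click: ${p_a * price_a:.2f} vs ${p_b * price_b:.2f}")

# Both arms convert 80%, so the "evidence" question is moot here; the
# decision rests on revenue per click and the cost of being wrong,
# which is exactly the commenter's point.
```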
Damaging the lives of billions between now and 2100, killing millions each year for 85 years.
For nothing. To prevent no harm to no people, but to promulgate harm on all.
Just so you (those who support the CAGW theories) can “feel good” about your religion of Gaia and death.
Hmm, increase the price to 55 from 50? Hmm, what's your COGS? Will the increase toss you into a higher tax bracket? NOT so simple…
“Damaging the lives of billions between now and 2100, killing millions each year for 85 years.
For nothing. To prevent no harm to no people, but to promulgate harm on all.
Just so you (those who support the CAGW theories) can “feel good” about your religion of Gaia and death.”
You think that is settled science?
Saying the Cost is high and that millions will die is just a form of economic catastrophism.
I suppose you have an econ MODEL to back up that claim… wa.
A proper skepticism would note that supposed damage from climate change is dependent on models
Supposed damage from cutting CO2 is based on models.
It’s rather uncertain.
The decision about what to do.. is not science.. it can be supported by science.. but the decision is not
a scientific one. It’s political and pragmatic.
Guesswork
Its good to have a pen and a phone.
“Hmm, increase the price to 55 from 50? Hmm, what's your COGS? Will the increase toss you into a higher tax bracket? NOT so simple…”
Still simple. The point is NOBODY who solves practical problems cares a whit about 95% or 94.9%
or 99.9999999%
95% is just a tradition. Not written in stone.
“Just so you (those who support the CAGW theories) can “feel good” about your religion of Gaia and death.”
Huh?
Are U 95% certain that I believe in CAGW? Oops, U are wrong.
GAIA? a bunch of crap.
Try sticking to the topic. The cost of being wrong in science qua science.
Steven, you were the one to go off into “marketing” and “economics”,
Your method was condescending, I used the term “COGS” cost of goods sold.
In the example you used the percentages are only a small part of the decision making process.
You had to know I was having a little fun with you. The example you used was not the best for the point you were trying to make.
Oh and thank you for the reply.
Michael
Dumb example. What they really do is sell the exact same product under two different brands and model numbers at different prices at the same time. It’s called price discrimination, and it makes marketing go around.
It's sort of like dickering without the dick. Or maybe the other way around.
Harold, years ago, I worked for a place that manufactured orthopedic devices. In a test, one of our engineers broke the mold. Sigh, we (the toolmakers) told him it would not withstand a 20% increase in pressure for his test shot. After it split in half, I asked the foreman if he needed me to work through the night to get the replacement mold built. He told me, "No. We manufactured the devices for BOTH of the major competitors; we would just ramp up production for the "other" company until the new mold was built." God, I love capitalism!
I laugh to this day.
michael
“Steven, you were the one to go off into “marketing” and “economics”,
yes to illustrate the space where the word SHOULD gets used in making decisions about
confidence intervals. That is, where values are in play.
Your method was condescending, I used the term “COGS” cost of goods sold.
In the example you used the percentages are only a small part of the decision making process.
Note: I didn't mean for you to answer the question, rather to contemplate those areas where the OP's question (SHOULD) makes sense. I fully understand all the details required in making these decisions.
You had to know I was having a little fun with you. The example you used was not the best for the point you were trying to make.
The point was simple. Where does the "SHOULD" question make sense?
Oh and thank you for the reply.
Michael
Have a good night Steven, the wars will wait until tomorrow.
And again, thank you for answering me. Your other comment on "designing chips" has caught my interest; something to look into, as I am ignorant of the subject.
michael
Well I’m not sure what you mean by being wrong.
You can mess up an experiment and misread a thermometer. Not to worry; people trying to replicate your results will discover your error. The cost to you is egg on the face.
You can be wrong by postulating a theory of how some system or process works. Once again; not to worry. Other people doing experiments to test your theory will discover it simply doesn’t work that way. The cost to you is no Nobel Prize award for a meritorious discovery.
So you change the theory to bring it better into line with what the experimental observations say is actually happening.
That is how science has always operated.
g
Y’all, Steve is raising a valid point. Quit bashing him just because of history.
Yes, there is a price of getting it wrong. In this case, it’s my interpretation that the price of being wrong is far less in the do-nothing scenario than in the action scenario. The results of action are more immediate and the effects more certain (notably in increased poverty and decreased ability to raise oneself from it as well as decreased aid from rich nations and damage to the environment due to increased deforestation from biofuels, increased land use from wind farms, and overall reduction in available environmental funds), versus the nebulous and questionable effects of CO2 (which might or might not alter rainfall patterns for the worse in some areas and for the better in others, along with a host of minor issues including negligible increase in temperature and even smaller increase in sea level rise rate).
It's not calculable even in theory due to the uncertainties involved. The point that the article is making is that trying to distinguish between tiny discrepancies is pointless and leads to self-delusion. In this case, however, the costs aren't even close. It's obviously better not to try to reduce CO2 through wind, solar, or biofuels, and carbon markets are pure financial smokescreens. However, nuclear and hydroelectric power are useful sources in their own right.
You are doing it wrong… You set the price at $80. Then you lower it $5 each time sales start to erode. Now you have the HP pricing model from when they released their RPN calculators in the '80s. All sales above marginal revenue are economic profit. 🙂
Glad to see you’re weighing in on this topic, rgb. Especially after seeing Tom Siegfried’s knee-jerk statement.
I was fortunate to have George Marsaglia as my advisor in graduate school. A great professor!
Were I forced to play Russian Roulette, one using a 20 chamber revolver would be preferred over a standard 6 shot revolver.
Still, even a standard revolver beats using a semi-auto. One of the funniest news stories I ever read (only the one about the guy who got angry at the soda machine and started rocking it to get his drink is funnier) was about the guy in Chicago who, after seeing Deer Hunter, decided to play the game with an automatic pistol. That’s what I call a self correcting problem.
rgbatduke says: “p happens means that one simply should not take a p of 0.05 terribly seriously”
As always your posts are both insightful and educational … thank you! So I thought I'd pass along what to me is a humorous (or is it sad) anecdote related to the "p-value" comment above. I'll try to make it brief:
I spent my career at a major aerospace company that invoked "Six Sigma" in the 90's. The blackest of black belts issued a "best practice" memorandum describing a "ground-breaking analysis" to improve aeropropulsion engine performance; the study found "five key controlling variables."
So, consider the experimental design: y was engine performance and x was a matrix of 100 variables … ranging from the physically realistic to the improbable (not scientifically identified). The study results identified 5 of the 100 variables as statistically significant at the 95% level using multiple regression (no surprise there). I then challenged the grand six-sigma guru by talking about just what a 95% confidence level means and Tukey's teachings regarding the multiple comparison effect. If the 100 variables were simply random variables, the results would likely be the same.
I'm sure it's obvious to most: frequentist statistics are predicated on making a pre-defined hypothesis and then testing it; it's not a hunting license to search for correlations within a database that meet a "p" value that itself perverts the notion of probability.
Dan
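The anecdote is easy to reproduce in miniature. The sketch below is a simplified stand-in (one-at-a-time correlations rather than the full multiple regression described above, with a made-up sample size): screen 100 pure-noise variables against a pure-noise response at the 95% level and roughly 5 of them will come out "significant" by chance alone, which is Tukey's multiple-comparison point:

```python
# 100 noise "variables" screened against a noise response at the 95% level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_obs, n_vars = 200, 100

y = rng.normal(size=n_obs)                   # "engine performance" (pure noise)
X = rng.normal(size=(n_obs, n_vars))         # 100 candidate variables (pure noise)

significant = 0
for j in range(n_vars):
    _, p = stats.pearsonr(X[:, j], y)
    if p < 0.05:
        significant += 1

print(f"'significant' noise variables at the 95% level: {significant} of {n_vars}")
```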
The appropriate p value needs to be determined by analyzing Type I errors (p value or alpha) and Type II errors (beta). A p value of 0.05 may be far superior to p=0.01 in terms of statistical power if the beta value (probability of Type II error) grows very quickly as the Type I error is decreased by a lower p value.
In terms of Russian roulette, consider that your p value diminishes as the number of chambers is raised from, say, 20 to 100, but the Type II error may grow very fast. Again, if the Russian roulette analogy is interpreted as p = 1/(number of chambers), then beta may be something such as the cylinder exploding when the chamber walls become thinner. The point is that in hypothesis testing, as the p value decreases the beta value tends to increase.
For a simple discussion of the relation between Type 1 type errors and Type II errors please see:
http://statistics.about.com/od/Inferential-Statistics/a/Type-I-And-Type-II-Errors.htm and here is an explanation that I think is more understandable:
http://www.cliffsnotes.com/math/statistics/principles-of-testing/type-i-and-ii-errors
Finally a larger random sample size lowers the probability of either error.
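A small sketch of that alpha/beta trade-off for a simple one-sample z-test; the 0.3-standard-deviation effect size and the sample sizes are assumptions chosen only for illustration:

```python
# Tightening alpha raises beta at a given sample size; a larger sample
# lowers both, which is the commenter's closing point.
import numpy as np
from scipy import stats

def type_ii_error(alpha, effect, n):
    """Probability of missing a true effect (beta) for a two-sided z-test."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    shift = effect * np.sqrt(n)
    power = stats.norm.sf(z_crit - shift) + stats.norm.cdf(-z_crit - shift)
    return 1 - power

for n in (50, 200):
    for alpha in (0.05, 0.01):
        beta = type_ii_error(alpha, effect=0.3, n=n)
        print(f"n = {n:3d}, alpha = {alpha:.2f}:  beta = {beta:.2f}")
```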
Great article. It explains how people in lab coats could publish the notion that inhaling smoke scattered and diffused in the air could be WORSE for a person than inhaling 100% of the smoke directly into the lungs, without really anybody saying WUWT!
That the smoke which hadn't gone through the filter could be worse is easy to see, if in fact the filter was actually filtering out harmful particles. But the smoker would breathe the same unfiltered smoke while sitting in the room where they were smoking, so they would be getting it as well as the filtered smoke. So many points of not just WUWT but WTF with the controls on those studies. Too many "ifs" and "assumptions" in those studies.
Maybe you’ve never observed someone smoking. Almost all of the “smoke” generated when a cigarette is used comes from being exhaled by the smoker, thus through the cigarette filter (if the cigarette is filtered) and the lung filter of the smoker. (Which is why first hand cigarette smoke causes lung cancer, and 2nd hand doesn’t…at least “statistically” speaking)
“Which is why first hand cigarette smoke causes lung cancer”
I hate to quibble, especially about something as detestable as cigarettes, but I think I must.
Smoking is associated with an increased risk of lung cancer.
It is incorrect to say it causes it, as a stand alone statement.
I hate cigarettes, and do not and have never smoked, but in fact a lot of people smoke all day long, every day, and have done so since they were teens, and do not get cancer. I have four siblings for which this is the case.
Just sayin’.
TRM, even a relative novice like myself can see that the basis of these experiments is nonsense. I don't care; hate cigarettes all you want. All you want to do is be sure that science is telling you the truth, or something close to it. It does not require that you make up an ad hoc reason that you should BELIEVE the conclusions.
To some of us it really has little to do with health and everything to do with dealing with the smoker’s bad manners. A smoker’s sense of smell and taste is deadened by smoking. They assume that everyone around them has the same handicap and fail to understand that cigarettes in particular stink. There are tobaccos that don’t, but not in cigarettes. One trick I used to convince my two-pack-a-day dad was to scent trail him in the dark. Cigarette smoke also flavors every bl**** thing you put in your mouth. If you have eaten in a European restaurant, and if you’re a non-smoker, then you have had the wonderful experience of tasting first class wine-plus-ash-tray, truly excellent venison-plus-ash-tray, etc.
Though you would have to balance the effect of one cigarette in your mouth against the dozens of cigarettes burning in a public place, and the fact that you are inhaling the secondhand smoke all the time you are in there instead of just for 5min every hour or so.
In assuming that the one cig you smoke is worse than the many others in the room, you are assuming a linear relationship between smoke concentration and effect. That might be incorrect; after all, the effect of CO2 on the greenhouse effect is not linear but near to saturation. It may be that the effect of smoke is chronic, that continuous exposure to smoke irritates the lungs more than a high concentration for a short time. Without investigation of these factors, no one knows.
Though, it has been suggested that continuous exposure to lower levels of smoke from unventilated wood fires is what caused hunter-gatherer society humans to have a lifespan of only half the modern one. The traditional line of archaeologists was that this was down to the harsh living conditions, but the reality might just be that one single factor was responsible, a factor which the people themselves failed to recognise and therefore took no action to remedy.
All of which in principle shows just how difficult it is to obtain meaningful statistical results.
I believe he was actually referring to the background PM studies that show 30 ppb particulate matter in the air causes an endless list of health problems. Those are the issue. Secondhand smoke, especially in concentrated areas, is much more supported (in fact, if these suggestions were taken into account, I think the EPA would have needed much less statistical trickery to get it labeled as a health hazard).
The issue is really the knowledge that a pack a day takes 5-10 years off your life, but the EPA is claiming 30 ppb in the air (a small fraction of the exposure that the smokers get) can be responsible for hundreds of thousands of premature deaths annually.
Forget it, you can’t save science.
It’s dead, it shot itself in the head.
Perhaps Science was playing Russian Roulette with rgbatduke’s 20 chamber revolver with 19 bullets, or, more effectively….playing said roulette with a Glock….which generates a “winner” every time.
+1000
Sorry, but you'd have to explain that one to UK readers:
A Glock is a pistol
OK, what’s a ‘pistol’ ?
A pistol is a short gun.
Gun? Please explain.
Ian,
A Glock is an Austrian-manufactured semi-automatic handgun. Semiautomatics fire from a spring-loaded magazine (cartridges are stacked atop one another) which feeds the chamber of the firearm. There are some differences between single action (the operator must load the first round into the chamber by working the action, normally a slide on a handgun, which also cocks the hammer) and double action (the trigger mechanism cocks the hammer; I believe you still have to work the action to load the chamber, as I only shoot SA autos), but essentially, if you have a semiauto pistol with a round in the chamber and the safety off, the odds are almost 100% that the weapon will discharge. When playing RR those odds are only good for the spectators betting on the action.
The problem is that people use the term science about unscientific methods.
The 500 year old method of inductivism has been demonstrated to be flawed.
However it seems to be dominating the works by IPCC.
Not enough people endorse the empirical method of Karl Popper.
The empirical method is about making precise, falsifiable statements and then putting all effort into trying to falsify the statement. The empirical content of the theory is higher the more the theory forbids. A theory is merited by the attempts at falsification it has survived.
Popper must be turning in his grave by the confidence and agreement statements by IPCC.
And even worse – even so called scientific societies issue unscientific statements on climate change.
Science is not dead – it is just in a very sorry state.
And even worse than that – 36 out of 65 Nobel laureates signed an unscientific statement.
At least there was some reluctance: 29 of them demonstrated some integrity by not signing, giving at least some hope in scientific integrity.
Have you reversed those two numbers?
The more you look at it…..the more skeptical you become
Until you reach the point of the most asinine ridiculous piece of fabricated BS you ever thought you would like to see…………
Offline: What is medicine’s 5 sigma?
Richard Horton
DOI: http://dx.doi.org/10.1016/S0140-6736(15)60696-1
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(15)60696-1/fulltext?rss%3Dyes
“… this symposium—on the reproducibility and reliability of biomedical research, held at the Wellcome Trust in London last week—touched on one of the most sensitive issues in science today: the idea that something has gone fundamentally wrong with one of our greatest human creations.
The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”. … The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data. Journal editors deserve their fair share of criticism too. … Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations. Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication. National assessment procedures, such as the Research Excellence Framework, incentivise bad practices. And individual scientists, including their most senior leaders, do little to alter a research culture that occasionally veers close to misconduct.
“One of the most convincing proposals came from outside the biomedical community. Tony Weidberg is a Professor of Particle Physics at Oxford. Following several high-profile errors, the particle physics community now invests great effort into intensive checking and re-checking of data prior to publication. By filtering results through independent working groups, physicists are encouraged to criticise. … Weidberg worried we set the bar for results in biomedicine far too low. In particle physics, significance is set at 5 sigma—a p value of 3 × 10–7 or 1 in 3·5 million (if the result is not true, this is the probability that the data would have been as extreme as they are). …”
–http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(15)60696-1/fulltext?rss%3Dyes
“poor methods get results”
Very well and truly said.
From the second link, ” insist on replicability statements in grant applications and research papers”
would go a long way to clean up this mess.
An improvement in the media coverage of science would need better journalists. Not going to happen.
At a minimum we need journalists who believe that their job is to inform rather than to educate and influence.
Journalists, like scientists, want to eat at the "cool table" in the school cafeteria, so they come up with stories the higher-ups want to see, and nothing sells better than imminent catastrophe. Global warming is right up there. The fact that the end of the world keeps getting put off every few years bothers them not at all.
What happened with watchdog journalists – nobody left in mainstream media?
I have not seen any mention of the fact that we are in an interglacial period during which temperature is expected to rise and continue to rise…until it doesn’t, and then back to another ice age.
Given the fact that the CAGWers have "hijacked" every degree K of Interglacial Global Warming (IGW) that has occurred post-1880 … they dare not make mention of the existence of any IGW.
I have brought it up several times in recent weeks on WUWT comment threads, like here:
http://wattsupwiththat.com/2015/06/27/i-only-ask-because-i-want-to-know/#comment-1975512
And many make mention of the apparent wish of warmistas to have the world return to the “preindustrial” temperature regime which prevailed during the LIA, and to hence stay there forever.
If they get their wish, I suspect a lot of people will regret it.
Interglacial T is not "expected" to rise until it doesn't, and then fall back to another ice age, nor has it. For three thousand years each warm peak has been cooler than the preceding one.
There is not one size fits all for statistics.
P values at the 1/20 threshold (2 sigma) are not what people like unless the study is really expensive or the benefits make it worth banging on ahead (the old hackneyed example being that people are dying while a treatment is held up).
The other time low p-values don't matter is in early experiments in a paper that are later confirmed (hopefully in the same publication) by different approaches.
But really, most people push things to the 3 and 4 sigma level. (better than 1/100 to better than 1/1000.) I see no problems when you are in that region, as long as the exp design is okay.
Estimation is great sometimes, but most tests are experimental against control and estimation often makes no sense.
something that is going to happen is not a probability!
Science has certainly become tainted by marketing in recent decades, just compare the famous sentence in the Watson-Crick paper on the double helix, which went something like “our results may shed some light on the mechanism for inheritance” with the hype found in many modern climate science papers.
Science papers should simply describe what the authors have done, and let the readers decide if this is Nobel material or worthless junk.
I concur many times over. It's compounded by academic cheer-leading … I've seen too many recent dissertations with the word revolutionary or groundbreaking associated with the results. Back when I earned my PhD we were told: 1) This is not your magnum opus, it's just the start of what can be a good career. 2) The purpose of the dissertation is to establish, to your committee and the senior faculty, that you understand the process of designing and executing quality original research. If there is sound logic behind your research questions and your research design, and yet you fail to reject the null hypothesis, then your dissertation research was a success, since we all have learned something.
Which is the basis for the Nobel physics, chemistry, and medicine awards often waiting decades to ensure the discoveries stand up to replication. Unfortunately for many, this consigns the discoverers to no Nobel, as they are posthumously awarded.
… err….they are not posthumously awarded.
The greatest problem with the use of P-values is that researchers use them to measure the value of observations as evidence for or against a hypothesis when they are not designed for this purpose, and using them in this manner leads to some very illogical results. Neyman-Pearson inference suffers from the same problem, as does the use of confidence intervals. They are all part and parcel of the same line of thinking.
Bayesian methods are meant to answer a completely different question. In effect, Bayesian methods tell a person how to modify prior beliefs and remain consistent with probability in light of new data.
The only method that measures data as evidence for or against a particular hypothesis is the method of likelihood. Most scientific questions can be framed as a test of one hypothesis against another, so likelihood should be the first thought in analysis, but P-values, confidence intervals and Neyman-Pearson inference are simpler to do, and more widely accepted. Well, bleeding patients was widely accepted practice at one time as well.
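As a minimal illustration of the likelihood approach described above, the same data can be weighed as evidence for one simple hypothesis against another, rather than against a lone null. The coin-flip data and the two candidate hypotheses below are invented for the sketch:

```python
# Likelihood ratio: how strongly do the data favour one hypothesis over another?
from scipy import stats

heads, flips = 62, 100
h1, h2 = 0.5, 0.7                       # two competing hypotheses for P(heads)

lik1 = stats.binom.pmf(heads, flips, h1)
lik2 = stats.binom.pmf(heads, flips, h2)

print(f"likelihood under p={h1}: {lik1:.3e}")
print(f"likelihood under p={h2}: {lik2:.3e}")
print(f"likelihood ratio (p={h1} vs p={h2}): {lik1 / lik2:.2f}")

# Unlike a p-value, the ratio compares two explicit hypotheses and says
# how strongly the observed data favour one over the other.
```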
No amount of statistics will save science from fraud, bias, bad models, circular reasoning and other logical fallacies, or from the corrupting influence that sources of support, including the Federal government, bring.
Statistics always tells you only the properties of numbers you already know. They contain no information about numbers you don't yet know. And all of the information you will ever have about that set of numbers is contained in the set of numbers itself. Anything added is information about statistical mathematics algorithms, not about the data set numbers.
It has been said that the signal with the highest possible information content, is white Gaussian noise.
It is totally unpredictable and no future sample of the signal can be deduced; so it is 100% information about itself. You only know what it is once you have it.
An example of how adding "color" as a variable increases the number of ways to find a false result, making a false result more likely. How can any study with 1-in-20 odds of being wrong be called robust?
https://xkcd.com/882/
Rehashing Old Baloney Using Statistical Trickery.
Reprehensibly Obfuscated Bollocks Undermining Scientific Thought.
Some particle physics experiment might be considered valuable if you simply get the right order of magnitude of an effect. But if you want to know the wavelength of the red Cadmium line measured against the standard meter bar, maybe eight significant digits isn’t close enough. ( I think it is; or was 6438.4696 Angstrom units) But don’t quote me on that.
g
#s 7, 8, 9, and 10 are wrong. Adoption of these points would make research worse, not better. Testing the null hypothesis is the gold standard and should remain so. Trouble is there are many ways to get to a p value, and the climate game is to choose the one that has the best chance of getting to a p value you like. Worst cases are when there are none that lead to a p value without smelling bad so estimations and confidence intervals are used instead. All four of those points should be trashed for just one: Ban the use of statistical test SHOPPING and instead outline recommended statistical testing based on study design. I would further emphasize that any study that resorts to estimations and confidence intervals should be labeled as snake oil salesmanship and be banned from ever seeing the light of day in a research journal.
I do believe you have it exactly right. No amount of research guidelines and formalisms can protect an endeavor from unethical behavior or a deliberate effort to deceive. Statistics in general, and wee p-values in particular, are not the problem here, as abused as they sometimes are.
I also like to remind people that ClimateScience! is not science, it is politics. What those people do should not be held against the real sciences. And that is my Robust conclusion.
Pamela:
Good comment and suggestion.
It'd be a terrific comment and suggestion if you added it directly to the author Tom Siegfried's article, where the list originates. Perhaps email (or tweet) the author too?
You give me too much credit. My experience with statistics comes from one audited graduate level class (but I did all the assignments and earned an A), running my data through my own purchased Statview SE program for my published research, and an oddity in my brain: math intuition. Any statistician could easily run circles around this one hit wonder.
Pam, all you’re suggesting is that there is a ‘gold standard’ statistical method. And there are two corollaries to this:
1) If such a gold standard exists, then you would already be recommending it.
2) That statistical correlations can establish scientific causation.
But no matter how we turn the subject around, statistics will always fail to properly test the null hypothesis. For if there is a relation between two variables, then we can simply construct an experiment, fiddle one, and watch the other dance. Within science there are only three purposes for statistics:
1) To do initial inquiry to see if some large and unexpected something is unavoidably present and worth chasing further.
2) To put distributions on the error bounds between the mathematical description of the fiddling mentioned above and the received observational values.
3) When it is impossible to actually perform an experiment.
And that last is where we get into the very serious problem of statistics in science: For how can it be science if an experiment is impossible?
Which is not to say that statistics don’t have their uses. But it only shines when we can state that we know what should or must be the case, because we have engineered it successfully, and then compare it to what we actually get.
That is not what I am suggesting at all. There are a variety of methods for determining statistical significance that can accurately identify results falling outside chance occurrence. The issue is methods ill-matched to the type of research design. If there is a gold standard it is this: pay great attention to your choice of p-value methods, lest ill-chosen methods lead you down the primrose path to error.
“Testing the null hypothesis is the gold standard and should remain so. ”
should?
Interesting argument or rather interesting LACK of an argument.
So much of scientific understanding comes in areas where there is no null that it is odd to make the assertion that you do.
The "null" is a tool: sometimes handy, other times, as the author argues, it leads you astray.
If there is a “why”, there is a null.
It's worse than that. Before those 4 things are even on a young post-doc's horizon, in today's environment they must publish to become competitive for that first big government research grant. Publishing null hypothesis affirmations is the short path to a job in industry.
“but on the whole, science deserves credit for providing the foundation underlying modern civilization’s comforts and conveniences.”
Disagree – Fossil Fuels, mainly oil, provided the leap for humanity.
Without the products of science, fossil fuels were just gooey black stuff that made a mess of your shoes.
MarkW:
Fossil fuels required the steam engine for their chemical energy to do useful work.
“Science owes more to the steam engine than the steam engine owes to science. ”
Attributed to Lawrence Joseph Henderson
Richard
Fuel science, anyone?
“Son, I have one word for you. Plastics.”
+1 to anyone who can name the movie.
The Graduate
The Graduate (1967), starring Dustin Hoffman, Anne Bancroft, and Katharine Ross; directed by Mike Nichols
http://www.imdb.com/title/tt0061722/
Just shows how old I am. I saw the original release in first run.
+1 to Walter. For the youngsters, the movie caused quite the sensation; it was downright scandalous.
Hey, hey, Mrs. Robinson…
“How to remove a black stocking”?
They used Paul Simon’s version of Mrs Robinson for the movie.
kookookachoo!
kookookachoo comes from the "I Am the Walrus" tune by the Beatles.
Don’t feel bad. I saw the original release of “King Kong vs Godzilla”.
Coo coo ca-choo, Mrs. Robinson
Jesus loves you more than you will know wo wo wo
God bless you, please, Mrs. Robinson
Heaven holds a place for those who pray
Hey hey hey, hey hey hey
I am the Walrus was Goo-goo ga job.
Koo koo ka choo was the graduate.
So solly.
goo goo ga joob
And engineering.
Wrong. Try the horse collar, stirrups, and selective breeding for animals (horses!) and plants. Societies go through many phases, one advancement the logical outcome of earlier ones; you need some earlier "techs" to successfully use others.
Note the lack of the wheel in South American societies. This list is but an off-the-cuff snarl; others can add to or substitute from it. Lastly, I think it is better to say it is the engineer rather than the scientist who is paramount in human development.
michael
+1
In every single case, the construction of a scientific experiment is an engineering exercise. Take away the engineering and all that’s left in science is philosophy.
“vaccine evaders”
So vaccines are prison now? And they openly admit that?
Or maybe it’s a religion? Apostates will be eradicated?
I gather this has to do with hockey sticks and the IPCC's GCMs. Interesting, but beside the point. All these statistical analyses, five-significant-figure anomalies, and results beyond the resolution of the instrument attempt to back-justify CAGW. Something akin to the Wonderland Queen's verdict first, trial later. What really matters:
“These questions have been settled by science.” Surgeon General
IPCC AR5 TS.6 Key Uncertainties. IPCC doesn’t think the science is settled. There is a huge amount of known and unknown unknowns.
According to IPCC AR5 industrialized mankind’s share of the increase in atmospheric CO2 between 1750 and 2011 is somewhere between 4% and 196%, i.e. IPCC hasn’t got a clue. IPCC “adjusted” the assumptions, estimates and wags until they got the desired mean.
At 2 W/m^2 CO2’s contribution to the global heat balance is insignificant compared to the heat handling power of the oceans and clouds. CO2’s nothing but a bee fart in a hurricane.
The hiatus/pause/lull/stasis/slowdown (which the IPCC acknowledges as fact) makes it pretty clear that the IPCC's GCMs are not credible.
The APS workshop of Jan 2014 concluded the science is not settled. (Yes, I read it all.)
Getting through the 1/2014 APS workshop minutes is a 570 page tough slog. During this workshop some of the top climate change experts candidly spoke about IPCC AR5. Basically they expressed some rather serious doubts about the quality of the models, observational data, the hiatus/pause/lull/stasis, the breadth and depth of uncertainties, and the waning scientific credibility of the entire political and social CAGW hysteria. Both IPCC AR5 & the APS minutes are easy to find and download.
https://stevengoddard.wordpress.com/2015/07/04/dr-bill-gray-responds-to-pope-francis/
Tom Siegfried makes the same mistake classically made by social constructivists (not saying he’s one), which is to confuse and conflate science with scientists.
Here’s the jump: “But for all its heroic accomplishments, science has a tragic flaw: It does not always live up to the image it has created of itself. Science supposedly stands for allegiance to reason, logical rigor and the search for truth free from the dogmas of authority. Yet science in practice is largely subservient to journal-editor authority, riddled with dogma and oblivious to the logical lapses in its primary method of investigation:…” Typical. Start the sentence about science, finish it with the behavior of (some) scientists.
Science is about the interplay of a falsifiable theory, and reproducible data. That interplay and the practice of it by scientists, are where all our advances have originated. The fact that individual scientists have foibles is lovely for sociologists, but does not reflect at all on science itself.
Second, Psychology is not a branch of science. The fact that, “100 results published in psychology journals shows that most of them evaporated when the same study was conducted again” says nothing about a sad state in science.
Likewise, Epidemiology is not a branch of science, and epidemiological correlation-chasing is not part of the scientific method. It’s standard practice in these ‘science is flawed‘ articles, to immediately offer abuses in Psychology and medical epidemiology as proof texts. But those fields are not part of science.
That logical disconnect between the accusation (science is flawed) and the evidence (after all, look at Psychology… blah, blah, blah) is typical, and is a kind of intellectual bait-and-switch. Look for it. It’s always there, it’s always a sign the author has an ax to grind about science, and it’s always a dead giveaway that the thesis is bankrupt.
In science, predictions are deduced from hypotheses and theories. The falsifiability of hypotheses and theories means that they make logically coherent but extremely unlikely statements about how the physical universe works. Predictive statements imply one and only one observable outcome. Physical deductions and predictions imply causality and invite observations and experiments as tests of the causal claim.
Epidemiological correlations are inductive inferences. They imply no causality and predict no observables. The rooster crows and the sun rises. The correlation is strong. Whoop-de-do. Does anyone think the former is causal to the latter? That’s epidemiology and that’s about the entire causal content of Psychology.
Science itself is not undercut by the fact that some scientists are foiblicious, or that medical epidemiology is infested with crockness. Those arguing from the latter to the former merely demonstrate a non-understanding of science itself.
And those people would be yet another group, which is science journalists. Usually (but not always), they have a very thin understanding of science, and don’t understand how thin it is. So ‘science’ is what they imagine it to be, and like a groupie spurned, when “science” doesn’t behave they way they expect it to, they lash out like this.
Technology journalism is frequently just as bad, with endless stories about perpetual motion machines, etc.
The whole journalistic enterprise is a disaster. It’s gotten to the point where the only journalists who can cobble together a coherent sentence are the ones with JD degrees.
Pat Frank:
Well said. Thankyou.
Richard
Pat writes “Science is about the interplay of a falsifiable theory, and reproducible data.”
Whilst this is true, it's not specific enough. So in climate "science" there is extensive use of models, because largely they have no choice. People like Mosher don't seem to understand the difference between a model that organizes data into an understandable form (e.g. a global temperature anomaly calculation) and a model that makes a projection. He just sees that models are OK in general.
So when climate science uses projection models it no longer uses data, except in the sense that it's now data from a model and has nothing to do with reality.
Climate science should be starting with the hypothesis that their GCMs model reality, but they seem to have skipped that step and moved on to the assumption that they do, drawing results relating to reality from them. It's pretty obvious that they don't, in so many respects, that it's outrageous to move to the next step of drawing climate conclusions using them.
At that point climate “science” became non-science for a great many papers. And since they all draw upon one another, the whole field is tainted.
can we post xkcd comic strips?
one of my favs
It just became one of mine.
Ditto!
Perfect example of significance shopping when dealing with a multivariate design done in such a way as to garner the result you want. Reminds me of the one sensor in the UK that caused the recent "hottest EVA" media blitz.
Now prove that the green food coloring doesn’t react with the pectin to produce some zitogen.
Hey, let’s talk about GMOs…
“Above all else, their analysis suggests, the problems persist because the quest for "statistical significance" is mindless. "Determining significance has become a surrogate for good research," Gigerenzer and Marewski write in the February issue of Journal of Management. Among multiple scientific communities, "statistical significance" has become an idol, worshiped as the path to truth. "Advocated as the only game in town, it is practiced in a compulsive, mechanical way — without judging whether it makes sense or not."”
Climate “Science” has neither the statistical support, nor does it make sense. Climate “science” fails on both fronts.
1) Statistical analysis of every ice core I’ve looked at covering the Holocene shows that a) we are well below the temperature peak of the past 15k years and b) there is absolutely nothing statistically significant about the temperature variation over the past 50 and 150 years.
2) The geologic record covering 600 million years demonstrates that atmospheric CO2 was as high as 7,000 ppm and we never had runaway global temperatures; in fact, temperatures never got above 22 degrees C for a sustained period of time. We fell into an ice age when CO2 was 4,000 ppm.
3) Mother Nature isn't stupid; Earth and life have survived billions of years. The absorption band of CO2 is centered around 15 microns IR, which is consistent with a black body of -80 degrees C. Only a small fraction of the radiation at 10 microns IR (the average Earth temperature) is absorbed by CO2, and as the Earth warms, the peak shifts to the left and CO2 absorbs even less of the radiation. Unless my eyes deceive me, it looks like CO2 doesn't even absorb IR at 10 microns.
http://2.bp.blogspot.com/-z0099m2A1dI/Un9NrLLKIZI/AAAAAAAAAtU/oYmZOjynqPk/s1600/spectra.png
http://clivebest.com/blog/wp-content/uploads/2010/01/595px-atmospheric_transmission.png
BTW, if CO2 doesn’t absorb at 10 microns, how in the hell can it be the cause of warming? CO2 would have to be absorbing radiation hotter than the earth to warm it. The more I look into this “science” the more nonsensical it becomes.
It looks like CO2 stops absorbing at 13 microns.
http://homeclimateanalysis.blogspot.com/2010/09/co2-absorption-band.html
13 microns is consistent with a black body of about -50 °C.
http://www.spectralcalc.com/blackbody_calculator/plots/guest1201988774.png
This “science” is pure garbage. How can IR radiation characteristic of a -50 °C black body warm the globe? What a joke.
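For reference, the -80 °C and -50 °C figures above are just Wien’s displacement law; here is a minimal Python check of that arithmetic, and only the arithmetic — whether any warming inference follows is exactly what the replies below dispute:

# Wien's displacement law: lambda_peak * T = b, with b ~ 2898 micron-kelvin.
b = 2897.8  # micron * K

for wavelength_um in (15.0, 13.0, 10.0):
    t_kelvin = b / wavelength_um
    print(f"{wavelength_um:4.1f} microns -> peak of a {t_kelvin:5.0f} K "
          f"({t_kelvin - 273.15:6.1f} C) blackbody")

# 15 microns -> ~193 K (about -80 C); 13 microns -> ~223 K (about -50 C);
# 10 microns -> ~290 K, roughly Earth's mean surface temperature.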
Well, you need a bit of retuning there, CO2islife.
If the earth’s surface really radiates a 288 K black-body-like spectrum, then as you say, it peaks (on a wavelength scale graph) at 10 microns, about 20 times the sun’s peak wavelength.
But 98% of that total radiated spectrum energy lies between one half of the peak wavelength (5.0 microns) and 8 times the peak wavelength (80 microns). Only about 1% remains beyond each of those two end points, and only 25% of the total energy is at wavelengths less than the peak of 10 microns.
So the spectral radiant emittance at 15 microns, the CO2 band, is quite substantial. In fact my handy dandy BB calculator says that at 1.5 times the peak wavelength the spectral radiant intensity is about 70% of the peak value.
So indeed CO2 has a lot more than peanuts to feed on.
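That ~70% figure is easy to verify from the Planck formula alone; a minimal sketch in Python, assuming nothing beyond a 288 K black-body surface as above:

import math

def planck(wl_m, t_k):
    # Planck spectral radiance B(lambda, T), in W per (m^2 * sr * m).
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wl_m**5) / math.expm1(h * c / (wl_m * k * t_k))

T = 288.0                      # K, approximate mean surface temperature
peak_um = 2897.8 / T           # Wien peak, ~10.06 microns
ratio = planck(15e-6, T) / planck(peak_um * 1e-6, T)
print(f"B(15 um) / B(peak) at 288 K = {ratio:.2f}")   # ~0.72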
g
George,
A question for you. If all the radiation from the sun were only at the electromagnetic frequency of 15 microns, what is the maximum temperature that the earth could reach? If the answer is warmer than -80 °C then I think you have answered CO2islife’s challenge.
My understanding of CO2islife’s point is that if you direct a fire-hose of water at a building and the water is at a temperature of, say, 50 °C, the building cannot warm up to more than 50 °C, no matter how much water you hit it with.
I don’t know the answer by the way and I hope that you do.
Drat, I wrote frequency but meant wavelength as I hope you might have guessed!
I do have a non-scientific theory, in that I observe that a microwave cooker can heat water to boiling point even though microwaves have a longer wavelength than 15 microns. But I have to admit I don’t know enough about microwaves to know if they also have a broader spectrum of emissions.
“I observe that a microwave cooker can heat water to boiling point even though microwaves have a longer wavelength than 15 microns. But I have to admit I don’t know enough about microwaves to know if they also have a broader spectrum of emissions.”
Microwaves heat dipole molecules by making them rotate back and forth with the oscillating field, basically friction. Thermal photons (IR) don’t heat like that.
But I think microwaves show that there’s more than one way photons can transfer energy.
If the Sun’s blackbody output had a peak wavelength of 15 µ, it would have a surface temp of about 192 K, and the Earth would be limited to a max of 192 K.
Well Bernard, that is an interesting question. Not one pertaining to reality though.
You ask: if ALL (my emfarsis) the radiation from the sun were at 15 micron wavelength, what would be the maximum obtainable Temperature?
So I’ll get pedantic and take your recipe at face value.
If ALL of the radiation is 15 micron wavelength then NONE of the radiation is at ANY other frequency or wavelength.
Ergo, the sun must be a perfectly coherent source of 15 micron wavelength EM radiation, which is 20 THz in frequency, so it is a laser, or a maser, whichever you prefer.
A laser behaves as if it is a near point source with a diameter of the order of one wavelength. Well, it has a Gaussian beam profile, with a 1/e waist diameter (or radius) in that range. My short term memory can’t recall the exact waist diameter.
But the point is that the 860,000 mile diameter sun would appear to be a much more distant, roughly 15 micron diameter source.
Well the real sun is not a coherent source, it is quite incoherent, and its apparent angular diameter as seen from the mean earth sun distance is about 30 arc minutes.
The optical sine theorem, which is based on the second law of thermodynamics, says that it is possible to focus the sun’s image down to a size which can reach a Temperature equal to the roughly 6,000 K sun surface Temperature (in air).
The limiting concentration is 1/sin^2 (0.5 deg.) = 13,132 times (area-wise).
Prof Roland Winston at UC Merced (actually in Atwater) has in fact focused the sun’s image to an areal density of over 50,000 suns; about 4 times the above limit.
Well I left out a little factor in the above expression.
The concentration limit is n^2 / sin^2 (0.5 deg.) where n is the refractive index of the medium in which the image is formed.
Winston made a solid CPC (Compound Parabolic Concentrator) out of a YAG crystal, which has a refractive index of about 2.78, so he got a factor of about 7.7 out of that, and with losses he ended up at, I believe, 56,000 suns; the highest ever achieved. But you can’t ever get all of that energy out of the medium into air. Most of it will be trapped and vaporize your crystal. Dunno how Roland kept his YAG from melting with the real sun, but he would get one hell of a full moon spot (it is NOT an image).
Winston is one of the original and most regarded gurus of Non Imaging Optics, and one of the inventors of the CPC.
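Those concentration numbers are easy to reproduce from the stated n^2 / sin^2(theta) limit; a minimal Python sketch of the arithmetic only, taking the 0.5 degree angle and the 2.78 index exactly as quoted above (not Winston’s actual design):

import math

theta_deg = 0.5                  # angular figure used above
n_air, n_yag = 1.0, 2.78         # refractive indices as quoted above

def concentration_limit(n, theta_deg):
    # Areal concentration limit n^2 / sin^2(theta).
    return n**2 / math.sin(math.radians(theta_deg))**2

print(f"limit in air: {concentration_limit(n_air, theta_deg):,.0f} suns")   # ~13,131
print(f"limit in YAG: {concentration_limit(n_yag, theta_deg):,.0f} suns")   # ~101,500 before losses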
But in principle, your laser sun can be focused down to a Gaussian spot about 15 microns in diameter, in which case it would be at a much higher Temperature than 6,000 K.
But then the sun is not a laser or a coherent source, so nyet on doing that experiment.
g
Thanks Micro6500.
If you are right then CO2islife is correct in stating that 15 micron back radiation could only warm the earth to 192 K, or about -81 °C. This would appear to be an important conclusion that basically removes CO2 as something to worry about!
Thanks George also for your reply.
Though I did not mean for you to launch into an explanation about focusing lasers! Perhaps I should have re-phrased my question to avoid any reference to the sun and asked what the temperature of the earth would be if it were bathed only in 15 micron IR waves. Forget the sun. If Micro6500 is correct then CO2islife is correct and CO2 does not cause global warming. I wonder if you agree with this logic and this conclusion?
I fear this thread has aged out now but I think it is a topic that should get more discussion.
If the Sun had a peak at 15 µ (it doesn’t; it is at some 6,000 degrees), then it would depend on what captures the 15 µ radiation and whether it is thermalized. If it is, the result depends on the flux in joules, which could warm the absorber to more than 192 K.
So it’s more a case of “it depends,” just as you can heat something with the focused Sun to far more than 6,000 degrees.
The sun clearly emits at all the wavelengths in the electromagnetic spectrum that we know about. The key issue is that CO2 absorbs IR at 13-15 microns and not (significantly) at any other wavelengths. Those wavelengths equate to emissions from a black body at a temperature ranging from -50 °C to -80 °C. Agreed, even at those low temperatures the black bodies emit at other wavelengths as well, but that doesn’t make any difference, as CO2 is only absorbing in the range of 13-15 microns. A body at a given temperature cannot raise the temperature of another body above its own through its emissions. It doesn’t matter if the emitting body is a million times bigger than the receiving body; it still cannot raise the temperature of the receiving body above its own temperature. It would therefore seem that the effect of CO2 absorbing at 13-15 microns cannot be more than raising the earth’s temperature to somewhere between -50 °C and -80 °C. In other words, CO2 is not a problem. Though not a scientist, I have raised this point several times and have so far not got an explanation of why it is not true.
I think it’s because there is a “depends”: if you’re in space and have two identical black bodies, one at 193 K, it will not warm the other above 193 K.
But photons carry energy, and when you stream a high-power flux, if the flux is absorbed it can make something far hotter than the temp of the flux. Think microwaves and lasers; even the 6,000 degree Sun can be focused to many times that.
And then there’s the ability to reduce the cooling rate: while the surface of the earth cools to space, anything that sends some of that energy back, cold or not, slows that cooling rate.
Bernard,
Courtroom lawyers are very careful to ask exactly the question they want answered; and their other rule is to never ask a question they don’t already know the answer to.
Same sort of thing works in science.
If you don’t ask specifically the question you want an answer to then you aren’t likely to get the answer you expected.
I’d like a dollar for every time somebody told me; “that wasn’t the question I asked.”
Well to the best of my ability I try to make sure that it certainly is the answer to the question they asked.
So you did ask what would be the temperature if the sun emitted only 15 micron radiation.
I gave you an answer to that based on the fact that there were no other conditions.
So now you posed a different question about the earth, sans sun, being bathed only in 15 micron radiation.
That is not sufficient information to give any meaningful answer other than the earth would be bathed only in 15 micron radiation.
What else might happen would be pure speculation.
A couple of clarifications to my earlier comment: the peak is on a wavelength scale graph, at about 20 times the solar spectrum peak wavelength (Wien’s Displacement Law), and the coherent source would be roughly 15 microns in diameter, not one micron.
The essential calculations for arbitrary spectra are given in the downloadable Array Programming Language K at http://cosy.com/Science/HeartlandBasicBasics.html .
Ban p-values? “You have to be joking,” as John McEnroe once said. This would be the equivalent of banning information about the probability of a false positive from a clinical (diagnostic) test. Possibly the biggest problem is bad tests. In some cases it seems rather like researchers testing for lung cancer with a test which happens to be proficient at detecting herpes (and other irrelevant diseases): in more formal language, tests which are actually rejecting ancillary hypotheses not of direct relevance to the claimed results.
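That clinical-test analogy is worth making concrete. A minimal Python sketch, with invented sensitivity, specificity and prevalence purely to illustrate the false-positive information that a bare p-value never conveys:

# Hypothetical numbers, chosen only to illustrate the point.
sensitivity = 0.90   # P(test positive | disease)
specificity = 0.95   # P(test negative | no disease)
prevalence  = 0.01   # P(disease) in the screened population

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive   # P(disease | test positive)
print(f"P(positive) = {p_positive:.4f}, P(disease | positive) = {ppv:.3f}")

# With these numbers roughly 85% of positives are false positives (PPV ~ 0.15),
# even though the test looks impressive in isolation.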
This article made me remember a question I’ve had for a while and it has to do with (purported) rising ocean levels. I go at this as an educated layman, not a professional in any related field.
I have read that one prediction of “global warming”, whether anthropogenic or not, is that ocean levels will rise. In some science articles I’ve seen, the topic material seemed to assume that “rising ocean levels” was a verified phenomenon. Yet in other articles, more skeptical of AGW, the exact opposite seems to be assumed.
First, even if there has been a detectable increase in ocean levels, I find it difficult to understand how that fact can somehow make AGW “more true” than otherwise. There seem to be other “predictions” of AGW that have not panned out beyond some reasonable MOE.
To cut to the chase: are there trustworthy measurements indicating that ocean levels have indeed risen over time, and by more than one might attribute to non-human causes?
Even if the answer is “yes” it still does not rescue AGW theory from other serious failings but perhaps others have noticed this one “prediction” turns up time and again in science articles related to AGW.
Sea level is rising. The argument is over whether or not there is a rate of change of sea level outside natural parameters at work, presumably from an anthropogenic effect.
========
During my time I have seen a lot of researchers switch from measuring tolerances and computing the error range of the final number from those measured tolerances, to statistical analysis. While statistical analysis does offer the ability to come up with a final number with a smaller uncertainty range (a 90% confidence interval) than the error range, any mistake in the statistical analysis will often put the final number outside the error range and thus render it totally meaningless.
Statistical analysis is a great tool, but it is a double-edged sword capable of shredding our work with ease. I’m afraid we have gotten lazy over the past few decades.
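To illustrate the difference being described (a minimal sketch with invented tolerances, not anyone’s actual analysis): worst-case error propagation keeps every measurement inside its stated tolerance, while the statistical interval is tighter only because it assumes the individual errors are independent; break that assumption and the “better” number can land outside the honest error range.

import math

# Hypothetical result: the sum of 10 measurements, each quoted as +/- 0.5.
n, tol = 10, 0.5

worst_case = n * tol                          # guaranteed bound: +/- 5.0
sigma_each = tol / math.sqrt(3)               # sd of a uniform error on [-tol, tol]
ci_90 = 1.645 * sigma_each * math.sqrt(n)     # 90% interval if errors are independent

print(f"worst-case error range:   +/- {worst_case:.2f}")
print(f"90% statistical interval: +/- {ci_90:.2f}")

# The statistical interval (~1.5) is far tighter than the error range (5.0),
# but a shared bias of just 0.3 per measurement adds 3.0 to the total and
# blows straight through it.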
All statistical tests contain mathematical assumptions about the data being tested, and the nature of the underlying ‘true’ population from which the data sample is pulled. If any of those underlying assumptions are false, then that particular test is invalid. For example, it was assumed that commodity and stock price movements are Gaussian, so normally distributed. Benoit Mandelbrot showed they behave fractally, so are fat tailed, so not normally distributed. In 1972 I programmed the Kolmogorov-Smirnov test proving this for the entire NYSE over an entire decade of daily price movements, for John Lindner’s paper showing one therefore needed to do all commodity and stock price statistics using a lognormal population assumption.
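A minimal modern sketch of that kind of check, using scipy and synthetic fat-tailed data in place of the NYSE series (which obviously isn’t to hand here): fit a normal distribution to the sample and ask the Kolmogorov-Smirnov test whether the Gaussian assumption is tenable.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "daily returns" with fat tails (Student's t, df = 3) standing in
# for real price movements; real data would be loaded here instead.
returns = stats.t.rvs(df=3, size=2500, random_state=rng) * 0.01

# KS test against a normal distribution fitted to the same sample
# (strictly, estimating mu and sigma from the sample makes the nominal
# p-value approximate, but it serves for a quick sanity check).
mu, sigma = returns.mean(), returns.std(ddof=1)
ks_stat, p_value = stats.kstest(returns, "norm", args=(mu, sigma))

print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.2e}")
# A tiny p-value says the Gaussian assumption is untenable, and every test
# downstream that relied on normality inherits that error.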
And this is precisely the problem with statistical hypothesis testing: It’s comparing a proposed theory against the gubbins of an unknown and undescribed reality. There is no valid manner in which to choose the ‘actual’ probability distribution of reality until you’ve collected enough data. But if you’ve collected enough data, you don’t need null hypothesis testing. The actual data either invalidates the proposed theory or it does not.
Which is precisely backwards from, say, quality assurance in manufacturing, where all the gubbins of reality are known, described, and laid out on the shop floor. The issue there is not the gubbins, but whether it’s doing what you designed it to do on the fringe ends of things.
The problem isn’t the use of p values. It’s that science and math are too hard for most scientists and mathematicians. If that were not so, it would be perfectly clear to them just what inferences can and cannot justifiably be drawn from a given p value in a given case.
We do seem to be in a bad mood today, don’t we?
Speak for yourself.
Most scientists or mathematicians do not find those subjects to be too hard. That’s how they were able to get degrees in those disciplines.
Now if the examiners (lazy SOBs) use multiple choice tests and exams; then no wonder they graduate students who aren’t qualified.
California (and other state) driver’s license written tests are the only multiple-choice exams I ever took.
You can’t test somebody’s knowledge by giving them the answer and asking them if it is correct.
Was it not Sir Ernest Rutherford who said, “If your experiment requires statistics, you need to design a better experiment”?
Of course this over-reliance on “modified statistical methods” is the heart of the doomsayers’ creed.
Coulda, woulda, shoulda; so give me all your money, enslave your children unto me.
Nice try by the author, Siegfried, but who is he addressing with the denier schtick?
Is that for the consensus crew or just a demonstration of his own bias?
However, given the history of the Team IPCC™, this use of statistics to give false credibility to meaningless numbers is deliberate.
Science was nothing but a useful garb to cloak their mendacity in.
john robertson: Was it not Sir Ernest Rutherford who said, “If your experiment requires statistics, you need to design a better experiment”?
He did, but it is an incomplete thought: he did not say how anyone was supposed to have known how to do a better experiment. A fuller idea is to use the statistical analysis from the last experiment to do a better job designing the next experiment.
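One concrete, purely illustrative version of that advice is a power calculation: take the effect size estimated in the last experiment and work out how large the next experiment needs to be, so the statistics are no longer doing the heavy lifting. A minimal sketch with invented numbers and a normal approximation:

from scipy.stats import norm

# Suppose the last experiment estimated a standardized effect size (Cohen's d) of 0.3.
d, alpha, power = 0.3, 0.05, 0.9

# Sample size per group for a two-sided, two-sample comparison (normal approximation).
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2

print(f"about {n_per_group:.0f} subjects per group")   # ~234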
No, it was Lord Rutherford, not Sir Rutherford, and he said, “If you have to use statistics, you should have done a better experiment.”