Quote of the Week: Statistics and peer review

WUWT reader J B Williamson writes:

I came across this obituary today which reminds us that it's not just the climate universe that has problems.

Douglas Altman, statistician – obituary
Douglas Altman, who has died aged 69, waged a long-running campaign to improve the use of statistics in medical research.

A professor of statistics in medicine at the University of Oxford, in 1998 Altman described the problem as follows:

“The majority of statistical analyses are performed by people with an inadequate understanding of statistical methods. They are then peer reviewed by people who are generally no more knowledgeable. Sadly, much research may benefit researchers rather more than patients, especially when it is carried out primarily as a ridiculous career necessity.”

Signing in is required to read the rest.




System Physio-Chemistry is the Achilles' heel of climate science. Statistics is just the icing on a poisoned cake.

John Harmsworth

I would suggest that statistics is the cloak that hides the crime. A nod to Mikey Mann and the Hockey Team.


Wasn't it Ernest Rutherford who said, "If your experiment needs statistics, you should have designed a better experiment"?

Clyde Spencer

Fellow Readers,
I think that the links provided above by Chaamjamal are well worth reading — especially the first link. There is considerable food for thought and I think that the alarmists should respond to the charges.

Lance Wallace

Jamal actually has approximately 50 well-reasoned and convincing statistical arguments regarding many of the central dogmas of the alarmists, such as the relation between CO2 and global temperature (there is none once both have been detrended);
no relation between sea level rise and any human influence;
no relation between ocean acidification and human activities;
and many more.


Walter Chips

Interesting. I just read this report over at GWPF, https://www.thegwpf.com/the-sun-and-volcanoes-cause-the-pause/
The one thing that struck me about this report was the use of 'margin of error'! How often do we see figures presented as absolute facts? Especially by the warmanistas? I mean, when they say that this was the warmest xyz eva, by 0.1 C, they speak as if it were absolute! But if it was reported as 0.1 +/- 0.17, then I guess it would seem just a bit meaningless?
I would love to see the use of margin of error used a lot more in general reporting!


OK, so Professor Altman was 69 +/- 5%…..

Ben of Houston

Unlikely. These things are well documented. The worst would be if someone was just going by his birth year and not his birthdate, which would give you six months both ways, i.e. 69 +/- 0.72%.

Sam C Cogar

But, but, but, …… what about his “conception” [impregnation] date?

Shouldn’t one be adding 5 months +/- 4 months to his birth date ….. but not his birth year. (Adding months as years really screws things up)


If the new temperature is not higher than the previous highest by more than the margin of error, then it is simply not higher and it should not be reported as such.


Nice point.
NASA have recently been highlighting a rise in global temperatures of 1.1 degrees Celsius over the last century or so. This is of course a temperature anomaly, but it ignores the significant margin of error of plus or minus ~0.2 degrees C.
Also apparently overlooked (so far) is a drop of 0.56 C between January 2016 and January 2018.


Here’s a link to a study that beautifully demonstrates the misuse of statistics.

A real study was conducted. The data was not faked.

Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.

They managed to get the study published. Because it presented something people wanted to hear anyway, it was all over the internet almost instantly.
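The "dirty little secret" quoted above is the multiple-comparisons problem. A minimal sketch of it, treating the 18 measurements as independent tests at the usual 0.05 threshold (a simplification — the real study's measurements were correlated, but the arithmetic makes the point):

```python
import random

random.seed(42)

# Analytic: with 18 independent tests at alpha = 0.05, the chance of at
# least one false positive is 1 - 0.95**18.
p_any = 1 - 0.95 ** 18
print(f"P(at least one false positive) = {p_any:.3f}")  # 0.603

# Monte Carlo check: under a true null hypothesis, p-values are uniformly
# distributed, so draw 18 of them and see how often one dips below 0.05.
trials = 20_000
hits = sum(
    1 for _ in range(trials)
    if any(random.random() < 0.05 for _ in range(18))
)
print(f"Monte Carlo estimate           = {hits / trials:.3f}")
```

A 60% chance of a "significant" finding from pure noise is indeed a recipe for false positives.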

One big complaint is that non statisticians misuse p-values.

A common misconception among nonstatisticians is that p-values can tell you the probability that a result occurred by chance. This interpretation is dead wrong, but you see it again and again and again and again. link

In climate science the outstanding example is Dr. Mann’s hockey stick. link His statistical methods produce a hockey stick even on a random dataset.

Jeff Alberts

“In climate science the outstanding example is Dr. Mann’s hockey stick. link His statistical methods produce a hockey stick even on a random dataset.”

That’s overstating it. It was a specific type of random dataset, red noise.


If you have true random numbers you get white noise. White noise does not exist in nature because it has infinite bandwidth and therefore infinite energy.

White noise can be removed from a signal by averaging. That’s why everyone assumes that you can remove any kind of noise by averaging.

Red noise, or something like it, is what you find in nature. Typically the running mean of the noise spends long periods above or below zero, so it is not removed by naive averaging.

The fact that M&M discovered that red noise produces a hockey stick using Mann’s analysis means that naturally occurring datasets would have a good chance of doing the same.
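The averaging claim is easy to see with an AR(1) process, a common stand-in for red noise (this sketch illustrates only the persistence point, not M&M's actual replication of Mann's procedure):

```python
import random
import statistics

random.seed(1)

def sample_mean(phi: float, n: int = 500) -> float:
    """Mean of one AR(1) series x[t] = phi*x[t-1] + e[t]; red noise when phi > 0."""
    x, total = 0.0, 0.0
    for _ in range(n):
        x = phi * x + random.gauss(0, 1)
        total += x
    return total / n

# How much does the sample mean wander from one realisation to the next?
white = [sample_mean(phi=0.0) for _ in range(400)]  # independent noise
red   = [sample_mean(phi=0.9) for _ in range(400)]  # persistent noise

print(statistics.variance(white))  # close to 1/500 = 0.002
print(statistics.variance(red))    # roughly (1+phi)/(1-phi) ~ 19x larger
```

For white noise the mean of 500 points is pinned near zero; for red noise it still wanders widely, which is why long runs above or below zero survive naive averaging.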

Alan D McIntire


That ‘jelly bean’ problem could be significantly reduced if EVERY report required a separate, independent study to back it up before being published.


But XKCD is a warmist even though 882 disproves 1732! https://xkcd.com/1732/

Alan D McIntire

I realize he’s a warmist, but as you say, his 1732 goes against 882 logic and good sense.

John Harmsworth

Mann's mathematical baloney was applied AFTER he decided that a ridiculously small subset of trees could provide any kind of conclusive information about past "climate". Is there a human on Earth, even amongst his associates, who doesn't realize the whole thing was a calculated deception?


“One big complaint is that non statisticians misuse p-values.”

Popularly known as "p-chasing".

It should be clear that correlation is necessary but not sufficient to prove causation. Correlation is an exploratory technique that can be used as a basis for searching for a causation mechanism.


It is interesting that institutions like the American Statistical Association have made so little headway with some areas of science in terms of the misuse of statistics. Climate science and epidemiology are two of the worst offenders. The ASA’s statement on statistical significance and p values is here:


and here:


We continue to see climate scientists, and their cheerleaders on here, claiming that p values of 0.05 are some sort of gold standard and "prove" all sorts of things, but according to the statisticians, that is simply wrong.

dodgy geezer

…I came across this obituary today which reminds us that it's not just the climate universe that has problems….

I don’t like being directed to a site which needs me to register before reading the critical message. And I suspect that the message is much better presented by accessing three items of open information, which I will proceed to offer:


Others may add their favourite sites as they wish…

Mike S

I wish I still had the statistics textbook I taught college-level statistics from in the mid-80s. It had a full chapter called “How To Lie With Statistics” (it would have been more accurate, but not as catchy, to call it “How people will try to lie to you using statistics”). It covered many of these issues, plus others. I always devoted one full class session to that chapter.

Back in the Dark Ages of the 1970s, when I was studying statistics, I was introduced to a book titled: “How to Lie with Statistics”, by Darrell Huff. See here: https://archive.org/details/HowToLieWithStatistics. I now recommend this one, as well: https://www.amazon.com/product-reviews/3319397559.


"How to Lie With Statistics" is, itself, a small book today. I got my copy in print form from Amazon; it may be available from other online bookstores. I've not seen it on shelves, though.

I agree.

I read the original obituary on Monday – I do read the [London] Daily Telegraph in hard copy. I am a dinosaur, too, I suspect. And it is a very interesting one.



Exactly, this reminds me of the '97% of climate scientists agree' argument. It was actually 97% of climate scientists WHO RESPONDED to the questionnaire, not 97% of all climate scientists.

If they want to say ‘97% of climate scientists agree’ , then they must include a range of error. Say, plus or minus 5 percentage points.

I’d love to hear them explain ‘ between 92% and 102% of climate scientists agree’. It would be worth a laugh.
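The joke about "102%" is real arithmetic: the naive (Wald) formula for a proportion's confidence interval really does spill past 100% when the estimate sits near the boundary, which is one reason statisticians prefer the Wilson score interval there. A sketch, using 75 of 77 respondents (the figure usually attributed to the Doran & Zimmerman survey — treat it as an assumption here):

```python
import math

def wald_interval(k: int, n: int, z: float = 1.96):
    """Naive 95% interval: p_hat +/- z * sqrt(p_hat(1-p_hat)/n)."""
    p = k / n
    h = z * math.sqrt(p * (1 - p) / n)
    return p - h, p + h

def wilson_interval(k: int, n: int, z: float = 1.96):
    """Wilson score interval: respects the 0..1 boundary."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    h = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - h, centre + h

lo, hi = wald_interval(75, 77)
print(f"Wald:   {lo:.1%} to {hi:.1%}")   # upper bound exceeds 100%
lo, hi = wilson_interval(75, 77)
print(f"Wilson: {lo:.1%} to {hi:.1%}")   # stays below 100%
```

Either way, "97%" deserves to be reported with its roughly +/- 4 point uncertainty, not as an absolute.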

Alan Rakes

Another area that I get sick and tired of is seeing error bars without the confidence level being described. It is easy to say +/- 5 points without saying that even that narrow a range was only a 50% confidence interval. The error bars are meaningless unless I know the statistical confidence associated with them.


Even worse, it was 97% of those who bothered to respond and who weren’t filtered out by the authors of the poll.


Even worse, parroting MarkW below, it was only 73 of the 75 government-employed "scientists" who were left AFTER those who did not return the questionnaire were filtered out, those who were low-ranked had been filtered out (judging only by the number of government-funded papers each had published), and all the other replies to all the other questions in the survey had been filtered out ….

Glad to see the clarification of Klem’s oversimplification, to understand how REALLY bad that 97% pseudo-statistic is.

As I recall, it went something like this: a bunch of questionnaires resulted in a fairly low response rate, and then, of those responses, successive filters were applied to rule out all except those who were considered "officially" the most qualified climate scientists, and then more filters were applied to skew the interpretation of answers, and then, like mathemagic, the 97% rabbit appeared out of the hat.

That’s just my plain-language, vague memory of things (maybe not quite precise enough).


62.19% of all statistics are made up anyway.

I have read, in at least 87% of the publications* discussing this, that the figure is >76% of all statistics are made up.

* Mad Magazine, Nature, Playboy, Railway Modeller, Take That Fan Club International, and others.


Three statisticians went out hunting, and came across a large deer. The first statistician fired, but missed, by a meter to the left. The second statistician fired, but also missed, by a meter to the right. The third statistician didn’t fire, but shouted in triumph, “On the average we got it!”

David L. Hagen

Dr. John Ioannidis has shaken up medical research with his 2005 earthshaking article:
John P. A. Ioannidis, Why Most Published Research Findings Are False
Published: August 30, 2005 https://doi.org/10.1371/journal.pmed.0020124
Cited > 5000 times and viewed >2.5 million times.
Ioannidis has been cited >168,000 times.

He co-authored highly cited reports on what is needed
Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement
D Moher, A Liberati, J Tetzlaff, DG Altman… – PLoS medicine, 2009
Cited by 32472

Ioannidis's work has had far greater impact.


This runs much deeper than most are willing to take on. Niels Bohr's declaration at Solvay in 1927 that probability – statistics – is a fundamental law of nature had only a single objector – Albert Einstein, who never surrendered. Since then there has been a stunning reliance on statistics, at the cost of reason, even after Max Planck showed that radiation statistics could not possibly explain BBR without a new hypothesis.
And medicine is light years beyond a BBR problem – they try to use Bayesian stats apps to replace an MD! Stats could quickly become personal, a life-and-death matter.
In physics the problem is especially vexing – unbelievably accurate predictions – just leave reason at the door. The end result of this is Hawking's black-hole 180-degree about-turns over decades. QM is a trainwreck because of statistics. There are flickers of hope, though….
There are great engineering uses for stats, no doubt. Climate, medicine and QM are about machines not made by any engineer, even if Monsanto wants to patent living machines. These natural machines are always ahead of us, unlike diesels – they were just one trick ahead.
Could it be that the hubris of treating climate as a fixed, known machine we could, in theory, build is the problem?


Steve Milloy, junkscience.com, has been crusading on this for over a generation.

steve case

Averaging up winter/summer, day/night, arctic/tropic temperatures and coming up with a single value is so much carefully calculated nonsense.

John Harmsworth

You forgot differing altitudes and changes in humidity. But I only add to your correct conclusion. The whole field is founded on LESS than observation.

steve case

Not to mention desert/jungle land/sea.

And skilled and unskilled observers [all doing their best], and local topography, including Urban Jungle proximity.


Clyde Spencer

Yes, at the very least, there should be a different average and SD for every climate zone.

Walter Chips

And don't forget that to get to this single figure, assumptions are made, as the whole globe is not covered by thermometers! So, I guess the accuracy would be? Not very good!

Ken Irwin

“Although this may seem a paradox, all exact science is dominated by the idea of approximation. When a man tells you that he knows the exact truth about anything, you are safe in inferring that he is an inexact man. Every careful measurement in science is always given with the probable error … every observer admits that he is likely wrong, and knows about how much wrong he is likely to be.” – Bertrand Russell

I think the climate alarmists need to ‘fess up to just how wrong they might be.


The alarmists are not being scientific, and neither was Lord Bertrand Russell – he claimed, just one year before Max Planck's huge breakthrough, that science had left only the pursuit of more decimals. The truth of the quantum is nowhere to be found in the statistics. The error of the UV catastrophe Planck solved was extremely embarrassing, and unbelievably it was pushed under the carpet. Physics is still reeling from Planck's hypothesis, a truth beyond any measurement. Of course we can measure the h constant to high accuracy once we know it truly exists.
The UV catastrophe makes the hockey stick a joke – there, no physical hypothesis is at work, just number fudging, a parody of all science.

Crispin in Waterloo but really in Ulaanbaatar

Today I was measuring the draft in a chimney attached to a high performance coal stove. The calculated (expected) draft was 17 Pascals with an average temperature of 161 degrees C. The only instrument available was a gas analyser with a resolution of 10 Pascals. During the period of measurement, the draft was indicated as 10 and later 20 Pascals. That is all the information I could get, besides a reading at 101 C when the draft instrument failed to show anything at all.

What did I prove? I conclude the measurements generally support the calculations, but that I need an instrument with ten times the resolution to know for sure.

Climate science treats temperatures in a way that says if I measured 10 or 20, plus or minus 5, thousands of times, I would know that it was really 17.108, confirming my 'model'. All I have are enticing indications that I am on the right track, within bounds, possibly correct, and nothing more.

In the real world we are not allowed to make stupid assertions like 17.108. We have to get better data and report uncertainties and hope they support the theory being applied to a practical problem.
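Crispin's resolution point can be sketched numerically. With a 10 Pa instrument and no independent noise between readings, every reading quantises the same way and averaging thousands of them recovers nothing; only when readings carry noise comparable to the step (dither) does the average converge on the true value. The 17 Pa true draft and 10 Pa resolution below come from the comment; the noise level is a hypothetical:

```python
import random

random.seed(7)

TRUE_DRAFT = 17.0   # Pa, the calculated value from the comment
STEP = 10.0         # Pa, the instrument's resolution

def reading(noise_sd: float) -> float:
    """One reading: true value plus sensor noise, quantised to the nearest step."""
    raw = TRUE_DRAFT + random.gauss(0, noise_sd)
    return round(raw / STEP) * STEP

# Without noise, every reading is identical: averaging recovers nothing.
noiseless = [reading(0.0) for _ in range(100_000)]
print(sum(noiseless) / len(noiseless))  # 20.0, no matter how many readings

# With noise comparable to the step, the mean does creep toward 17.
dithered = [reading(noise_sd=8.0) for _ in range(100_000)]
print(sum(dithered) / len(dithered))
```

So averaging beats resolution only under assumptions (independent, well-behaved noise between readings) that must be demonstrated, not presumed — which is Crispin's complaint about 17.108.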


Indeed. The often repeated claim that uncertainty decreases in proportion to the square root of the number of measurements is only true for independent, identically distributed measurements. Climate data usually fail the "independent" criterion, since they are almost always autocorrelated.
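A common first-order correction: for an AR(1)-like series with lag-1 autocorrelation r, the effective number of independent points is roughly N(1-r)/(1+r), and it is that number, not N, that belongs under the square root. A sketch on a toy persistent series:

```python
import math
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den

def effective_n(x):
    """AR(1)-style effective sample size: N * (1-r) / (1+r)."""
    r = lag1_autocorr(x)
    return len(x) * (1 - r) / (1 + r)

# A strongly persistent toy series (AR(1) with phi = 0.9).
random.seed(3)
state, series = 0.0, []
for _ in range(2000):
    state = 0.9 * state + random.gauss(0, 1)
    series.append(state)

n_eff = effective_n(series)
print(len(series), round(n_eff))  # far fewer independent points than 2000
```

For this series the 2000 raw points behave like only on the order of a hundred independent ones, so the naive 1/sqrt(2000) uncertainty claim is several times too optimistic.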


Since I have worked a fair bit on statistical analysis of data it has become a sort of hobby of mine since I retired to read each new climate catastrophe paper and look for elementary statistical mistakes. I almost always find at least one and often several.

A few things I look for:

Are normally distributed data assumed without verifying whether this is true, and if it is not verified, do they use methods that only apply to normal distributions (e.g. equating 2 sigma with a 95% confidence level)?

Do they correct for autocorrelation (climate data are almost always autocorrelated).

If using smoothed data do they correct for decreased degrees of freedom?

If using smoothed data does the smoothed curve extend up to the present (impossible to do properly unless trailing averages are used)

If OLS is used is there any uncertainty in the y-axis values? If so is this corrected for?

If using Bayesian statistics, is the prior probability reported. If not, throw the paper away. If it is reported: is it reasonable, and is it correctly described (for example there is no such thing as a “vague prior”, a term I have seen used repeatedly).

These are all rather elementary matters, but all too often it isn’t necessary to go to any further to find errors.
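The first item on the checklist, equating 2 sigma with 95%, is easy to check by simulation: the 95% figure is a property of the normal distribution, not of standard deviations in general. A sketch comparing normal draws with uniform draws, where mean +/- 2 sigma covers essentially everything:

```python
import random
import statistics

random.seed(11)

def coverage_2sigma(sample):
    """Fraction of points within mean +/- 2 standard deviations."""
    m = statistics.fmean(sample)
    s = statistics.pstdev(sample)
    return sum(1 for v in sample if abs(v - m) <= 2 * s) / len(sample)

n = 100_000
normal  = [random.gauss(0, 1) for _ in range(n)]
uniform = [random.random() for _ in range(n)]

print(coverage_2sigma(normal))   # about 0.954, the textbook figure
print(coverage_2sigma(uniform))  # ~1.0: +/- 2 sigma spans the whole [0, 1] range
```

For skewed or bounded data the 2-sigma interval can cover far more, or far less, than 95% — so quoting "2 sigma = 95% confidence" without checking the distribution is exactly the elementary mistake described above.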


Occasionally, I pop up to remind everyone of a simple fact and this is the perfect thread to do it again:
Climate is not a real thing. Climate is an *average* of the weather for a defined area over a period of time. And the word average means that it is all statistics. And Or, as Wikipedia puts it, “Climate is the statistics of weather over long periods of time.”

The public understands that there are lies, damned lies and statistics. If you want the public to understand the scientific problems with "Climate Change" and "Climate Science", stop using those alarmist terms and start using its real name: "Weather Statistics".

It is notable that a lot of uninformed opinion regarding climate science, and plain wrong opinion on the supposed renewable cures, comes from medical doctors. I suspect this is because they have never understood hard physical science and think success rates of 70% or so prove their work. Climate science is an unprovable soft science, an ideal area for charlatans like Mann et al to work in, because bad theories can never be proven wrong, or right, and the promised effects come after everyone is dead, the priests having lived off their fake predictions and promises of salvation by sacrifice for their whole lives. It is the same faith-based parasitism that religion offers its believers: cash for salvation. Real science is not like climate science, and can never be – "they haven't proved any laws, well not yet anyway" – Feynman on pseudo-scientists.