Lovejoy's 99% 'confidence' vs. measurement uncertainty

By Christopher Monckton of Brenchley

It is time to be angry at the gruesome failure of peer review that allows the publication of papers such as the recent effusion of Professor Lovejoy of McGill University, whose gushing, widely-circulated press release – of the kind that seems to accompany every mephitically ectoplasmic emanation from the Forces of Darkness these days – billed it thus:

“Statistical analysis rules out natural-warming hypothesis with more than 99 percent certainty.”

One thing anyone who studies any kind of physics knows is that claiming results at 99% confidence (about 2.6 standard deviations; three standard deviations would correspond to 99.7%) requires – at minimum – that the data underlying the claim be exceptionally precise and trustworthy and, in particular, that the measurement error be minuscule.
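A quick check of that arithmetic, sketched in Python with SciPy (the numbers are standard statistics, not anything taken from the paper):

```python
# Two-tailed z critical values behind common "confidence" claims.
from scipy.stats import norm

for conf in (0.95, 0.99, 0.997):
    z = norm.ppf(1 - (1 - conf) / 2)   # two-tailed critical value
    print(f"{conf:.1%} confidence -> {z:.2f} sigma")
# 95.0% -> 1.96 sigma; 99.0% -> 2.58 sigma; 99.7% -> 2.97 sigma
```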

Here is the Lovejoy paper’s proposition:

“Let us … make the hypothesis that anthropogenic forcings are indeed dominant (skeptics may be assured that this hypothesis will be tested and indeed quantified in the following analysis). If this is true, then it is plausible that they do not significantly affect the type or amplitude of the natural variability, so that a simple model may suffice:

ΔTglobe(t) = ΔTanth(t) + ΔTnat(t) + Δε(t)     (1)

ΔTglobe(t) is the measured mean global temperature anomaly, ΔTanth(t) is the deterministic anthropogenic contribution, ΔTnat(t) is the (stochastic) natural variability (including the responses to the natural forcings), and Δε(t) is the measurement error. The last can be estimated from the differences between the various observed global series and their means; it is nearly independent of time scale [Lovejoy et al., 2013a] and sufficiently small (≈ ±0.03 K) that we ignore it.”

Just how likely is it that we can measure global mean surface temperature over time, either as an absolute value or as an anomaly, to a precision of ±0.03 Cº – about a thirtieth of a degree? It cannot be done. Yet it was essential to Lovejoy’s fiction to pretend that it could be, for otherwise his laughable attempt to claim 99% certainty for yet another me-too, can-I-have-another-grant-please result based on speculative modeling would have visibly failed at the first fence.
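To see why the method Lovejoy cites – differencing the global series against their common mean – flatters the precision, consider a minimal sketch with synthetic, purely illustrative numbers: any bias shared by all the datasets cancels out of the differences and never shows up in the estimate.

```python
# Why "spread about the common mean" can understate the true error:
# a bias shared by all datasets cancels in the differences.
# All numbers here are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 120                                   # 120 monthly values
truth = np.zeros(n)                       # true anomaly, held flat for clarity
shared_bias = 0.12                        # hypothetical bias common to every series
series = [truth + shared_bias + rng.normal(0, 0.03, n) for _ in range(3)]

mean = np.mean(series, axis=0)
spread = np.sqrt(np.mean([(s - mean) ** 2 for s in series]))      # inter-series estimate
true_err = np.sqrt(np.mean([(s - truth) ** 2 for s in series]))   # error vs. the truth
print(f"spread about common mean: ±{spread:.3f} K")   # ~±0.02 K: looks tiny
print(f"true error:               ±{true_err:.3f} K") # ~±0.12 K: much larger
```

The particular numbers do not matter; the point is that an inter-series spread is blind to whatever errors the series share – precisely the coverage and bias errors discussed below.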

Some of the tamperings that have depressed temperature anomalies in the 1920s and 1930s, making 20th-century warming seem greater than it really was, are a great deal larger than a thirtieth of a Celsius degree.

Fig. 1 shows a notorious instance from New Zealand, courtesy of Bryan Leyland:


Figure 1. Annual New Zealand national mean surface temperature anomalies, 1990-2008, from NIWA, showing a warming rate of 0.3 Cº/century before “adjustment” and 1 Cº/century afterward. This “adjustment” is 23 times the Lovejoy measurement error.

 


Figure 2: Tampering with the U.S. temperature record. The GISS record in its 2008 version (right panel) shows 1934 0.1 Cº lower and 1998 0.3 Cº higher than the same record in its original 1999 version (left panel). This tampering, calculated to increase the apparent warming trend over the 20th century, is more than 13 times the tiny measurement error assumed by Lovejoy. The startling changes to the dataset between the 1999 and 2008 versions, first noticed by Steven Goddard, are clearly seen if the two images are repeatedly shown one after the other as a blink comparator.

Fig. 2 shows the effect of tampering with the temperature record at both ends of the 20th century to sex up the warming rate. The practice is surprisingly widespread: there are similar examples from many records in several countries.
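For the record, here is the arithmetic behind the two multipliers quoted in the captions, taking the figures exactly as stated above:

```python
# Reproducing the "23 times" and "13 times" multipliers from the captions,
# using the figures exactly as quoted in the text above.
lovejoy_error = 0.03                  # Lovejoy's assumed measurement error, K

nz_adjustment = 1.0 - 0.3             # Fig. 1: NZ trend change, Cº/century
print(nz_adjustment / lovejoy_error)  # ~23

giss_shift = 0.1 + 0.3                # Fig. 2: 1934 down 0.1 Cº, 1998 up 0.3 Cº
print(giss_shift / lovejoy_error)     # ~13
```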

But one quantity that is known – because Professor Jones’ HadCRUT4 temperature series explicitly states it – is the magnitude of the combined measurement, coverage, and bias uncertainties in the data.

Measurement uncertainty arises because measurements are taken in different places under various conditions by different methods. Anthony Watts’ exposure of the poor siting of hundreds of U.S. temperature stations showed how severe the problem is, with thermometers on airport taxiways, in car parks, by air-conditioning vents, close to sewage works, and so on.

His campaign was so successful that the US climate community was shamed into shutting down or repositioning several poorly-sited temperature monitoring stations. Moreover, a network of several hundred ideally-sited stations with standardized equipment and reporting procedures, the Climate Reference Network, tends to show less warming than the older US Historical Climatology Network.

That record showed – not greatly to skeptics’ surprise – a rate of warming noticeably slower than the shambolic legacy record. The new record was quietly shunted into a siding, seldom to be heard of again. It pointed to an inconvenient truth: some unknown but significant fraction of 20th-century global warming arose from old-fashioned measurement uncertainty.

Coverage uncertainty arises from the fact that temperature stations are neither evenly distributed in space nor continuous in time. There has been a startling decline in the number of temperature stations reporting to the global network: there were about 6,000 a couple of decades ago, but now there are closer to 1,500.
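A toy calculation shows what that decline does to the precision of a global mean, assuming – purely for illustration – a fixed per-station scatter and statistically independent stations (real stations are spatially correlated, so the actual uncertainties are larger than these idealized values):

```python
# Toy illustration: how the standard error of a global mean scales with
# the number of stations, assuming (unrealistically) independent stations
# with a fixed per-station scatter. Numbers are hypothetical.
import math

sigma_station = 1.0                       # assumed per-station scatter, Cº
for n_stations in (6000, 1500):
    se = sigma_station / math.sqrt(n_stations)
    print(f"{n_stations} stations -> standard error ±{se:.3f} Cº")
# Quartering the station count doubles the standard error of the mean.
```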

Bias uncertainty arises from the fact that, as the improved network demonstrated all too painfully, the old network tends to be closer to human habitation than is ideal.


Figure 3. The monthly HadCRUT4 global temperature anomalies (dark blue) and least-squares trend (thick bright blue line), with the combined measurement, coverage, and bias uncertainties shown. Positive anomalies are green; negative are red.

Fig. 3 shows the HadCRUT4 anomalies since 1880, with the combined uncertainties also shown. At present the combined uncertainties are ±0.15 Cº – almost a sixth of a Celsius degree either way, an interval of 0.3 Cº in total. This value, too, is five times the unrealistically tiny measurement error allowed for in Lovejoy’s equation (1).

The effect of the uncertainties is that for 18 years 2 months the HadCRUT4 global-temperature trend falls entirely within the zone of uncertainty (Fig. 4). Accordingly, we cannot tell even with 95% confidence whether any global warming at all has occurred since January 1996. (A sketch of the test follows Fig. 4.)


Figure 4. The HadCRUT4 monthly global mean surface temperature anomalies and trend, January 1996 to February 2014, with the zone of uncertainty (pale blue). Because the trend-line falls entirely within the zone of uncertainty, we cannot be even 95% confident that any global warming occurred over the entire 218-month period.
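For anyone wishing to replicate the test, here is a minimal sketch of it as described above, using a synthetic stand-in series (the real HadCRUT4 monthly anomalies would be substituted):

```python
# Minimal sketch of the "trend entirely inside the uncertainty zone" test.
# The anomaly series below is a synthetic stand-in, not real HadCRUT4 data.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(218)                              # Jan 1996 - Feb 2014
anoms = 0.0002 * months + rng.normal(0, 0.1, 218)    # weak trend + noise

slope, intercept = np.polyfit(months, anoms, 1)      # least-squares trend
trend = slope * months + intercept

half_width = 0.15                                    # combined uncertainty, ±Cº
# The fitted line stays inside a band of ±0.15 Cº about its own midpoint
# whenever its total rise or fall over the period is less than 0.30 Cº.
total_change = trend[-1] - trend[0]
inside = abs(total_change) / 2 < half_width
print(f"trend change over period: {total_change:+.3f} Cº; "
      f"line entirely inside the ±{half_width} Cº zone: {inside}")
```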

Now, if you and I know all this, do you suppose the peer reviewers did not know it? The measurement error was crucial to the thesis of the Lovejoy paper, yet the reviewers allowed him to get away with saying it was only ±0.03 Cº when the oldest of the global datasets, and the one favored by the IPCC, actually publishes, every month, combined uncertainties five times larger.

Let us be blunt. Not least because of those uncertainties, compounded by data tampering all over the world, it is impossible to determine climate sensitivity from the temperature data either to the claimed precision of 0.01 Cº or to 99% confidence.

For this reason alone, the headline conclusion in the fawning press release about the “99% certainty” that climate sensitivity is similar to the IPCC’s estimate is baseless. The five-fold understatement of the measurement uncertainties is enough on its own to doom the paper. There is a lot else wrong with it, but that is another story.



268 Comments
Ricky Jackson
April 11, 2014 8:48 pm

They say statistics are like bikinis: what they show is interesting, and what they hide is essential. The science of climate change and the statistics it uses are all too often found to be completely naked when an honest, intelligent person reviews the supposed result. Peer reviews need to be that way for us all to be given factual and as-accurate-as-possible information on any topic. Thank goodness for people like Christopher Monckton.

Magma
April 11, 2014 9:16 pm

A short comparison
S. Lovejoy: Physics PhD; climate, meteorological and statistical expert; 500+ publications
C. Monckton: Classics BA; nil; 1 (not peer reviewed)

Mac the Knife
April 11, 2014 9:22 pm

Ole!
We award the Lord Monckton ‘two ears and the tail’ for mercifully slaying the Lovejoy bull****!

Michael D
April 11, 2014 9:22 pm

As soon as I saw that headline this afternoon on WUWT, the “99%” number was immediately and obviously ridiculous. Thank you for starting to peel back the first and most obvious layer of absurdity. As you say, there is a lot else wrong with it.
I’m very disappointed that science has sunk this low.

Tez
April 11, 2014 9:27 pm

Magma, interesting comparison.
And if you cannot find fault with Monckton’s article (which you have not), that speaks volumes about the standard required to achieve a PhD in climate “science” and exposes the “expert’s” grasp of statistics.
Monckton knows climate.

gregole
April 11, 2014 9:31 pm

Excellent post; points well taken.
If indeed Man-Made CO2 is slowing heat energy leaving our atmosphere, it would seem to me, a humble mechanical engineer, incumbent upon us to measure as accurately as we can both the Man-Made added CO2 and the expected delta-T that indicates the additional insulation effect from that added CO2. An aside: shouldn’t we be concentrating on thermometer placement and the worldwide temperature monitoring network, if Man-Made Global Warming is the crisis of our times? But I digress.
Now. We have our measurement of CO2, Man-Made and other. We have our temperature monitoring network. The key point is that both are imperfect; any reasonable person who has ever attempted to measure anything would agree. Considering Man-Made CO2 vs. global temperature… please. I mean really. It is hubris, pure and simple, to claim such precision for quantities so enormous and so variable with locality. (I live in Phoenix, Arizona, where temperatures can vary by at least 5 degrees F from my front door to the park across the street, which is at a slightly lower elevation, and I have personally measured 7 degrees of UHI between urban and rural locations, as I live close to the Amerind reservations.)
These are massive data sets, and I am not convinced we currently have the ability to quantify the uncertainty. That alone is a topic I would be interested in seeing treated: just how to quantify temperature uncertainty. The claim that it cumulatively falls within a 99% range of certainty is unfounded. It is not remotely possible. 99% certainty is utterly ridiculous, a figment of imagination.
How this was published in a peer-reviewed journal simply escapes my comprehension.

Rob Dawg
April 11, 2014 9:35 pm

Thankfully there has been no trend towards urbanization since 1880 to further mask any trend from permanent stations. Astronomers are likewise grateful that their remote locations have remained equally immune from any light pollution consequences.

Alan Robertson
April 11, 2014 9:39 pm

Magma says:
April 11, 2014 at 9:16 pm
_____________
Hello again, Magma. I no longer expect you to ever show up around here and offer anything other than logical fallacies, so do your damnedest. You aren’t fooling anyone.

April 11, 2014 9:41 pm

Magma says:
April 11, 2014 at 9:16 pm
A short comparison
S. Lovejoy: Physics PhD; climate, meteorological and statistical expert; 500+ publications
C. Monckton: Classics BA; nil; 1 (not peer reviewed)
Magma, maybe you should take another look at that YouTube video of Richard Feynman talking about the key to science for a refresher…
“…In that simple statement is the key to science. It does not make any difference how beautiful your guess is, it does not make any difference how smart you are, who made the guess, or what his name is — if it disagrees with experiment, it is wrong.”
Magma, do you disagree with Feynman’s statement on the key to science?

pat
April 11, 2014 9:48 pm

lord monckton –
***no uncertainty here! u need to be “environmentally correct”. LOL.
11 April: Reuters: IMF, World Bank push for price on carbon
The leaders of the International Monetary Fund, World Bank and United Nations on Friday called upon finance ministers to use fiscal policies, such as carbon taxes, to combat climate change.
IMF Managing Director Christine Lagarde and World Bank President Jim Yong Kim were joined by UN Secretary General Ban Ki-Moon at their 2014 spring meetings to address a group of 46 finance ministers and senior officials on policies to reduce greenhouse gas emissions.
Lagarde said their goal was to explain to ministers and officials what fiscal tools they can use that would benefit the environment while stimulating global economies.
She said she would discuss how to shift taxation from the traditional labour and investment base “to a base that is ***environmentally correct,” she told reporters ahead of the meeting.
Lagarde said carbon taxes and removing fossil fuel subsidies are “intelligent” ways to reallocate resources to benefit the environment…
Ban called on finance ministers and private investors to hold a meeting in the coming months that “could pave the way for a common approach” and make low-carbon investments more attractive to institutional investors…
http://uk.reuters.com/article/2014/04/11/climatechange-money-idUKL2N0N31GT20140411

peterg
April 11, 2014 9:50 pm

Because the climate models have failed so dismally in their predictions, and because these models incorporate everything that is currently known and accepted as orthodox in the field of climate science, it is axiomatic that the field itself has failed. They do not have a clue how climate works.
Then along comes this statistical climate modeller, with 500 published scientific papers to his name, proving through statistics alone something important and fundamental about climate.
When I read a scientific paper, I expect it to incorporate a reasonably significant amount of work. This climate scientist has 500. Over a 20-year active period, that would amount to 25 per year. There must be some droll stuff in there.

Tsk Tsk
April 11, 2014 9:51 pm

Magma says:
April 11, 2014 at 9:16 pm
A short comparison
S. Lovejoy: Physics PhD; climate, meteorological and statistical expert; 500+ publications
C. Monckton: Classics BA; nil; 1 (not peer reviewed)
======================================================================
“Also, because the argument from authority is not a logical argument in that it does not argue something’s negation or affirmation constitutes a contradiction, it is fallacious to assert that the conclusion must be true.[3] Such a determinative assertion is a logical non sequitur as the conclusion does not follow unconditionally, in the sense of being logically necessary.”
http://en.wikipedia.org/wiki/Argument_from_authority
Now care to make a real argument?

MaxLD
April 11, 2014 9:52 pm

500 publications! I always find that interesting when someone has so many publications. Think about that for a minute–say you worked at a university for 30 years. 500 publications means over 16 papers a year or close to 1.5 papers a month for 30 years (with no time off). How is that possible when it takes about 2 years to do decent innovative research and publish a paper? Well, the way they do it is a big group publishes a lot of stuff and they put each other’s names on the papers. Lots of times a person never even fully reads the paper that has their name on it. They can’t, they don’t physically have the time.

W Kropla
April 11, 2014 10:01 pm

Line 108 of the preprint at the link reads
Tglobe(t) = Tanth(t) + Tnat(t) + ε(t)
There is no division by \Delta T as in equation (1) of the post. Is this a typo or is the source of equation (1) some other document?

April 11, 2014 10:01 pm

Comparing the National Weather Service’s list from 2002 with the list from April 2012 of record highs and lows for my little spot on the globe shows about 20 of the record highs changed and about 30 of the record lows changed. I don’t mean new ones set, but the numbers changed.
If they aren’t even certain of the record temperatures, how can they be certain of the cause of any temperatures?

David in Cal
April 11, 2014 10:04 pm

You said it, MaxLD! My wife worked as a biostatistician in a medical school for almost 25 years and had something over 100 papers. But, most of these were someone else’s research, where she did the statistical analysis. Even so, she worked very hard indeed to participate on 4 or 5 papers a year.
My working hypothesis is that many of Prof. Lovejoy’s papers were as poor as this one. And, the field of climate “science” has such low standards that this sort of nonsense gets published.

Louis
April 11, 2014 10:04 pm

The global temperature reconstructions that Lovejoy used to go back to the 1500s have larger uncertainties than the modern temperature records. So how can he claim a measurement error of only 0.03 Cº? That’s insane. I’m beginning to think these climate researchers are having a contest to see who can show the greatest willingness to lie for the cause. It’s as if their access to future grant money depends on it.

Mac the Knife
April 11, 2014 10:07 pm

Magma says:
April 11, 2014 at 9:16 pm
magma,
Lovejoy’s data interval was cherry-picked, and his prox-tology statistical analysis is unsupported by the UN-IPCC-accepted HadCRUT4 data and corresponding confidence interval. You could cite 2000 ‘publications’ by Lovejoy… and it would never make his pathetically poor and blatantly biased “Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming” acceptable.
If you find Lord Monckton’s take-down of Lovejoy’s activist paean offensive, consider another scientist’s perspective:
”No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” Albert Einstein
The plus or minus 0.15C confidence interval of the UN-IPCC accepted HadCRUT4 data set proves Lovejoy’s cherry picked, prox-tology analysis and confidence interval of 0.01C is wrong.
Valid data trumps proxy bull**** and statistical deceit, every time.

April 11, 2014 10:09 pm

I’m fairly certain that Dr. Lewangoingsking (or however you spell his name) won’t be investing in this paper.

April 11, 2014 10:13 pm

Magma says:
April 11, 2014 at 9:16 pm

A short comparison
S. Lovejoy: Physics PhD; climate, meteorological and statistical expert; 500+ publications
C. Monckton: Classics BA; nil; 1 (not peer reviewed)

I shudder to think how poor those 500+ publications are. Still, Einstein said it would take only one piece of counter-evidence to destroy his theories. Consider Lovejoy’s theories, publications, etc., destroyed.

Magma
April 11, 2014 10:14 pm

[snip ]

April 11, 2014 10:14 pm

Christopher,
You mention in this article that “That record showed – not greatly to skeptics’ surprise – a rate of warming noticeably slower than the shambolic legacy record.” I presume you are referring to the Climate Reference Network. Can you provide evidence that the trend in CRN stations is lower than the trend in USHCN stations? As far as I can tell, it is not significantly different: http://rankexploits.com/musings/wp-content/uploads/2013/01/Screen-Shot-2013-01-16-at-10.37.51-AM.png

Neil Jordan
April 11, 2014 10:18 pm
Txomin
April 11, 2014 10:20 pm

Thank you, thank you, thank you, Magma. I needed something, anything, to undermine what Monckton has said, but I, too, was unable to do it scientifically.

April 11, 2014 10:22 pm

500 papers! There is a trick: there can be many authors on one paper. Professors can use their assistants and students to do the research and write the papers. The real author is typically a PhD student who has done the work; the professor has, of course, supervised not one student but many.
Professors have connections to other professors, and they co-author each other’s results.
Of course you divide your research into many publications. Each of them contains a tiny part of the results of one research group.
Requiring a large number of papers effectively blocks the entry of outsiders to academia. Without research groups and connections, mass production of papers is not possible.
