By Christopher Monckton of Brenchley
It is time to be angry at the gruesome failure of peer review that allows the publication of papers such as the recent effusion of Professor Lovejoy of McGill University, whose gushing, widely-circulated press release (of the kind that seems to accompany every mephitically ectoplasmic emanation from the Forces of Darkness these days) billed it thus:
“Statistical analysis rules out natural-warming hypothesis with more than 99 percent certainty.”
One thing anyone who studies any kind of physics knows is that claiming a result at 99% confidence (roughly two and a half standard deviations) requires – at minimum – that the data underlying the claim are exceptionally precise and trustworthy and, in particular, that the measurement error is minuscule.
Here is the Lovejoy paper’s proposition:
“Let us … make the hypothesis that anthropogenic forcings are indeed dominant (skeptics may be assured that this hypothesis will be tested and indeed quantified in the following analysis). If this is true, then it is plausible that they do not significantly affect the type or amplitude of the natural variability, so that a simple model may suffice:

ΔTglobe/Δt = ΔTanth/Δt + ΔTnat/Δt + Δε/Δt          (1)

“ΔTglobe/Δt is the measured mean global temperature anomaly, ΔTanth/Δt is the deterministic anthropogenic contribution, ΔTnat/Δt is the (stochastic) natural variability (including the responses to the natural forcings), and Δε/Δt is the measurement error. The last can be estimated from the differences between the various observed global series and their means; it is nearly independent of time scale [Lovejoy et al., 2013a] and sufficiently small (≈ ±0.03 K) that we ignore it.”
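Lovejoy's ±0.03 K figure is described as the spread between the main observed global series and their mean. Purely as an illustrative sketch of that kind of calculation (synthetic series stand in for the real datasets, and the noise levels are arbitrary assumptions), it amounts to something like this:

```python
import numpy as np

rng = np.random.default_rng(0)
months = 12 * 130                       # roughly the length of the instrumental record

# Synthetic stand-ins for the observational datasets (e.g. HadCRUT4, GISTEMP,
# NCDC): one shared underlying signal plus dataset-specific noise.
signal = np.cumsum(rng.normal(0.0, 0.02, months))
datasets = [signal + rng.normal(0.0, 0.05, months) for _ in range(3)]

# Lovejoy-style estimate: the spread of each series about the multi-dataset mean.
mean_series = np.mean(datasets, axis=0)
spread = np.std([d - mean_series for d in datasets])
print(f"estimated 'measurement error' ~ +/-{spread:.3f} K")
```

Whether the mutual spread of datasets that share stations, methods and adjustments can stand for the true measurement uncertainty is, of course, the very point at issue below.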
Just how likely is it that we can measure global mean surface temperature over time either as an absolute value or as an anomaly to a precision of less than 1/30 Cº? It cannot be done. Yet it was essential to Lovejoy’s fiction that he should pretend it could be done, for otherwise his laughable attempt to claim 99% certainty for yet another me-too, can-I-have-another-grant-please result using speculative modeling would have visibly failed at the first fence.
Some of the tamperings that have depressed temperature anomalies in the 1920s and 1930s, so as to make warming over the past century seem worse than it really was, are a great deal larger than a thirtieth of a Celsius degree.
Fig. 1 shows a notorious instance from New Zealand, courtesy of Bryan Leyland:
Figure 1. Annual New Zealand national mean surface temperature anomalies, 1990-2008, from NIWA, showing a warming rate of 0.3 Cº/century before “adjustment” and 1 Cº/century afterward. This “adjustment” is 23 times the Lovejoy measurement error.
Figure 2: Tampering with the U.S. temperature record. The GISS record in its 2008 version (right panel) shows 1934 as 0.1 Cº lower and 1998 as 0.3 Cº higher than the same record in its original 1999 version (left panel). This tampering, calculated to increase the apparent warming trend over the 20th century, is more than 13 times the tiny measurement error mentioned by Lovejoy. The startling changes to the dataset between the 1999 and 2008 versions, first noticed by Steven Goddard, are clearly seen if the two slides are repeatedly shown one after the other as a blink comparator.
Fig. 2 shows the effect of tampering with the temperature record at both ends of the 20th century to sex up the warming rate. The practice is surprisingly widespread. There are similar examples from many records in several countries.
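The multiples quoted in the two captions are simple arithmetic on the figures given there:

```python
lovejoy_error = 0.03                      # K: the measurement error Lovejoy ignores
nz_adjustment = 1.0 - 0.3                 # Cº/century change between NIWA versions (Fig. 1)
giss_adjustment = 0.1 + 0.3               # Cº shift in 1934 and 1998 between versions (Fig. 2)

print(nz_adjustment / lovejoy_error)      # ~ 23
print(giss_adjustment / lovejoy_error)    # ~ 13
```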
But what is quantified, because Professor Jones’ HadCRUT4 temperature series explicitly states it, is the magnitude of the combined measurement, coverage, and bias uncertainties in the data.
Measurement uncertainty arises because measurements are taken in different places under various conditions by different methods. Anthony Watts’ exposure of the poor siting of hundreds of U.S. temperature stations showed up how severe the problem is, with thermometers on airport taxiways, in car parks, by air-conditioning vents, close to sewage works, and so on.
His campaign was so successful that the US climate community were shamed into shutting down or repositioning several poorly-sited temperature monitoring stations. Nevertheless, the Climate Reference Network, a network of several hundred ideally-sited stations with standardized equipment and reporting procedures, tends to show less warming than the older US Historical Climate Network.
That record showed – not greatly to skeptics’ surprise – a rate of warming noticeably slower than the shambolic legacy record. The new record was quietly shunted into a siding, seldom to be heard of again. It pointed to an inconvenient truth: some unknown but significant fraction of 20th-century global warming arose from old-fashioned measurement uncertainty.
Coverage uncertainty arises from the fact that temperature stations are not evenly spaced either spatially or temporally. There has been a startling decline in the number of temperature stations reporting to the global network: there were 6000 a couple of decades ago, but now there are closer to 1500.
Bias uncertainty arises from the fact that, as the improved network demonstrated all too painfully, the old network tends to be closer to human habitation than is ideal.
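HadCRUT4 publishes these components and combines them; its actual method uses an ensemble of realizations, but as a minimal sketch of the underlying idea (with illustrative component values, not the official ones), independent uncertainty components are conventionally combined in quadrature:

```python
import math

# Illustrative component uncertainties in kelvin; not HadCRUT4's official values.
measurement_unc = 0.10
coverage_unc = 0.08
bias_unc = 0.07

combined = math.sqrt(measurement_unc**2 + coverage_unc**2 + bias_unc**2)
print(f"combined uncertainty ~ +/-{combined:.2f} K")   # ~ +/-0.15 K
```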
Figure 3. The monthly HadCRUT4 global temperature anomalies (dark blue) and least-squares trend (thick bright blue line), with the combined measurement, coverage, and bias uncertainties shown. Positive anomalies are green; negative are red.
Fig. 3 shows the HadCRUT4 anomalies since 1880, with the combined uncertainties also shown. At present, the combined uncertainties are ±0.15 Cº, or almost a sixth of a Celsius degree up or down, over an interval of 0.3 Cº in total. This value, too, is an order of magnitude greater than the unrealistically tiny measurement error allowed for in Lovejoy’s equation (1).
The effect of the uncertainties is that for 18 years 2 months the HadCRUT4 global-temperature trend falls entirely within the zone of uncertainty (Fig. 4). Accordingly, we cannot tell even with 95% confidence whether any global warming at all has occurred since January 1996.
Figure 4. The HadCRUT4 monthly global mean surface temperature anomalies and trend, January 1996 to February 2014, with the zone of uncertainty (pale blue). Because the trend-line falls entirely within the zone of uncertainty, we cannot be even 95% confident that any global warming occurred over the entire 218-month period.
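The graphical test in Fig. 4 reduces to a simple comparison: fit a least-squares trend to the 218 monthly anomalies and ask whether the total change it implies exceeds the ±0.15 K combined uncertainty. Here is a minimal sketch with synthetic data (the real HadCRUT4 monthly anomalies would have to be substituted, and the full test would use the published month-by-month uncertainties rather than a single constant):

```python
import numpy as np

rng = np.random.default_rng(1)
n_months = 218                                # January 1996 to February 2014
t = np.arange(n_months) / 12.0                # time in years

# Synthetic stand-in for the HadCRUT4 anomalies: a tiny trend plus monthly noise.
anomalies = 0.005 * t + rng.normal(0.0, 0.1, n_months)

slope, intercept = np.polyfit(t, anomalies, 1)
trend_change = slope * (t[-1] - t[0])         # total change implied by the trend
combined_uncertainty = 0.15                   # K, per HadCRUT4's combined estimate

print(f"change implied by trend: {trend_change:+.3f} K")
print("indistinguishable from zero" if abs(trend_change) < combined_uncertainty
      else "warming detectable despite the uncertainty")
```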
Now, if you and I know all this, do you suppose the peer reviewers did not know it? The measurement error was crucial to the thesis of the Lovejoy paper, yet the reviewers allowed him to get away with saying it was only 0.03 Cº when the oldest of the global datasets, and the one favored by the IPCC, actually publishes, every month, combined uncertainties that are ten times larger.
Let us be blunt. Not least because of those uncertainties, compounded by data tampering all over the world, it is impossible to determine climate sensitivity either to the claimed precision of 0.01 Cº or to 99% confidence from the temperature data.
For this reason alone, the headline conclusion in the fawning press release about the “99% certainty” that climate sensitivity is similar to the IPCC’s estimate is baseless. The order-of-magnitude error about the measurement uncertainties is enough on its own to doom the paper. There is a lot else wrong with it, but that is another story.
Lord Monckton, you have gone to great lengths to avoid calling Lovejoy an idiot, so I am going to do it for you. And I wonder when he last opened a book on the contemporary climate data, analysis, and observed-versus-modeled comparisons that the rest of the world is reading.
The ‘value’ of this paper is all in its press release, and that is all you need to know about its scientific worth.
Magma says:
April 11, 2014 at 9:16 pm
A short comparison
S. Lovejoy: Physics PhD
You can’t get a “Physics PhD”, to my knowledge: after the BSc, all science is applied science. Me: BSc physics, MSc solid-state physics, for example. Having qualifications and written papers does not make you right.
Magma, a hot slimy mess with no direction known.
AND this is another paper of the lowest possible scientific value with too many errors to count.
99% in physics is illogical.
Lovejoy, where have I heard that name before?
Wikipedia:-
“Lovejoy is a British TV comedy-drama series based on the picaresque novels by John Grant….
The series concerns the adventures of the eponymous Lovejoy, played by Ian McShane, a likeable but roguish antiques dealer based in East Anglia. Within the trade, he has a reputation as a “divvie”, a person with almost supernatural powers for recognising exceptional items as well as distinguishing genuine antiques from clever fakes or forgeries.”
Good Lord. Fiction, Fakes & Forgeries based in East Anglia, no wonder it sounds familiar.
Peter Miller you have stolen my thunder.
But, just to reiterate, the geological record shows that there has always been climate change in the history of this planet, well before man came along. There have been glacial epochs when the carbon dioxide levels in the atmosphere have been much higher than they are today.
Dear Mr Frey, please translate. Thank you, and I remain with kind regards, yours, Michael Limburg
Statistics?? Well if you have one foot in boiling water and the other in iced water, statistically you should be quite comfortable.
It’s even worse than that: he does not understand Bayesian models. He uses a hockey-stick CO2 record with a hockey-stick (Ammann) paleo-reconstruction to reject the (small, stochastic) deduced natural variability as a cause. It never occurred to him that the Ammann data could be (and is) wrong… GIGO.
Their measurement precision has always bothered me. I have a very sensitive thermometer that reads to 1/10th C. The last digit bounces all around when trying to take a temperature measurement normally. And if you walk around outside with it you see all sorts of temperature gradients that span several degrees at any given time. Heck, you can even feel the gradients on your face sometimes as you walk around. There’s an 8 F difference throughout the year between my back yard and a friend’s back yard only 5 miles away. All this doesn’t even mention the hourly, daily, seasonal, and yearly fluctuations. Yet this can all be boiled down to a global average value of unthinkable precision? Methinks not.
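To be fair to the averagers, purely random reading error does shrink roughly as 1/√N when thousands of readings are combined; what does not shrink is any systematic bias shared by the readings (poor siting, adjustments and the like). A toy illustration, with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
n_stations = 3000
true_anomaly = 0.5                                   # K, the assumed "true" value

# Purely random reading errors largely average away...
readings = true_anomaly + rng.normal(0.0, 0.5, n_stations)
print(f"error of the mean, random noise only: {abs(readings.mean() - true_anomaly):.3f} K")

# ...but a bias shared by the stations does not, however many stations there are.
biased_readings = readings + 0.2                     # e.g. a common siting bias of 0.2 K
print(f"error of the mean, with shared bias:  {abs(biased_readings.mean() - true_anomaly):.3f} K")
```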
From the paper:

“The second innovation is to use the CO2 radiative forcing as a surrogate for all anthropogenic forcings. This includes not only the relatively well understood warmings due to the other long lived Green House Gases (GHG’s) but also the poorly understood cooling due to aerosols. The use of the CO2 forcing as a broad surrogate is justified by the common dependence (and high correlations) between the various anthropogenic effects due to their mutual dependencies on global economic activity (see fig. 2 a, b below).”
######################################################
This is NOT an innovation. It is what we did in our paper. I suppose I will have to write him
Is there *anything* in climate science that is reliable and credible anymore? It seems almost every data-set, every ‘peer reviewed’ paper, every over-exaggerated ‘new study’ is tainted, and every single time in the same direction.
This latest one is a typical and blatant example of a ‘report’ designed to fulfil a pre-determined agenda – 99%, ffs. It’s clearly an attempt to outdo the ‘97.1%’ that was previously bandied about.
Keep shining the light Lord M.
This seems to be a rehash of that old standby: temperature regressed against CO2 forcing from 1880 onwards. Except that Lovejoy does pay attention to the residuals from the regression, which are key to any statistical analysis. Remarkably, however, it is argued (quoting the dreaded proxies) that these residuals represent all possible ‘natural variation’, despite the evident poor fit of the regression over 1880-1940, as the residuals themselves indicate (see fig. 5 of Lovejoy). By definition, residuals from a regression with a constant term must have mean 0. The temperature increase of around 0.8 C is then compared with the zero-mean distribution inferred from these residuals. No surprise: such a temperature increase is very unlikely under a distribution derived in this way!
Even if I have this only half right, what can one say, especially as regards the accompanying rhetoric?
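To see why that objection bites, here is a toy version of the procedure (synthetic data and arbitrary coefficients, not Lovejoy's actual code or numbers): regress a trending temperature series on a trending forcing series, call the zero-mean residuals "natural variability", then ask how probable the century-scale rise is under that residual distribution. It comes out as wildly improbable almost by construction, whatever actually caused the trend.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1880, 2005)

# Synthetic forcing and temperature series that share a trend, whatever its cause.
forcing = 0.01 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)
temperature = 0.8 * forcing + rng.normal(0.0, 0.1, years.size)

# Regress temperature on forcing; the residuals are zero-mean by construction.
slope, intercept = np.polyfit(forcing, temperature, 1)
residuals = temperature - (slope * forcing + intercept)

# Now treat those residuals as "all natural variability" and test the full rise.
total_rise = temperature[-25:].mean() - temperature[:25].mean()
sigma_natural = residuals.std()
print(f"total rise {total_rise:.2f} K vs residual sigma {sigma_natural:.2f} K "
      f"-> {total_rise / sigma_natural:.0f} 'sigmas', hence 'cannot be natural'")
```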
Viscount Monckton of Brenchley:
I write to support your critique of the analysis by Lovejoy.
You rightly point out that measurement error prevents the conclusion of Lovejoy from being correct. I add that there is a more fundamental reason why Lovejoy’s analysis cannot be correct.
There is no agreed definition of ‘global temperature’.
Each team which provides values of ‘global temperature’ uses a unique definition of the parameter and frequently alters the definition it uses. Your article alludes to this when it notes the variations in the locations and numbers of measurement sites, and when it reports the frequent changes to the calculated collations of the measurements.
These variations and changes could be responsible for ALL the observed variation in global temperature values. Simply put, an undefined parameter cannot have a precision; and any alteration to its definition alters its determination, which alters its value. Hence, the variations in global temperature values could be a result of the variations in the applied definition(s) of global temperature.
Therefore, any claim to have determined the precision of global temperature values is spurious.
A fuller explanation of these issues is in Appendix B of this item:
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
Richard
““Let us … make the hypothesis that anthropogenic forcings are indeed dominant ”
Then a reduction in global temperatures would be impossible (if those anthropogenic forcings do not disappear).
A short comparison:
Magma (Latin for “the dregs”): A nobody (An avatar)
Christopher Monckton of Brenchley: A somebody (A real person)
“mephitically”
CMoB,
As a hard-working, articulate, clear-thinking skeptic, you have my respect and thanks. Small criticism: the arcane words are not helpful.
Strange… All or most of the CO2ers are looking down the road to what they hope is a “super” El Niño, which will cause an uptick in global temperatures. You can almost hear them cheering for it. Anyway, isn’t this a naturally occurring event? Also, how can they believe that 99% of it is caused by humans when they have spent the last several years using nature as an excuse for why the temperatures have flatlined?
So please explain why Prof Lovejoy only included an ‘estimate’ of measurement error and did not include an estimate of the coverage and bias uncertainties. Is he assuming (or implying) that coverage and bias uncertainty are smaller than measurement error? Or did he ignore them because they can’t be ‘estimated’? Also, since the measurement error was an estimate, what was the uncertainty associated with the measurement error? Is it anything like the energy-budget ‘measurement’ of 0.6 ± 17 W/m²?
And all that is aside from the fact that a global average surface measurement is physically meaningless, as others have pointed out and as folks like Dr Hansen agree.
It’s worth remarking here that prior to the first Climategate revelations (17 November 2009), I hadn’t noted the expression “noble cause corruption” used except in reference to criminal violations of ethical conduct among police officers “planting or fabricating evidence, lying on reports or in court, and generally abusing police authority to make a charge stick.”
Explanations for why police are susceptible to such corruption are manifold, ranging from bribery through budget aggrandizement, but pervasive in the ranks of government thugs (both in and out of uniform) is a mutually reinforcing sense that “making the world safer” trumps and will always trump the social contract predicated on unalienable individual human rights.
So it has proven with the alarmist “climate catastrophe” cultists masquerading as scientists while actively conniving at suppressio veri, suggestio falsi with generous overlardings of outright data-cooking, not only in pursuit of big bucks in “research” grant funding but also feeding their bloated egos by way of lamestream media propaganda and authoritarian politicians praising them for “making the world safer.”
“Noble cause corruption” beyond the interrogation room with the blood-spattered walls down at the police station as these overweening underperformers glory in playing “Cops of the World.”
eo says:
April 11, 2014 at 10:26 pm
One of the most confusing aspects of statistics to the layman is the confidence limit. 100 per cent confidence actually means covering everything from minus infinity to plus infinity in the case of the normal curve. The larger the confidence, or the higher the number of standard deviations applied, the more imprecise and untrustworthy the exercise. At the 100 per cent confidence level, everything is in.
You’re the one who’s confused. You’re confusing confidence level with confidence interval.
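For anyone following this exchange, the distinction is easy to see numerically: the confidence level fixes how many standard errors wide the interval around the same estimate is, so a higher level means a wider interval, not a worse (or better) estimate. A quick illustration using normal quantiles, with an illustrative estimate and standard error:

```python
from scipy.stats import norm

estimate, standard_error = 0.8, 0.1            # illustrative values only

for level in (0.90, 0.95, 0.99):
    z = norm.ppf(0.5 + level / 2)              # two-sided critical value
    low, high = estimate - z * standard_error, estimate + z * standard_error
    print(f"{level:.0%} confidence: z = {z:.2f}, interval = [{low:.2f}, {high:.2f}]")
```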
Correct me if I’m wrong, but if there were only 100 things the world could possibly do, then no matter what it did, there would be, on average, a 99% chance against that happening. If there’s a 99% chance against something happening, then it must have been caused by humans. Since the total number of things the world could possibly do is much much larger than 100, pretty much everything is caused by humans. Right?
How certain are you that the Atlantic Multidecadal Oscillation (AMO) is a natural cycle driven by changing conditions in the ocean circulation system?
I am about 80% certain of this but maybe others would only be 50%.
Have a look at just the Raw AMO index against Hadcrut4 going back to 1856. Now this is monthly data (rather than the sham of annual numbers that is used so often; the climate does not operate on annual timescales. It operates with roughly two-week periods of ups and downs, little understood by most).
The Raw AMO explains 49% of the monthly variation in Hadcrut4.
http://s8.postimg.org/7vabnd979/Hadcrut4_vs_Raw_AMO_Feb14.png
So multiply that 49% by your previous determination of whether the AMO is a natural cycle. In my case, I can say that at least 39% of the climate is driven by 1 natural climate variable alone.
Now throw in the ENSO and volcanoes and some of the other natural ocean circulation cycles and one can get up to 75% – not 1%.
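The 49% figure is just the r² from regressing the monthly HadCRUT4 anomalies on the raw monthly AMO index. Here is a sketch of that calculation with synthetic stand-in series (the real AMO and HadCRUT4 monthly data, aligned month-for-month from 1856, would have to be substituted):

```python
import numpy as np

rng = np.random.default_rng(4)
n_months = (2014 - 1856) * 12

# Synthetic stand-ins; substitute the real raw monthly AMO index and the
# HadCRUT4 monthly global anomalies over the same span.
amo = rng.normal(0.0, 0.2, n_months).cumsum() * 0.01
hadcrut4 = 0.7 * amo + rng.normal(0.0, 0.05, n_months)

r = np.corrcoef(amo, hadcrut4)[0, 1]
print(f"r^2 = {r**2:.2f}")                       # the commenter reports ~0.49 for the real series

# Scale by one's prior that the AMO itself is natural (80% in the comment above).
print(f"share attributed to this one natural cycle: {0.80 * r**2:.0%}")
```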
It is good to see figures 1 and 2 so publicized, as the rewriting of data is at the core of the matter (although the HadCRUT4 record subsequently displayed in this article is itself one of those rewritten sources).
That is not only because of its large significance in itself: converting the relatively double-peaked temperature history of the past century towards a hockey stick is what allows the false claim that it is unrelated to the double-peaked history of solar activity over the same period, while likewise concealing a multitude of other peak-and-trough matches, as illustrated in http://tinyurl.com/nbnh7hq
As also shown there, the Modern Warm Period (the global warming scare’s basis) is similar to the Medieval Warm Period in both magnitude and prime cause.
(The common disinformation claim that solar variation is too small to cause a temperature change of a few tenths of a kelvin, with global warming so far amounting to a change of order 1/1000th or less in average absolute temperature compared with the 1930s, is nonsense in the context of changes of up to several percent in average low cloud cover and in measured tropospheric ionization, which are influenced by still greater variation in the solar modulation of the cosmic-ray flux.)