By Christopher Monckton of Brenchley
It is time to be angry at the gruesome failure of peer review that allows the publication of papers such as the recent effusion from Professor Lovejoy of McGill University, whose gushing, widely-circulated press release – of the kind that seems to accompany every mephitically ectoplasmic emanation from the Forces of Darkness these days – billed it thus:
“Statistical analysis rules out natural-warming hypothesis with more than 99 percent certainty.”
One thing anyone who studies any kind of physics knows is that claiming results to three standard deviations, or 99% confidence, requires – at minimum – that the data underlying the claim are exceptionally precise and trustworthy and, in particular, that the measurement error is minuscule.
Here is the Lovejoy paper’s proposition:
“Let us … make the hypothesis that anthropogenic forcings are indeed dominant (skeptics may be assured that this hypothesis will be tested and indeed quantified in the following analysis). If this is true, then it is plausible that they do not significantly affect the type or amplitude of the natural variability, so that a simple model may suffice:
“ΔTglobe/Δt is the measured mean global temperature anomaly, ΔTanth/Δt is the deterministic anthropogenic contribution, ΔTnat/Δt is the (stochastic) natural variability (including the responses to the natural forcings), and Δε/Δt is the measurement error. The last can be estimated from the differences between the various observed global series and their means; it is nearly independent of time scale [Lovejoy et al., 2013a] and sufficiently small (≈ ±0.03 K) that we ignore it.”
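The equation itself does not survive in the excerpt as reproduced here. From the quoted definitions, however, the “simple model” is evidently just the sum of the anthropogenic and natural contributions plus the measurement error – in the notation quoted above:

ΔTglobe/Δt = ΔTanth/Δt + ΔTnat/Δt + Δε/Δt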
Just how likely is it that we can measure global mean surface temperature over time either as an absolute value or as an anomaly to a precision of less than 1/30 Cº? It cannot be done. Yet it was essential to Lovejoy’s fiction that he should pretend it could be done, for otherwise his laughable attempt to claim 99% certainty for yet another me-too, can-I-have-another-grant-please result using speculative modeling would have visibly failed at the first fence.
Some of the tamperings that have depressed temperature anomalies in the 1920s and 1930s, to make warming over the past century seem worse than it really was, are a great deal larger than a thirtieth of a Celsius degree.
Fig. 1 shows a notorious instance from New Zealand, courtesy of Bryan Leyland:
Figure 1. Annual New Zealand national mean surface temperature anomalies, 1909-2008, from NIWA, showing a warming rate of 0.3 Cº/century before “adjustment” and 1 Cº/century afterward. This “adjustment” is 23 times the Lovejoy measurement error.
Figure 2: Tampering with the U.S. temperature record. The 2008 version of the GISS record (right panel) shows 1934 as 0.1 Cº lower and 1998 as 0.3 Cº higher than in the original 1999 version (left panel). This tampering, calculated to increase the apparent warming trend over the 20th century, is more than 13 times the tiny measurement error mentioned by Lovejoy. The startling changes to the dataset between the 1999 and 2008 versions, first noticed by Steven Goddard, are clearly seen if the two slides are repeatedly shown one after the other as a blink comparator.
Fig. 2 shows the effect of tampering with the temperature record at both ends of the 20th century to sex up the warming rate. The practice is surprisingly widespread. There are similar examples from many records in several countries.
But what is quantified – because Professor Jones’ HadCRUT4 temperature series explicitly states it – is the magnitude of the combined measurement, coverage, and bias uncertainties in the data.
Measurement uncertainty arises because measurements are taken in different places under various conditions by different methods. Anthony Watts’ exposure of the poor siting of hundreds of U.S. temperature stations showed how severe the problem is, with thermometers on airport taxiways, in car parks, by air-conditioning vents, close to sewage works, and so on.
(corrected paragraph) His campaign was so successful that the US climate community were shamed into shutting down or repositioning several poorly-sited temperature monitoring stations. Nevertheless, a network of several hundred ideally-sited stations with standardized equipment and reporting procedures, the Climate Reference Network, tends to show less warming than the older US Historical Climatology Network.
That record showed – not greatly to skeptics’ surprise – a rate of warming noticeably slower than the shambolic legacy record. The new record was quietly shunted into a siding, seldom to be heard of again. It pointed to an inconvenient truth: some unknown but significant fraction of 20th-century global warming arose from old-fashioned measurement uncertainty.
Coverage uncertainty arises from the fact that temperature stations are not evenly spaced either spatially or temporally. There has been a startling decline in the number of temperature stations reporting to the global network: there were 6000 a couple of decades ago, but now there are closer to 1500.
Bias uncertainty arises from the fact that, as the improved network demonstrated all too painfully, the old network tends to be closer to human habitation than is ideal.
Figure 3. The monthly HadCRUT4 global temperature anomalies (dark blue) and least-squares trend (thick bright blue line), with the combined measurement, coverage, and bias uncertainties shown. Positive anomalies are green; negative are red.
Fig. 3 shows the HadCRUT4 anomalies since 1880, with the combined uncertainties also shown. At present, the combined uncertainties are ±0.15 Cº, or almost a sixth of a Celsius degree up or down, over an interval of 0.3 Cº in total. That interval, too, is an order of magnitude greater than the unrealistically tiny measurement error allowed for in Lovejoy’s equation (1).
The effect of the uncertainties is that for 18 years 2 months the HadCRUT4 global-temperature trend falls entirely within the zone of uncertainty (Fig. 4). Accordingly, we cannot tell even with 95% confidence whether any global warming at all has occurred since January 1996.
Figure 4. The HadCRUT4 monthly global mean surface temperature anomalies and trend, January 1996 to February 2014, with the zone of uncertainty (pale blue). Because the trend-line falls entirely within the zone of uncertainty, we cannot be even 95% confident that any global warming occurred over the entire 218-month period.
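As a rough cross-check on this sort of claim, here is a minimal sketch – not the method used to produce Fig. 4 – that fits a least-squares trend to monthly anomalies and asks whether the total change it implies exceeds the 0.3 Cº width of the combined-uncertainty band. The file name, single-column layout, and the ±0.15 Cº half-width are illustrative assumptions.

```python
# Minimal sketch (file/column assumptions noted above): fit an ordinary
# least-squares trend to monthly anomalies for Jan 1996 - Feb 2014 and compare
# the implied total change with an assumed +/-0.15 C combined uncertainty.
import numpy as np

HALF_WIDTH = 0.15  # assumed combined measurement/coverage/bias half-width, in C

# Hypothetical input: 218 monthly anomalies, one per line, Jan 1996 .. Feb 2014.
anoms = np.loadtxt("hadcrut4_monthly_1996_2014.txt")  # hypothetical file name
years = np.arange(anoms.size) / 12.0                  # elapsed time in years

slope, intercept = np.polyfit(years, anoms, 1)        # trend in C per year
total_change = slope * (years[-1] - years[0])         # C over the whole period

print(f"Trend: {slope * 100:.2f} C/century; total change: {total_change:.3f} C")
print("Trend stays inside the 0.3 C-wide band:", abs(total_change) < 2 * HALF_WIDTH)
```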
Now, if you and I know all this, do you suppose the peer reviewers did not know it? The measurement error was crucial to the thesis of the Lovejoy paper, yet the reviewers allowed him to get away with saying it was only 0.03 Cº when the oldest of the global datasets, and the one favored by the IPCC, actually publishes, every month, combined uncertainties that are ten times larger.
Let us be blunt. Not least because of those uncertainties, compounded by data tampering all over the world, it is impossible to determine climate sensitivity either to the claimed precision of 0.01 Cº or to 99% confidence from the temperature data.
For this reason alone, the headline conclusion in the fawning press release about the “99% certainty” that climate sensitivity is similar to the IPCC’s estimate is baseless. The order-of-magnitude error about the measurement uncertainties is enough on its own to doom the paper. There is a lot else wrong with it, but that is another story.
Mark Bofill says: “I’m not conversant with Haar fluctuations and fluctuation analysis, so I’m going to be puzzling over the math in this paper for awhile. ”
That’s the whole game. All the fancy stats stuff is a sand-in-your-eyes tactic. It starts out from some wild and questionable assumptions and approximations, relies on proven-inadequate multi-proxy mumbo-jumbo, and then pretends to get 99%-significant results, in the clear hope that the method will lose anyone reading it.
It’s the emperor’s clothes all over again. Only really intelligent and well-trained people (like Magma, LOL) can understand it. So no one is supposed to say it’s wrong for fear of looking ignorant.
Luckily, there’s our resident classics-BA-educated Monckton to point the finger and cry “Look, he has no clothes!”
Thermometers: I have about 10 of them sitting in a beaker at work. About once a year I calibrate some thermocouple sensors against ice and water and boiling water (corrected for the 300 meter altitude; see the sketch after this comment). I include the thermometers just because they are there. We do not have good mercury thermometers anymore, and the organic-fluid-filled ones have a horrible flaw. They contain a green dye so one can see the liquid column. I used one to monitor a hot water bath over a long period. Some of the fluid evaporated and condensed in the cooler top of the fine tube. The dye did not do the same. My thermometer now had an invisible part of the fluid column at the top, and the indicated temperature was off by a nifty 10°C! During the calibration, I am happy if the thermometers are within 1°C.
Back to the calibration procedure: The ice bath is a Dewar filled with fresh snow and de-ionized water, covered with a Styrofoam cap. It has as uniform a temperature as I can make it. The best temperature indicator I have is actually a meter with a platinum resistance sensor. It indicates temperature variations of at least 0.1°C through the snow and water. Not that surprising, as near the top heat travels down the 1/8-inch-diameter stainless rod of the sensor to heat the platinum resistor.
Yes, I can easily measure a temperature difference of 0.01°C by making two thermocouples from the same reel of wire and hooking them up in opposition. But I have no idea how one would measure an absolute temperature to that degree of precision on a limited budget. ±0.03° from a bunch of ½°-accuracy mercury thermometers stuck in wooden boxes out in the fields is ludicrous. You get a bigger change in a bare-wire thermocouple by illuminating it with a laser pointer.
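For anyone curious about the altitude correction mentioned above, here is a minimal sketch of the usual back-of-envelope calculation: estimate the pressure at 300 m from the barometric formula, then invert the Antoine equation for water to get the local boiling point. The constants are standard textbook values; the specific inputs (300 m, 15 °C ambient) are illustrative assumptions.

```python
# Rough boiling-point correction for altitude: barometric formula for pressure,
# then the Antoine equation for water inverted to give the boiling temperature.
import math

def pressure_mmHg(altitude_m, temp_K=288.15):
    """Approximate atmospheric pressure via the isothermal barometric formula."""
    M, g, R, P0 = 0.02896, 9.81, 8.314, 760.0   # kg/mol, m/s^2, J/(mol K), mmHg
    return P0 * math.exp(-M * g * altitude_m / (R * temp_K))

def boiling_point_C(p_mmHg):
    """Invert the Antoine equation for water (valid roughly 1-100 C)."""
    A, B, C = 8.07131, 1730.63, 233.426
    return B / (A - math.log10(p_mmHg)) - C

p = pressure_mmHg(300.0)                                       # ~733 mmHg at 300 m
print(f"Boiling point at 300 m: {boiling_point_C(p):.1f} C")   # ~99.0 C
```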
When I look at the paper, the model is defined as:
Tglobe(t) = … + ε(t)
As opposed to the Δε/Δt that is quoted above. The Δε/Δt could be interpreted as a derivative (what else could change over change mean?), but the ε(t) could be interpreted as a function of time.
Given that the value for ε(t) has been written as “0.03 K”, I don’t see how that could be a derivative, since it’s not given in the units of a derivative. The unfortunate thing is that, according to the HadCRUT4 total uncertainty, the derivative with respect to time (in decimal years) is -0.0013.
See this image I made of the HadCRUT4 uncertainty that shows the slope.
The uncertainty as a function of time has a mean value of +/- 0.218, which is far greater than 0.03. So, interpreting the ε(t) as a function of time makes it hard to believe that it can just be ignored.
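For what it is worth, the two figures quoted above (the mean half-width and its slope over time) can be reproduced along these lines. This is a minimal sketch; the file name and three-column layout (decimal year, lower bound, upper bound of the combined interval) are assumptions, not the published HadCRUT4 format.

```python
# Sketch: mean half-width and linear slope of the HadCRUT4 combined uncertainty.
# Assumes a pre-extracted file with columns: decimal_year, lower_bound, upper_bound.
import numpy as np

year, lo, hi = np.loadtxt("hadcrut4_uncertainty.txt", unpack=True)  # hypothetical file
half_width = (hi - lo) / 2.0                 # +/- uncertainty at each month, in K

print(f"Mean half-width: +/-{half_width.mean():.3f} K")
slope = np.polyfit(year, half_width, 1)[0]   # K per year; negative if shrinking
print(f"Slope of the half-width over time: {slope:.4f} K/yr")
```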
Digging into the paper, they say they use HadCRUT3 and two other sources for temperature data (NASA GISS and NOAA NCDC). These are gridded data, so my little plot of the global-mean uncertainty doesn’t apply directly. Don’t worry, though, things get interesting. The paper says:
Well, after scanning through the Lovejoy paper, this came up: “In this paper we have argued that since ≈1880, anthropogenic warming has dominated the natural variability to such an extent that straightforward empirical estimates of the total warming can be made”. I don’t think even the IPCC agrees with that? If the paper is set up to “prove” that anthropogenic warming is dominating by first assuming that is a fact and then adjusting the input parameters accordingly, why should anyone be surprised at the outcome? Has Lovejoy been a consultant for the Crimean referendum, perhaps?
In addition to Lord Monckton’s persistent perspicacity, his use of wonderfully descriptive phraseology such as “mephitically ectoplasmic emanation from the Forces of Darkness” is a sheer delight to read, as well as often forcing one to scurry for an online dictionary.
Also, “mephitically ectoplasmic emanation” sounds like a good name for a band.
Zeke Hausfather on April 11, 2014 at 10:14 pm
Christopher,
You mention in this article that “That record showed – not greatly to skeptics’ surprise – a rate of warming noticeably slower than the shambolic legacy record.” I presume you are referring to the Climate Reference Network. Can you provide evidence that the trend in CRN stations is lower than the trend in USHCN stations? As far as I can tell, it is not significantly different: http://rankexploits.com/musings/wp-content/uploads/2013/01/Screen-Shot-2013-01-16-at-10.37.51-AM.png
In a world where the peer review of scientific publications was operating correctly, the Lovejoy paper would never have seen the light of day. Monckton’s rebuttal is of such high quality that it should be formally submitted – as a rebuttal – to the journal that published Lovejoy’s paper. And, if they don’t publish the rebuttal, more shame on them.
John Archer, I noticed the same error on Fig. 2. I am assuming the adjustment should read that 1934 was 0.1C lower and 1998 was 0.3C higher, otherwise the argument doesn’t make sense.
Not to mention the oceans and many remote locations.
Huh! That would be one peer-reviewed paper every 2 weeks and 1 day! How is this possible? Either you have made a mistake, or I have.
With some exceptions, the replies to “Magma’s” comment were the winner in this one, regardless of yet another priceless piece from Lord M. Ya gotta love the precision of good and proper English delivered by an established Englishman. Even if some of it goes over my head, it goes over well 😉 Sometimes the joy is simply in the reading.
If you count “adjustments” made to the data then Lovejoy might be right.
Look at the current GISS here (http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt) and then look at the GISS from January 2012 here (http://web.archive.org/web/20120104220939/http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt).
Even the first numbers from January 1880 are different. What could happen between 2012 and 2014 that could change what happened in 1880? Does NASA have a time machine? Are we to believe that each and every paper record was looked at and analyzed?
I’m 99.9% certain that somebody just sat back in front of his computer and pushed a button to make the “adjustments”.
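Anyone can check this along the following lines: fetch the current table and the 2012 archived copy from the URLs above and compare a given year’s monthly values. This is only a minimal sketch; it assumes the usual GLB.Ts+dSST.txt layout (rows beginning with a four-digit year, monthly values in hundredths of a degree), which may change.

```python
# Sketch: compare one year's monthly values in two versions of GISS GLB.Ts+dSST.txt.
# Assumes rows begin with a 4-digit year followed by 12 monthly values in 0.01 C.
import urllib.request

CURRENT = "http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt"
ARCHIVED = ("http://web.archive.org/web/20120104220939/"
            "http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt")

def monthly_values(url, year):
    """Return the 12 monthly anomalies (in C) for the given year, or None."""
    text = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0] == str(year):
            return [int(v) / 100.0 if v.lstrip("-").isdigit() else None
                    for v in parts[1:13]]
    return None

for year in (1880, 1934, 1998):
    print(year, "archived 2012 copy:", monthly_values(ARCHIVED, year))
    print(year, "current copy:      ", monthly_values(CURRENT, year))
```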
Really, we need to start with the basic assumptions behind any measurement system:
1) Are the temperature values drawn from one day’s sample coming from the same population of temperatures found on any other day?
No: each day’s population of temperatures is probably, and always potentially, unique.
2) What is the standard deviation / error of a sample of size one? Infinite, or unknown.
3) What is the measurement / instrument error (limit of observation) of a mercury-in-glass thermometer, according to manufacturers? ±0.5 degrees.
4) Can measurement / instrument error be altered, removed, or decreased by a wave of the hand? No; alteration requires a population of measurements from that individual instrument in situ adequate to quantify the nature and size of the error.
5) Does the central limit theorem apply to any surface-station temperature data? No.
a) Are the data sampled randomly? No.
b) Are sample values independent of each other? Let me know if you can get a valid answer to this assumption applicable to the existing data set.
c) The 10% condition? Yes, one observation will certainly be less than 10%, heh.
d) Is the sample size sufficiently large? No, one observation is somewhat too low. /sarc
6) Obviously, multiple non-rigorous, ad hoc “adjustments” of values with the wave of a hand are not going to do your certainty any favors.
Conclusion:
The conclusions of this paper are bullcrap. All surface-station data are credited with more certainty than the data themselves warrant – a point the little simulation below illustrates.
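The point about hand-waving away instrument error can be shown with a toy simulation: averaging many readings shrinks independent random error, but it does nothing to a shared bias (bad siting, a mis-calibrated batch of instruments). A minimal sketch, with every number invented purely for illustration:

```python
# Toy simulation: independent random error shrinks when averaged; a shared bias
# (bad siting, a mis-calibrated instrument batch) does not. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)
n_stations, true_temp = 1000, 15.0
reading_sigma = 0.5   # per-reading random error, in C (illustrative)
shared_bias = 0.2     # common siting/calibration bias, in C (illustrative)

readings = true_temp + shared_bias + rng.normal(0.0, reading_sigma, n_stations)

print(f"Mean of {n_stations} readings: {readings.mean():.3f} C (truth: {true_temp} C)")
print(f"Standard error of the mean: {readings.std(ddof=1) / np.sqrt(n_stations):.3f} C")
# The standard error falls like 1/sqrt(N), but the 0.2 C shared bias stays in
# the mean no matter how many stations are averaged.
```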
Magma says: April 11, 2014 at 9:16 pm
” A short comparison: S. Lovejoy: Physics PhD; climate, meteorological and statistical expert; 500+ publications ”
Do you realize how much BS he has produced?
Personally, I suspect he is a skeptic. He will gather statistics on how many researchers and media outlets cite him. After that, he will reveal that his study was a hoax to measure the credulity of those believing in the global warming sham.
“and sufficiently small (≈ ±0.03 K) that we ignore it.”
Yeah, right.
How did this arrant nonsense pass any sort of review, and how did it get published?
Did the taxpayer pay for this rubbish?
re: Christopher Monckton of Brenchley
…now there’s peer review one can trust to be honest and thorough, not to mention humorous at times.
re: Magma
…1st rule of holes.
What all have failed to understand here from this paper by Canadian Professor Lovejoy is that we (Canada) have “secret” high tech the rest of you bums have not yet produced. I’d tell you, but it’s very, very secret. We have, shall I say loosely, developed the ability to measure, absolutely precisely, worldwide temperatures at a glance. The jealousy, so evident here, is uncalled for and downright embarrassing. Moreover, we’re developing a “stealth” snowmobile, so when we invade next winter you won’t see us coming until it’s too late.
albertalad says:
April 12, 2014 at 10:06 am
“What all have failed to understand here from this paper by Canadian Professor Lovejoy is that we (Canada) have “secret” high tech the rest of you bums have not yet produced…
… Moreover, we’re developing a “stealth” snowmobile, so when we invade next winter you won’t see us coming until it’s too late.”
___________________
Bring it, hoseurs.
(Eh?)
500 papers. I guess that makes Lovejoy 200 papers smarter than Einstein.
Alan Robertson says: Bring it, hoseurs. (Eh?)
—————-
Lol!
As a layman, I have sufficient education and work experience to appreciate facts as amply displayed by contributors to this web site.
As a layman, I have asked, and do ask, myself who has an interest in promoting untruths against the interest of all of us on this Planet, and for what purpose. I wish I could answer my own question with solid facts, and can only ascertain that greed, stupidity, and lust for power supersede knowledge and its quest. May this site and its contributors garner strength and continue. Well done, Anthony Watts.
“His [Anthony Watts] campaign was so successful that the U.S. climate community were shamed into establishing a network of several hundred ideally-sited temperature monitoring stations with standardized equipment and reporting procedures.”
Um, no. USCRN was established before Watts started WUWT or his Surface Stations project, so the above statement is simply false.
REPLY: Yes, DRowe is correct, USCRN preceded my work. What IS true however is that as a result of my work, NOAA removed many of the worst stations from the USHCN network – Anthony
Many thanks to all who have commented so very kindly.
I apologize for the error in the caption to Fig. 2, where I inadvertently transposed the words “higher” and “lower”.
Lord Monckton,
It’s difficult to say which is more fun: your shafting of that squealer, Lovejoy, or pig sticking. Come to think of it, it amounts to the same thing, with the only difference being whether one does it on horseback or not.
In either case it’s a noble sport for the truly dedicated gentleman. 🙂
Stick it to ’em, m’lud! Oink oink, squeal squeal!
Presumably humans had little influence before 1950. However, the slope of HadCRUT4 between 1900 and 1950 is larger than that from 1900 to date. Perhaps CO2 causes cooling?
See:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1900/plot/hadcrut4gl/from:1900/to:1950/trend/plot/hadcrut4gl/from:1900/trend
The “500+” publications of Mr. Magma, attributed to Lovejoy, might just be the number of reprints printed to hand out to the remaining subscribers of the Montreal Gazette and the Toronto Star…