By Christopher Monckton of Brenchley
It is time to be angry at the gruesome failure of peer review that allows publication of papers such as the recent effusion of Professor Lovejoy of McGill University, which the gushing, widely-circulated press release that seems to accompany every mephitically ectoplasmic emanation from the Forces of Darkness these days billed thus:
“Statistical analysis rules out natural-warming hypothesis with more than 99 percent certainty.”
One thing anyone who studies any kind of physics knows is that claiming results to three standard deviations, or better than 99% confidence, requires – at minimum – that the data underlying the claim be exceptionally precise and trustworthy and, in particular, that the measurement error be minuscule.
Here is the Lovejoy paper’s proposition:
“Let us … make the hypothesis that anthropogenic forcings are indeed dominant (skeptics may be assured that this hypothesis will be tested and indeed quantified in the following analysis). If this is true, then it is plausible that they do not significantly affect the type or amplitude of the natural variability, so that a simple model may suffice:

“ΔTglobe/Δt = ΔTanth/Δt + ΔTnat/Δt + Δε/Δt,     (1)

“where ΔTglobe/Δt is the measured mean global temperature anomaly, ΔTanth/Δt is the deterministic anthropogenic contribution, ΔTnat/Δt is the (stochastic) natural variability (including the responses to the natural forcings), and Δε/Δt is the measurement error. The last can be estimated from the differences between the various observed global series and their means; it is nearly independent of time scale [Lovejoy et al., 2013a] and sufficiently small (≈ ±0.03 K) that we ignore it.”
Just how likely is it that we can measure global mean surface temperature over time either as an absolute value or as an anomaly to a precision of less than 1/30 Cº? It cannot be done. Yet it was essential to Lovejoy’s fiction that he should pretend it could be done, for otherwise his laughable attempt to claim 99% certainty for yet another me-too, can-I-have-another-grant-please result using speculative modeling would have visibly failed at the first fence.
Some of the tamperings that have depressed temperature anomalies in the 1920s and 1930s to make warming this century seem worse than it really was are a great deal larger than a thirtieth of a Celsius degree.
Fig. 1 shows a notorious instance from New Zealand, courtesy of Bryan Leyland:
Figure 1. Annual New Zealand national mean surface temperature anomalies, 1909-2008, from NIWA, showing a warming rate of 0.3 Cº/century before “adjustment” and 1 Cº/century afterward. This “adjustment” is 23 times the Lovejoy measurement error.
Figure 2: Tampering with the U.S. temperature record. The GISS record in its 2008 version (right panel) shows 1934 some 0.1 Cº lower and 1998 some 0.3 Cº higher than in its original 1999 version (left panel). This tampering, calculated to increase the apparent warming trend over the 20th century, is more than 13 times the tiny measurement error mentioned by Lovejoy. The startling changes to the dataset between the 1999 and 2008 versions, first noticed by Steven Goddard, are clearly seen if the two slides are repeatedly shown one after the other as a blink comparator.
Fig. 2 shows the effect of tampering with the temperature record at both ends of the 20th century to sex up the warming rate. The practice is surprisingly widespread. There are similar examples from many records in several countries.
But what is quantified, because Professor Jones’ HadCRUT4 temperature series explicitly states it, is the magnitude of the combined measurement, coverage, and bias uncertainties in the data.
Measurement uncertainty arises because measurements are taken in different places under various conditions by different methods. Anthony Watts’ exposure of the poor siting of hundreds of U.S. temperature stations showed up how severe the problem is, with thermometers on airport taxiways, in car parks, by air-conditioning vents, close to sewage works, and so on.
His campaign was so successful that the US climate community was shamed into shutting down or repositioning several poorly-sited temperature monitoring stations. Nevertheless, a network of several hundred ideally-sited stations with standardized equipment and reporting procedures, the Climate Reference Network, tends to show less warming than the older US Historical Climatology Network.
That record showed – not greatly to skeptics’ surprise – a rate of warming noticeably slower than the shambolic legacy record. The new record was quietly shunted into a siding, seldom to be heard of again. It pointed to an inconvenient truth: some unknown but significant fraction of 20th-century global warming arose from old-fashioned measurement uncertainty.
Coverage uncertainty arises from the fact that temperature stations are not evenly spaced either spatially or temporally. There has been a startling decline in the number of temperature stations reporting to the global network: there were 6000 a couple of decades ago, but now there are closer to 1500.
Bias uncertainty arises from the fact that, as the improved network demonstrated all too painfully, the old network tends to be closer to human habitation than is ideal.
Figure 3. The monthly HadCRUT4 global temperature anomalies (dark blue) and least-squares trend (thick bright blue line), with the combined measurement, coverage, and bias uncertainties shown. Positive anomalies are green; negative are red.
Fig. 3 shows the HadCRUT4 anomalies since 1880, with the combined uncertainties also shown. At present, the combined uncertainties are ±0.15 Cº, or almost a sixth of a Celsius degree up or down, over an interval of 0.3 Cº in total. This value, too, is an order of magnitude greater than the unrealistically tiny measurement error allowed for in Lovejoy’s equation (1).
The effect of the uncertainties is that for 18 years 2 months the HadCRUT4 global-temperature trend falls entirely within the zone of uncertainty (Fig. 4). Accordingly, we cannot tell even with 95% confidence whether any global warming at all has occurred since January 1996.
Figure 4. The HadCRUT4 monthly global mean surface temperature anomalies and trend, January 1996 to February 2014, with the zone of uncertainty (pale blue). Because the trend-line falls entirely within the zone of uncertainty, we cannot be even 95% confident that any global warming occurred over the entire 218-month period.
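For readers who want to see the shape of that comparison in code, here is a minimal sketch – my own illustration with placeholder numbers, not the construction actually used for Fig. 4: fit a least-squares trend to the monthly anomalies and ask whether the total change it implies over the period is smaller than the published combined uncertainty, in which case the trend cannot be distinguished from zero at that level of uncertainty.

```python
# Minimal sketch with placeholder data (not the construction behind Fig. 4):
# fit an ordinary least-squares trend to monthly anomalies and compare the
# total change it implies over the period with the published combined
# uncertainty (~±0.15 K for HadCRUT4 at present).
import numpy as np

def trend_indistinguishable_from_zero(anomalies_K, half_width_K=0.15):
    """Return (total_change_K, True if that change sits inside the band)."""
    months = np.arange(len(anomalies_K))
    slope = np.polyfit(months, anomalies_K, 1)[0]        # K per month
    total_change = slope * (len(anomalies_K) - 1)        # K over the whole span
    return total_change, abs(total_change) < half_width_K

# 218 months (January 1996 to February 2014) of illustrative, noisy anomalies.
rng = np.random.default_rng(0)
fake_anomalies = 0.0003 * np.arange(218) + rng.normal(0.0, 0.1, 218)
print(trend_indistinguishable_from_zero(fake_anomalies))
```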
Now, if you and I know all this, do you suppose the peer reviewers did not know it? The measurement error was crucial to the thesis of the Lovejoy paper, yet the reviewers allowed him to get away with saying it was only 0.03 Cº when the oldest of the global datasets, and the one favored by the IPCC, actually publishes, every month, combined uncertainties that are ten times larger.
Let us be blunt. Not least because of those uncertainties, compounded by data tampering all over the world, it is impossible to determine climate sensitivity either to the claimed precision of 0.01 Cº or to 99% confidence from the temperature data.
For this reason alone, the headline conclusion in the fawning press release about the “99% certainty” that climate sensitivity is similar to the IPCC’s estimate is baseless. The order-of-magnitude error about the measurement uncertainties is enough on its own to doom the paper. There is a lot else wrong with it, but that is another story.
Professor Lovejoy gets two Brownie points: the first for having the courage to enter the lions’ den (we shall growl at him, but we shall not eat him), and the second for having the sense of humor to sign off as “Forces of Darkness”. Courage and a sense of humor are rare qualities among supporters of the official line on climate, so I begin by welcoming him.
He raises several points. First, he asks why I expressed terms in an equation concerned with changes in temperature over time as ΔTn/Δt. Alas, I had a corrupt copy of the draft paper, which appeared to have been written that way.
Secondly, he says that the standard deviations of the differences between his four globally averaged temperature series since 1880 (NOAA, GISS, HadCRUT3 and 20CR) agreed with one another to 0.03 K up to centennial timescales, justifying him in ignoring any error term. Here we come up against a methodological concern about the approach to error and its propagation.
I shall go along with him to the extent of observing that the least-squares linear-regression trends on the first three datasets (using monthly data) since 1880 are within 0.05 K of one another.
However, as the head posting pointed out, the 2-sigma error bars published by Hadley/CRU occupy an interval [−0.15, +0.15] K, or 0.3 K in all – ten times his error term, and a very large fraction of the 0.9 K global warming since 1880 (at which time, as my graph plotting the error bars makes clear, the error bars were considerably broader than they are today).
It follows that the conformity between the three datasets may be (and probably is) either coincidence or a consequence of inter-comparison, a deadly process in which keepers of the temperature datasets seem to indulge no less vigorously than climate modelers. The danger in inter-comparison is that in the end everyone agrees to make the same errors.
Thirdly, the Professor argues that the data tampering to which I had referred was likely to be insignificant because one of his four input datasets, the 20th-Century Reanalysis, did not use land station temperatures at all but (if I understand him correctly) inferred the global temperatures from land station pressure data and sea-surface temperatures, and still produced results similar to those of the three temperature datasets.
While I welcome in general any reasonable attempt to calibrate uncertain and rather poorly-maintained temperature records (see Climategate emails, passim), the sea surface occupies 71% of the Earth’s surface, so I should not a priori expect much difference from a dataset which, as to its land values, naturally had to be calibrated against existing temperature records and, as to its sea values, was dependent upon the same data as the other datasets.
Fourthly, the Professor rather optimistically says that because of the negligible discrepancy between 20CR and the rest “any biases from manipulation of temperature data must be small”. Alas, even if that too were no mere coincidence, other biases are all too evident.
For instance, McKitrick & Michaels (2007) found so significant a correlation between regional rates of industrial growth and regional rates of warming that they concluded the warming trend over land had been overstated by about a factor of two between 1980 and 2002: “Using the regression model to filter the extraneous, nonclimatic effects reduces the estimated 1980–2002 global average temperature trend over land by about half”. And 1980-2002 is when the thick end of the warming since 1950 is thought to have occurred. More about that period in a moment.
Fifthly, the Professor says the multiproxies agree with one another on the 125-year timescale in question, even if they disagree over longer timescales. He says we can even take the medieval warm period as being warmer than 2013 [as the overwhelming weight of evidence from the proxy reconstructions all over the world suggests: see the medieval warm period database at co2science.org], but only the past 125 years are important.
Here again, my concern is methodological. If the medieval warm period was indeed warmer than the present, and if the rate of warming from, say, 1695-1735 was twice the fastest supra-decadal warming rate that has occurred since, as the Central England record suggests it was, then the mere fact that the 20th century warmed at a far from unprecedented rate and to a far from unprecedented absolute temperature tells us – on its own – nothing about what caused the warming.
From 1983-2001, for instance, a naturally-occurring reduction in global cloud cover caused a radiative forcing of 2.6 Watts per square meter (Pinker et al., 2005). Now, the entire anthropogenic influence from 1980 to the present, according to IPCC (2013), amounts to just 1 Watt per square meter. So the natural forcing during the period of rapid global warming that coincided with the positive or warming phase of the Pacific Decadal Oscillation was thrice the anthropogenic forcing.
So, let us cast up the temperature record since 1880. From 1880 to 1910, global temperatures fell. From 1911-1945, they rose. But on any view we cannot have been much to blame, because according to the IPCC the anthropogenic forcing from 1750-1950 was at most 0.6 Watts per square meter.
From 1951-1975 temperatures did not change. From 1976-2001, coincident with the positive phase of the PDO and the naturally-occurring reduction in cloud cover, they rose. When the PDO transited to its negative phase late in 2001, at which moment the cloud cover returned to normal, the warming stopped and has not occurred since.
So the only period since 1880 during which a) the weather was getting warmer and b) we could in theory have had something measurable to do with it was 1976-2001. And during that period the natural forcing from the lack of cloud cover was thrice the anthropogenic forcing.
Based on these considerations, Monckton of Brenchley (2010) concluded that the IPCC’s interval 3.3 [2.0, 4.5] K for climate sensitivity is excessive by at least a factor of 2. And that is before one takes into account McKitrick & Michaels (2007) and the statistically-significant influence of the urban heat-island effect.
As best I can make it out, Professor Lovejoy did not take into account any of these papers in his analysis. The notion that one can determine climate sensitivity to 2 decimal places, without taking into account the substantial error bars in the temperature data themselves, without taking account of the previous periods of more rapid warming and of higher global temperatures, and without taking account of the other major influences to which I have referred, is, to say the least, problematic.
Any paper that pretends to a very precise determination of climate sensitivity without taking explicit account of all these (and many more) necessary considerations amounts to little more than guesswork. Fashionable, yes. Reliable, no.
References
Monckton of Brenchley, C.W., 2010, Global brightening and climate sensitivity, in Proceedings of the 42nd Annual International Seminar on Nuclear War and Planetary Emergencies, World Federation of Scientists [A. Zichichi and R. Ragaini, eds.], World Scientific, London, 167-185, ISBN 978 981 4531 77 1.
McKitrick, R.R., and P.J. Michaels, 2007, Quantifying the influence of anthropogenic surface processes and inhomogeneities on gridded global climate data, J. Geophys. Res., 112, D24S09, doi:10.1029/2007JD008465.
Pinker, R.T., Zhang, B., and Dutton, E.G., 2005, Do satellites detect trends in surface solar radiation? Science 308, 850-854, doi:10.1126/science.1103159.
A few more comments:
Since I’m in the lion’s den, I might as well make a last attempt to be understood.
The peer review system functioned well:
I’d like to correct the record here: from your perspective, the peer review system worked very well – it only failed 25% of the time. Indeed, the paper was rejected by three different journals before being accepted. Ironically, none of the reviewers even mentioned the main conclusion about the statistical testing. Instead, they were fixated on a) claiming that the first part of the paper was unoriginal, and b) in one memorable case (the one quoted in the acknowledgements), insisting that what the paper claimed could not be done without a GCM – I was told to “go get your own GCM”.
In other words, the system did indeed do its best to keep out the riff-raff but no system is perfect….
No taxpayer money was wasted:
Again, you should be pleased: in Canada there is currently no body that funds academic research into the atmosphere or climate – the Conservative government axed the Canadian Centre for Climate and Atmospheric Science back in 2011 – so this work was unfunded.
The accuracy of the data:
The issue of the accuracy of the surface measurements is overblown, for several reasons:
a) The key is the accuracy of the data at century scales, not at monthly or even annual scales. One could imagine that as one goes to longer scales, the estimates get better, since there is more and more averaging (if each annual error were statistically independent, the overall error would decrease as the square root of the averaging period). Actually the figure referred to in the paper (the source of the estimate ±0.03 °C) shows that – interestingly – the accuracy does not significantly improve with time scale! Yet the root-mean-square difference of the four series (one of which uses no station temperature data whatsoever) is still low enough for our purposes: about 0.05 °C. If the corresponding variance is equally apportioned between the series, this leads to about ±0.03 °C each.
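To make that last step concrete – a back-of-the-envelope sketch of the arithmetic as I read it, not code from the paper – an RMS difference of about 0.05 °C between two series whose error variances are assumed equal and independent implies roughly 0.05/√2 ≈ 0.035 °C per series:

```python
# Back-of-the-envelope check of the apportionment described in (a); my own
# sketch, not code from the paper. If the RMS difference between two series is
# D and their error variances are equal and independent, each carries D/sqrt(2).
import math

rms_difference_C = 0.05                               # °C, RMS difference between series
per_series_error_C = rms_difference_C / math.sqrt(2)
print(f"per-series error ≈ ±{per_series_error_C:.3f} °C")   # ≈ ±0.035 °C
```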
b) If you don’t like the global series I picked, choose your own and then use fig. 10 to work out the probability. You will find that the probability rapidly decreases for changes larger than about 0.4 °C, so that to avoid my conclusion you’ll need to argue that the change in temperature over the last 125 years is about half of what all the instrumental data, proxy data and models indicate.
c) Again, I think there is a misunderstanding about the role of time resolution/ time scale. Taking differences at 125 years essentially filters out the low frequencies. In other words, you can have your medieval warming. You can roast the peasants, but that is irrelevant for the result.
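A toy numerical illustration of that differencing point, with assumed numbers rather than the paper’s Haar-fluctuation machinery: a 125-year difference removes a constant baseline entirely and picks up far less of a slow millennial wobble than of a century-scale rise.

```python
# Toy illustration of point (c) with assumed numbers (not the paper's
# Haar-fluctuation analysis): a 125-year difference T(t) - T(t-125) removes a
# constant baseline entirely and picks up little of a slow millennial wobble,
# while passing a century-scale rise almost in full.
import numpy as np

years = np.arange(1500, 2014)
baseline = 14.0                                                  # constant level, °C
slow = 0.3 * np.sin(2 * np.pi * (years - 1000) / 3000.0)         # millennial wobble
recent = np.where(years >= 1900, 0.008 * (years - 1900), 0.0)    # ~0.9 °C by 2013
T = baseline + slow + recent

lag = 125
d_total = T[-1] - T[-1 - lag]              # 2013 minus 1888
d_slow = slow[-1] - slow[-1 - lag]         # contribution of the slow wobble (small)
d_recent = recent[-1] - recent[-1 - lag]   # contribution of the recent rise (large)
print(round(d_total, 3), round(d_slow, 3), round(d_recent, 3))
```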
-Forces of Darkness
“The key is the accuracy of the data at century scales, not at monthly or even annual scales.”
Please be so kind as to explain to a common man at what time intervals the data are collected. How often does a meteorologist or his ADC go out to his weather booth and take a reading of the data required? Once a day? Once a week? Once a month, once a year, or even once per century? I presume that the accuracy of the data is best when as many reliable data as possible are available. Unfortunately, the number of wx-stations has considerably decreased, and, as you may concede, satellites cannot spot every place, at the required times, where a wx-station used to be. And that becomes a question of the grid applied. IMHO it makes absolutely no sense at all to choose grids that are measured in miles by three- or even four-digit numbers and then try to interpolate data from them. As you may know, most of the earth’s surface, be it land or sea, is notorious for its lack of reliable wx-stations.
So I quite agree with you on the required accuracy, but accuracy is, in many parts of the world, wishful thinking.
So you can’t seriously call for accuracy of data without being able to harvest them in either the required quantity or quality.
Predictions and probabilities based on wishful thinking are, so it seems to me, unsound and unprofessional. That is, besides other flaws, one of the reasons why the IPCC-preferred models have all failed.
Shaun,
I am beginning to like you! Having had papers rejected for similar reasons I know the frustration, especially if the reviewers appear not to have understood what the paper is about. Your paper indeed essentially proves that stochastic natural variation alone cannot explain the temperature record. What it doesn’t show is that a combination of AGW and natural climate cycles is the best explanation of the observed temperature record. It is not a binary decision between either natural variation or AGW but is far more likely a combination of both. The pause in warming is probably due to the same downturn as that observed between 1940 and 1970. The good news is that taking natural cycles into account results in lower CO2 climate sensitivity.
cheers
I have to say that I have a real concern with the underlying assumption that global carbon emissions somehow took off in the 1950s in comparison with what went before. What did happen in the 1950s was a growing interest in air pollution that allowed us for the first time to quantify emissions. Prior to that our numbers are at best scant.
Consider for example the Battle of the River Plate. It took four British warships to destroy one German warship. One of the main differences between these ships was that the Graf Spee was an oiler whilst the British ships were steamers running on coal. This (amongst other things) was why the German pocket battleships were such a problem: they had much greater range.
Now if the transition from coal/steam to oil could produce so great a gain in efficiency, why do we not see any of these gains in the so-called emissions estimates?
Consider also that in the 1950s/60s Europe first transitioned from dispersed space heating (mainly coal fires) to localised town-gas generation, and then to natural gas, each step of which would have introduced significant carbon efficiencies; yet once again we see no evidence of this in the data.
Consider also that at the turn of the last century, massive amounts of logging and land clearing were going on in Australia as land was cleared for farming and the forests destroyed. Don’t think this is insignificant – the cleared area in Western Australia alone is almost the same size as the entire United Kingdom, and all of that happened in approximately 60 years. The method used was logging and chaining, in which a chain was dragged through the bush after all mature trees were logged, and the piled-up heaps were then burned.
My point is that I have real doubts that the databases on carbon emissions are accurate before the 1950s, since the quality of the records would not be good enough – except perhaps during the highly inefficient, globally industrialised war that was waged in the late 1930s and early 1940s (and how can that be insignificant?).
I’m not sure what that means in terms of the overall AGW meme, but I do think that we might need to be careful about assuming that human carbon emissions were only significant after the 1950s.
@pokerguy
http://www.oxforddictionaries.com
It never ceases to amaze me that the claimed precision is so far beyond the precision of the source records. Measuring temperatures to the precision often claimed is not trivial, and is far beyond the precision limits of the worst data in their sources.
At 9:22 PM on 13 April, Larry Ledwick had commented:
For the past thirty years, I’ve been wondering very much along these lines about the multiple-decimal-places claims of the “climate catastrophe” cadre, seldom or never with a “plus-or-minus” figure anywhere in their presentations. I couldn’t help recalling my undergraduate course in Instrumental Analysis back in the 1960s, particularly in light of all the emphasis on statistical significance since “evidence-based medicine” came to be emphasized in the 1990s.
To the best of my decidedly non-mathematical appreciation of “wriggle room” – the borderlands of uncertainty in the measurement of physical phenomena – the intrinsic defects of various investigatory methods tend effectively to compound as you build from one kind of assessment to another and so forth, until the precision and accuracy of your results are a whole helluva lot less than the sort of “pickle barrel” specificity claimed (to take one spectacular example) for the oh-so-secret Norden bombsight during World War Two.
That was when I caught the whiff of a con game.
Steve Mosher says “there are 10,000 temperature stations in N America alone”.
But the graph at his link seems to indicate that although there were about 10,000 recently, it’s now down to about 600? If so, what is the reason for the big change?
Do temperature stations always measure the same thing? For example, do they all record max and min over a 24-hour period, or do they take samples at set points in time?
Professor Lovejoy’s latest response is remarkable in a number of respects. First, it leaves almost all the points in the head postings and in my detailed response to his earlier comment unanswered.
Let me begin, as before, by finding something to agree with. The fact that none of the reviewers who (rightly) rejected the paper did so on the most obvious ground – that the statistical analysis is, to put it mildly, defective and the result unduly ambitious in the precision claimed – merely confirms how unsatisfactory the pal-review system has become.
I also agree with Professor Lovejoy that the insistence of one reviewer that he “go get his own general-circulation model” reflects badly on the pal-review system. GCMs have visibly failed to predict global temperature accurately, and the inference is that they have overegged climate sensitivity just as they have overegged warming.
And I am delighted that the Canadian Prime Minister accepted advice from me and others that no more money should be spent on nests of totalitarians masquerading as “scientists” and making fortunes at the taxpayer’s expense by whipping up a baseless scare among innumerate, woolly-headed politicians and journalists.
Also, for the sake of being maximally agreeable, I agree that taking periods that are approximate multiples of the 60-year cycles of the Pacific Decadal Oscillation will be less prone to naturally-caused distortion than other periods. 125 years is close enough (though, in the global instrumental record, the mean PDO cycle length is around 58.4 years, so 117 years would have been better than 125).
Professor Lovejoy suggests that I do not understand what he did. I understand it all too well. I also understand what he did not do. It is simply not pardonable to hand-wave about how errors may perhaps get smaller the longer the period of record. For with global temperature records the error bars get considerably larger the further back one goes.
Let us put some numbers on it.
On HadCRUT4 for February 2014, the central estimate of global temperature followed by the uncertainty interval is +0.30 [+0.14, +0.46] K, a range of 0.32 K. However, for January 1850, the values are –0.69 [–1.09, –0.29] K, a range of 0.8 K. And we’re only talking about a warming thought to be 0.9 K, of which some appreciable fraction – perhaps as much as a third – probably did not occur in reality at all.
And the Professor, with a bit of statistical jiggery-pokery, can magic these substantial uncertainties away? I don’t believe it. I don’t think he had any idea how large the error-bars on the monthly temperature anomalies are. Most people – including climate scientists – don’t. Do you see any error-bars on the global temperature record shown in the IPCC’s summaries for policy-makers? Of course not: they would give the game away at once. The measurement error is a substantial fraction of the total claimed warming. They have to hide the error-bars so they can keep on telling us “the science is settled” when even the quantum of warming isn’t.
Yet in real physics, rather than in the fantasy world of the modelers, every result is dependent upon measurement, and every measurement is subject to a measurement error, so every result is – at least to the degree of the measurement error – uncertain.
One is reminded of Richard Feynman’s celebrated remark that if one finds oneself having to try to demonstrate a result by statistics rather than by math and physics one should stop and rethink.
Furthermore, the annual errors are not statistically independent: that was the whole point of my demonstrating just a couple of examples of the data tampering that has been going on. There are many more such examples globally. One cannot, therefore, pray in aid the alleged “independence” of the monthly temperature readings from one another in the hope that the large errors will sufficiently diminish over time.
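To put rough numbers on that point – an illustrative simulation with assumed error sizes, not any dataset’s actual error model – independent monthly errors would indeed shrink the error of a long-term mean roughly as 1/√N, but a shared bias does not average away at all:

```python
# Illustrative simulation with assumed error sizes (not any dataset's actual
# error model): independent monthly errors shrink the error of a long-term mean
# roughly as 1/sqrt(N); a shared (correlated) bias does not average away.
import numpy as np

rng = np.random.default_rng(1)
n_months, n_trials = 1500, 2000                       # ~125 years of monthly data

independent = rng.normal(0.0, 0.15, (n_trials, n_months))
shared_bias = rng.normal(0.0, 0.10, (n_trials, 1))    # one bias per realisation
correlated = independent + shared_bias

print("independent errors: spread of the mean =", independent.mean(axis=1).std())
print("with shared bias:   spread of the mean =", correlated.mean(axis=1).std())
# First line ~0.15/sqrt(1500) ≈ 0.004 K; second stays near the 0.10 K bias.
```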
By the time one takes into account deliberate biases from rent-seeking scientists wanting to sex up the temperature record to keep the panic dollars flowing, and inadvertent biases from the urban heat-island effect, and failures to take into account natural forcings such as the cloud-cover forcing that was thrice the supposed anthropogenic forcing from 1983-2001, I say again that it is simply not possible to declare a climate sensitivity to two decimal places, as Professor Lovejoy has done.
When I was taught math (I’m still learning), one of the first things I was taught was not to express a result to a precision greater than that of the least precise of the initial conditions. Professor Lovejoy has offered insufficient justification for breaking that rule.
How, then, will climate sensitivity be determined? Even if there were a definitive method of determining it to a precision of, say, 0.5 K, I am not sure that the profiteers of doom in the climate-science industry would ever accept it. My own approach to determining it is to go back to the central physical equations. The errors in those equations are numerous, and – like the temperature tamperings – they all point in the direction of exaggeration.
So we are going to have to continue the CO2 experiment and just wait and see. In November 2006, when I first put my head above the parapet and asked questions in public about the official viewpoint, I said that some warming was to be expected, but on balance probably not very much. There were shrieks of rage from the totalitarian academic Left when I wrote that, as the Climategate emails show. Well, from November 2006 to March 2014, taking the mean of the two satellite datasets, the least-squares linear-regression trend has been just 0.06 K, equivalent to 0.8 K/century, exactly the same as in the 20th century. In other words, not a lot.
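For anyone who wants to reproduce that kind of figure from published monthly anomalies, the calculation is no more than an ordinary least-squares fit, sketched below with placeholder numbers rather than the actual satellite series:

```python
# Generic sketch for turning monthly anomalies into a least-squares trend in
# K/century (placeholder numbers below, not the actual satellite series).
import numpy as np

def trend_K_per_century(monthly_anomalies_K):
    months = np.arange(len(monthly_anomalies_K))
    slope_per_month = np.polyfit(months, monthly_anomalies_K, 1)[0]
    return slope_per_month * 12 * 100

# Mean of two placeholder series over November 2006 - March 2014 (89 months).
rng = np.random.default_rng(2)
series_a = 0.0007 * np.arange(89) + rng.normal(0.0, 0.1, 89)
series_b = 0.0007 * np.arange(89) + rng.normal(0.0, 0.1, 89)
print(round(trend_K_per_century((series_a + series_b) / 2), 2), "K/century")
```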
Sorry, sir, but your paper was bad science, bad math, and bad statistics.
Reading Professor Lovejoy’s paper, I couldn’t believe that it wasn’t written with tongue firmly planted in cheek!
It is the hockey stick in disguise, complete with finely crafted handle, created with input by the master himself and finished off with “Mike’s Nature trick”.
Which explains his thinly veiled snark about roasting peasants (#comment-1612991):
Put simply, it is natural variability flat-lined with a grafted anthropogenic / “Data” uptick!
Have a closer look, it is hokey statistical!
I hereby christen this paper the “Hokey Stat” 🙂
The first point that needs to be made is that Monckton’s “corrupted paper” – which he insists he had, and which he offers as the reason he got the maths so wilfully wrong – was clearly pointed out to him. This should be a clear indicator of the type of person we are dealing with here. Then there is his claim that “the Climate Reference Network [showed] a rate of warming noticeably slower than the shambolic legacy record” – this, in Monckton’s own language, is verbal diarrhea. Monckton goes on, and on, and on about the amounts of warming and cooling over the last 125 years and how they can’t be specifically attributed to anthropogenic causes, but alas! Lovejoy shows where Monckton was wrong, and of course Monckton concedes, stating “I agree that taking periods that are approximate multiples of the 60-year cycles of the Pacific Decadal Oscillation will be less prone to naturally-caused distortion than other periods. 125 years is close enough”. But at the end Monckton thinks all the data sets are corrupted, as he states: “keepers of the temperature datasets seem [to agree] to make the same errors.” All these conniving traits are key to his inability to publish anything in a peer-reviewed journal.
[snip. Enough with your ad hominem junk science. ~ mod.]
At last, that rarity these days, a sneering troll who, in the end, resorts to referring readers to a Communist blog funded by a convicted internet-gaming fraudster. Meanwhile, the world fails to warm as ordered, though the ridiculous Whittless, who as far as I know has published no more in a peer-reviewed journal than Al Gore, demonstrates his unfamiliarity with the reviewed literature by suggesting that I have never published anything there. More homework needed, one feels.
I hope you can see with your own eyes the irony of you calling someone a troll. At the end of the day I don’t mind who runs a particular website, as long as the facts about you are true, which in this case they are, so it will suffice. I only wonder why anyone would let you post the profanity you sequester. But you only have to look at the Heartland Institute funding for WUWT to see why they allow you. By all means amuse us all with a link to your published peer-reviewed journal article, because you can’t just keep pretending something exists in the hope that someone out there will believe you, Monckton.
Michael,
You are contributing about as much to this thread as a drunk who comes to dinner and craps himself. You haven’t contributed anything, refuted anything, you haven’t even delivered an impressive insult. Pathetic and poorly organized ad hominem has been the best you’ve been able to deliver. I expect your comments have largely been ignored until now because nobody cared to get cockroach guts on their shoe by stepping on you.
Since you apparently lack the self awareness to realize it, let me do you a favor and point out that you are accomplishing nothing here except demonstrating your intellectual bankruptcy and petty spite. It’s embarrassing to have to point that out to another human being and nobody enjoys it, so why don’t you scurry off and spare the rest of the readers the disgust of dealing with you.
Sincerely.
What is there to add? The paper speaks for itself. It clearly comes to 99% confidence that the warming over the last 100 years is anthropogenic. The paper explains that, given the error bars from each data set are averaged out, and with the use of four different data sets coinciding within 0.03 °C of each other, 0.03 °C became the new error bars. The paper shows that over 125-year intervals from the year 1500, volcanic eruptions and solar forcing are constant and there was no climate shift at all. Not until the beginning of the 1900s, when CO2 forcing is added to the climate system, do we see a climate shift. Nothing else changes: solar stays the same, volcanic eruptions stay the same. To add more confidence to the results, the paper even considers aerosols and CO2 temperature lag. The paper is spot on, and the only remark being made by Monckton is his distaste for such a high confidence level, and that remark is based on conspiracy theories.
Michael Whittemore,
Read this:
http://wattsupwiththat.com/2014/04/11/claim-odds-that-global-warming-is-due-to-natural-factors-slim-to-none
…because you’re fixated on your True Belief that man-made global warming exists. It doesn’t, at least not to any measurable, observable degree.
db
The link you gave explains the Lovejoy paper in good detail; it does not contradict it. This post of Monckton’s is an attempted rebuttal of the paper. That is why I am here.
I noticed you linking your Greenland temperature record before, which can be seen here: http://i.snag.gy/BztF1.jpg Last time we spoke I told you that it was a fake! Yet you still use it, so again I will show you the facts: http://hot-topic.co.nz/wp-content/uploads/2011/01/GISP210klarge.png
Michael,
In which case, why are you posting? You freely admit you’ve got nothing to contribute.
Could you possibly demonstrate any more clearly that you have no concern about questionable scientific methodology and that you are suffering from confirmation bias? Because I’m at a loss to imagine how. Doing away with the error bars because the data are close? Do you think this is acceptable in principle, or will you admit that it is acceptable to you only because you like the result?
So you move on to demonstrating not only that you do not understand the paper, but that you are ignorant of the Medieval Warm Period and the Little Ice Age. Dr. Lovejoy makes an argument that the MWP and LIA are irrelevant, but his argument is not that they did not occur.
Presumably because you like the result.
Cockroach overachiever award! Not being content with establishing that you’ve got nothing to contribute, that you couldn’t possibly care less about questionable scientific methodology, that you don’t understand the paper and appear to be ignorant of the mainstream accepted historical temperature record, you proceed to show that you have poor reading comprehension skills. Additionally, it speaks none too highly of your critical thinking abilities if it does not trouble you at all that such claims are derived from noisy and unreliable proxies. If this is all you could gather from what Lord Monckton wrote, perhaps you should close your mouth, put on your big boy pants, and try reading again. Slowly.
The cherry on top of the turd sundae. The accusation of conspiracy theory.
Having advanced your own baseless conspiracy theory, you proceed to accuse others of indulging in conspiracist ideation. Well, no thanks. I deal with cleaning up sewer spills like that often enough when the topic of Dr. Lew comes up; I’m not taking a dive back into that sceptic tank to follow an amateur cockroach.
Please don’t hesitate to respond if I can be of any further assistance.
How many times do you want it repeated, Mark? Three data sets plus one completely independent one: adding them all up and focusing on 125-year intervals reduces the error bars, and the same goes for the paleoclimate records. What number do you want the error bars to be? How many different independent data sets do you want? If one person sees you walking down the street, then another sees you, and then another, and then a bus full of people drives by and they all see you, there is a high probability that you are walking down the street. The probability is not the same for each person, as Monckton suggests, but for all of them combined.
Your MWP and LIA comment suggests that you did not read the paper, or that you did not understand it. The paper found that solar and volcanic forcing are constant; this means that even if solar forcing was slowly increasing, the warming was constant. When looking over 125-year intervals there were no significant climate shifts: the climate is steady up until 1900.
Monckton makes himself very clear: the data sets have big error bars! But he thinks that all the data sets, even if we had hundreds of them from multiple sources, would not matter, because they would all be corrupted. As he stated: “keepers of the temperature datasets seem [to agree] to make the same errors.”
Michael,
Arguments of substance? Good. This is an improvement.
The paper did not find that solar and volcanic forcings are constant. The paper did not say that warming was constant. The paper argued that solar and volcanic forcings are stationary. You need to understand the difference between the terms constant and stationary for this to make sense. A stationary time series is not necessarily constant; it means its statistical properties don’t vary over time.
I have read the paper. I don’t understand Haar fluctuations or fluctuation analysis, as I’ve previously stated in this thread. I know a little bit about error propagation in general though, and this:
sure isn’t the way I learned to handle uncertainties. In my experience, given X and Y with uncertainties, the uncertainty of Z(X,Y) is obtained by adding the uncertainties of X and Y in quadrature. It is possible that I am overlooking some valid statistical technique which can be used to reduce uncertainty in this case. Regardless, the statement I quoted from the paper is enough to justify scrutiny of its handling of uncertainty in my eyes.
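For concreteness, the textbook rule I mean is the following – standard error propagation, not the paper’s own procedure:

```python
# The textbook rule referred to above (standard error propagation, not the
# paper's own procedure): for Z = X + Y with independent uncertainties,
# sigma_Z = sqrt(sigma_X**2 + sigma_Y**2) -- uncertainties combine, they do
# not cancel.
import math

def combined_uncertainty(sigma_x, sigma_y):
    return math.hypot(sigma_x, sigma_y)

print(combined_uncertainty(0.03, 0.03))   # two ±0.03 K terms give ±0.042 K, not ±0.03 K
```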
With nothing but genuine respect for Dr. Lovejoy, I do not say that Dr. Lovejoy is wrong. I say that I am not convinced that Dr. Lovejoy is right.
Finally,
Why don’t you let Lord Monckton speak for himself? Nobody needs you to tell us what he thinks when he’s commenting on the thread personally.
Michael
I wrote an article on noticeable climate change in which I concluded that the 50-year-or-longer centred paleo proxy record failed to capture the real-world temperatures where we all actually live:
http://wattsupwiththat.com/2013/08/16/historic-variations-in-temperature-number-four-the-hockey-stick/
You will also note there a carefully researched glacier record. So glaciers advanced and receded even though the climate was static?
The highly variable climate, with its severe downturn to the LIA, the astonishing recovery between 1695 and 1740, and then the downturn again – are these all illusory?
The records show a notable upturn in the first half of the 16th century and a generally settled climate, as warm as or warmer than today, from around 850 AD to around 1200 AD, with another notably warm period around 1350.
Tonyb
Michael Whittemore says:
“I noticed before you linking your Greenland temperature record which can be seen here http://i.snag.gy/BztF1.jpg Last time we spoke I told you that it was a fake!”
Take up your truly ignorant complaint with Prof. R.B. Alley. He provided the data, as you could see right in the chart if you had only looked.
You also don’t seem to understand that there is not just one ice core data source, and that each one shows some variation from the others. But in general, they all agree. And the fact is that the chart you linked to has no identifiable provenance. Apparently it was fabricated by one of the alarmist blogs you frequent for your misinformation, and now you pass it off as “the facts”. But as we know here, baseless assertions like that are dismissed as baseless opinion.
Your religious belief in climate catastrophe does not permit you to open your mind to any other point of view, or to any facts that contradict your True Belief. It is amazing that, despite the debunking of every alarmist climate prediction, anyone can still take them seriously. Cognitive dissonance is the explanation: accepting only those limited things that allow you to remain a True Believer, and rejecting everything else.
One more time: EVERY alarmist prediction has failed. No exceptions. There has been no runaway global warming, despite [completely harmless, beneficial] CO2 rising steadily. All alarmist predictions were that “carbon” would cause climate catastrophe [otherwise, who would really care?]
There is nothing anyone can say that would open Whittemore’s mind. Fortunately, he is one of only a small clique that still believes in their CAGW nonsense [WUWT has far more readers than all the alarmist blogs combined]. The rest of us can look out the window and draw reasonable conclusions. Only the Whittemore types try to tell us the real world isn’t the way we observe it. Luckily, there aren’t very many in his clique, and as the world fails to warm, there are fewer every day.
For the sake of completeness, I still question the standard deviation of 0.2 °C derived from the multiproxies. The text in figure 7 says:
…and the dashed lines show one standard deviation error bars estimated from the three 125 year epochs indicated in fig. 5 indicating the epoch to epoch variability.
Figure 5 shows the means of the three multiproxies, with associated uncertainties of ±0.084 °C, ±0.083 °C, and ±0.093 °C. Really? Maybe so; what do I know. That seems astonishing to me, that the proxy error bars are that tight. If I understand properly, the 0.2 °C standard deviation is key for making the statistical decision about natural vs. anthropogenic.
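As a purely illustrative aside – nothing here is taken from the paper beyond the fact that the standard deviation comes from three epochs – a standard deviation estimated from only three values is itself loosely constrained, since (n−1)s²/σ² follows a chi-squared distribution with two degrees of freedom:

```python
# Purely illustrative (nothing here comes from the paper beyond "three epochs"):
# a standard deviation estimated from only n = 3 values is loosely constrained,
# because (n-1)*s**2/sigma**2 follows chi-squared with 2 degrees of freedom,
# whose quantiles are -2*ln(1-p).
import math

s, n = 0.2, 3                                # quoted sample std (°C), number of epochs
chi2_lo = -2.0 * math.log(1 - 0.025)         # 2.5% quantile of chi2(2)  ≈ 0.0506
chi2_hi = -2.0 * math.log(1 - 0.975)         # 97.5% quantile of chi2(2) ≈ 7.38

sigma_hi = s * math.sqrt((n - 1) / chi2_lo)  # upper 95% bound on the true sigma
sigma_lo = s * math.sqrt((n - 1) / chi2_hi)  # lower 95% bound on the true sigma
print(f"95% interval for the true sigma: roughly [{sigma_lo:.2f}, {sigma_hi:.2f}] °C")
```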
DB stealey
I note that your graphic and the one from Michael use different temperature axes on the left-hand side. Surely if you add Michael’s 1.44 °C to the 1855 Easterbrook figure we end up with a value somewhat lower than the MWP. Is that correct, or am I misreading it?
To make a proper comparison the graphics ought to be on a like for like basis.
Mind you, I am always bemused as to why the ice cores are thought to be such a reliable global proxy, as they certainly don’t reflect current circumstances – the Arctic being much warmer, relatively, than elsewhere, with some places actually cooling sharply over the last decade.
tonyb
tonyb,
I have no idea where M. Whittemore’s graph came from. I agree that they are different. I was pointing out that his assertion that his graph shows “the facts”, and his claim that the one I posted by R.B. Alley was a “fake”, should be reversed: his graph has no provenance, while the one I posted has “by R.B. Alley” in the graph.
Whittemore can post whatever he wants here. But I am careful to link to credible charts, and do not appreciate them being called fakes. That was his opinion, and like most of his assertions, it is wrong.