Lovejoy's 99% 'confidence' vs. measurement uncertainty

By Christopher Monckton of Brenchley

It is time to be angry at the gruesome failure of peer review that allows publication of papers, such as the recent effusion of Professor Lovejoy of McGill University, which, in the gushing, widely-circulated press release that seems to accompany every mephitically ectoplasmic emanation from the Forces of Darkness these days, billed it thus:

“Statistical analysis rules out natural-warming hypothesis with more than 99 percent certainty.”

One thing anyone who studies any kind of physics knows is that claiming results to three standard deviations, or better than 99% confidence, requires – at minimum – that the data underlying the claim are exceptionally precise and trustworthy and, in particular, that the measurement error is minuscule.

Here is the Lovejoy paper’s proposition:

“Let us … make the hypothesis that anthropogenic forcings are indeed dominant (skeptics may be assured that this hypothesis will be tested and indeed quantified in the following analysis). If this is true, then it is plausible that they do not significantly affect the type or amplitude of the natural variability, so that a simple model may suffice:

ΔT_globe(t) = ΔT_anth(t) + ΔT_nat(t) + Δε(t)     (1)

ΔT_globe(t) is the measured mean global temperature anomaly, ΔT_anth(t) is the deterministic anthropogenic contribution, ΔT_nat(t) is the (stochastic) natural variability (including the responses to the natural forcings), and Δε(t) is the measurement error. The last can be estimated from the differences between the various observed global series and their means; it is nearly independent of time scale [Lovejoy et al., 2013a] and sufficiently small (≈ ±0.03 K) that we ignore it.”

Just how likely is it that we can measure global mean surface temperature over time either as an absolute value or as an anomaly to a precision of less than 1/30 Cº? It cannot be done. Yet it was essential to Lovejoy’s fiction that he should pretend it could be done, for otherwise his laughable attempt to claim 99% certainty for yet another me-too, can-I-have-another-grant-please result using speculative modeling would have visibly failed at the first fence.
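To see what the paper's spread-based error estimate actually measures, here is a minimal sketch (purely illustrative synthetic numbers, not Lovejoy's data) of estimating "measurement error" from the scatter of several global series about their common mean. Any error the series share – siting, coverage, adjustments – is invisible to such an estimate.

```python
# Minimal sketch (illustrative numbers only) of the spread-based estimate the
# quotation above describes: take several global series, compute their common
# mean, and call the rms scatter about that mean the "measurement error".
import numpy as np

rng = np.random.default_rng(0)
months = 12 * 100                                  # a century of monthly anomalies
truth = np.cumsum(rng.normal(0, 0.02, months))     # a wandering "true" anomaly

shared_bias = rng.normal(0, 0.10, months)          # error common to all series
series = np.array([truth + shared_bias + rng.normal(0, 0.03, months)
                   for _ in range(4)])             # four hypothetical global series

spread = series - series.mean(axis=0)              # deviations from the common mean
rms_spread = np.sqrt((spread ** 2).mean())
print(f"spread-based 'measurement error': ±{rms_spread:.3f} K")   # small - of order 0.03 K

# The 0.10 K error shared by all four series never shows up in the spread,
# which is why agreement between series is not the same thing as accuracy.
```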

Some of the tamperings that have depressed temperature anomalies in the 1920s and 1930s to make warming over the past century seem worse than it really was are a great deal larger than a thirtieth of a Celsius degree.

Fig. 1 shows a notorious instance from New Zealand, courtesy of Bryan Leyland:


Figure 1. Annual New Zealand national mean surface temperature anomalies, 1990-2008, from NIWA, showing a warming rate of 0.3 Cº/century before “adjustment” and 1 Cº/century afterward. This “adjustment” is 23 times the Lovejoy measurement error.

 


Figure 2: Tampering with the U.S. temperature record. The GISS record in its 2008 version (right panel) shows the 1934 anomaly 0.1 Cº lower and the 1998 anomaly 0.3 Cº higher than in the original 1999 version of the same record (left panel). This tampering, calculated to increase the apparent warming trend over the 20th century, is more than 13 times the tiny measurement error mentioned by Lovejoy. The startling changes to the dataset between the 1999 and 2008 versions, first noticed by Steven Goddard, are clearly seen if the two slides are repeatedly shown one after the other as a blink comparator.

Fig. 2 shows the effect of tampering with the temperature record at both ends of the 20th century to sex up the warming rate. The practice is surprisingly widespread. There are similar examples from many records in several countries.

What is quantified, however – because Professor Jones’ HadCRUT4 temperature series explicitly states it – is the magnitude of the combined measurement, coverage, and bias uncertainties in the data.

Measurement uncertainty arises because measurements are taken in different places under various conditions by different methods. Anthony Watts’ exposure of the poor siting of hundreds of U.S. temperature stations showed up how severe the problem is, with thermometers on airport taxiways, in car parks, by air-conditioning vents, close to sewage works, and so on.

His campaign was so successful that the US climate community were shamed into shutting down or repositioning several poorly-sited temperature monitoring stations. Nevertheless, a network of several hundred ideally-sited stations with standardized equipment and reporting procedures, the Climate Reference Network, tends to show less warming than the older US Historical Climate Network.

That record showed – not greatly to skeptics’ surprise – a rate of warming noticeably slower than the shambolic legacy record. The new record was quietly shunted into a siding, seldom to be heard of again. It pointed to an inconvenient truth: some unknown but significant fraction of 20th-century global warming arose from old-fashioned measurement error.

Coverage uncertainty arises because temperature stations are not evenly distributed in space, nor consistent in their reporting over time. There has been a startling decline in the number of temperature stations reporting to the global network: there were 6000 a couple of decades ago, but now there are closer to 1500.

Bias uncertainty arises from the fact that, as the improved network demonstrated all too painfully, the old network tends to be closer to human habitation than is ideal.


Figure 3. The monthly HadCRUT4 global temperature anomalies (dark blue) and least-squares trend (thick bright blue line), with the combined measurement, coverage, and bias uncertainties shown. Positive anomalies are green; negative are red.

Fig. 3 shows the HadCRUT4 anomalies since 1880, with the combined measurement, coverage, and bias uncertainties also shown. At present, the combined uncertainties are ±0.15 Cº, or almost a sixth of a Celsius degree up or down, over an interval of 0.3 Cº in total. This value, too, is an order of magnitude greater than the unrealistically tiny measurement error allowed for in Lovejoy’s equation (1).

The effect of the uncertainties is that for 18 years 2 months the HadCRUT4 global-temperature trend falls entirely within the zone of uncertainty (Fig. 4). Accordingly, we cannot tell even with 95% confidence whether any global warming at all has occurred since January 1996.
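One simplified way of operationalising that comparison (a sketch only, assuming the question is whether the fitted trend's total change exceeds the width of the published ±0.15 Cº combined uncertainty; the numbers below are illustrative, not the HadCRUT4 data) is:

```python
# Sketch of the "trend inside the uncertainty zone" comparison described above:
# fit a least-squares trend to 218 monthly anomalies and ask whether its total
# change over the period exceeds the width of the combined uncertainty band.
import numpy as np

COMBINED_UNCERTAINTY = 0.15                 # roughly ±0.15 K, per the HadCRUT4 files
months = 218                                # January 1996 to February 2014
rng = np.random.default_rng(1)
anoms = 0.35 + 0.0003 * np.arange(months) + rng.normal(0, 0.10, months)  # toy data

slope, intercept = np.polyfit(np.arange(months), anoms, 1)
total_change = slope * (months - 1)

print(f"trend: {slope * 120:+.3f} K/decade, total change: {total_change:+.3f} K")
if abs(total_change) < 2 * COMBINED_UNCERTAINTY:
    print("the fitted trend never leaves a band 0.3 K wide: "
          "indistinguishable from zero warming at that level of uncertainty")
```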


Figure 4. The HadCRUT4 monthly global mean surface temperature anomalies and trend, January 1996 to February 2014, with the zone of uncertainty (pale blue). Because the trend-line falls entirely within the zone of uncertainty, we cannot be even 95% confident that any global warming occurred over the entire 218-month period.

Now, if you and I know all this, do you suppose the peer reviewers did not know it? The measurement error was crucial to the thesis of the Lovejoy paper, yet the reviewers allowed him to get away with saying it was only 0.03 Cº when the oldest of the global datasets, and the one favored by the IPCC, actually publishes, every month, combined uncertainties that are ten times larger.

Let us be blunt. Not least because of those uncertainties, compounded by data tampering all over the world, it is impossible to determine climate sensitivity either to the claimed precision of 0.01 Cº or to 99% confidence from the temperature data.

For this reason alone, the headline conclusion in the fawning press release about the “99% certainty” that climate sensitivity is similar to the IPCC’s estimate is baseless. The order-of-magnitude error about the measurement uncertainties is enough on its own to doom the paper. There is a lot else wrong with it, but that is another story.


268 Comments
Monckton of Brenchley
April 16, 2014 6:03 pm

Mr Whittemore, who displays a poor understanding of elementary statistics, tries to assert that averaging several datasets each with large error bars produces a new dataset with error bars that are less large. There is no legitimate way to take error bars of 1º C and more – the errors that obtain from 1500-1990 – and end up with an error bar of only 0.03º C. I suspect that Professor Lovejoy did not know how wide the error bars were. Even in the 20th century, the uncertainty interval (to 2 standard deviations) is 0.8º C at the beginning of the century, and 0.3º C at the end. The HadCRUT4 dataset, which is publicly available, publishes not only the central estimate of the monthly global mean surface temperature anomaly but also the error interval, together with the individual contributions from the measurement, coverage, and bias uncertainties.
No amount of statistical prestidigitation can make so large an error interval disappear. Professor Lovejoy accordingly produced a result based on a misunderstanding of the reason for and the extent of the errors in measuring global temperature, combined with a further misunderstanding of the additional uncertainties introduced by the poor representation of aerosols and of variations in global cloud cover. Taking all of these uncertainties together, no definitive conclusion can be reached by any method about climate sensitivity. In my own published papers on the subject, I say no more than that the warming in response to a doubling of CO2 concentration will probably be less than 1º C, which was heresy at the time when I first published it but is now increasingly widely supported in the reviewed literature.
For the biggest uncertainty of all is in the determination of the temperature feedbacks that, in the models, multiply the small direct warming from CO2 by 3. There are powerful mathematical reasons why any such large multiple is implausible, if not impossible. The direct warming from CO2, all other things being equal, is little more than 1º C. And we do not even know whether all other things are equal: for the climate exhibits formidable temperature homeostasis, and it is possible that warming will be even less than the 1º C that theory suggests. For these reasons, even the IPCC has abandoned its attempts to identify a best estimate of climate sensitivity.
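For readers who want the "multiply by 3" arithmetic spelled out, the standard zero-dimensional feedback relation gives it directly (a sketch with round numbers, not a figure from the Lovejoy paper):

$$\Delta T = \frac{\Delta T_0}{1 - f}, \qquad \Delta T_0 \approx 1.1\ \mathrm{K},\quad \Delta T \approx 3.3\ \mathrm{K} \;\Rightarrow\; f \approx 0.67$$

A threefold amplification of a direct warming of roughly 1.1 K thus requires a net feedback factor of about two-thirds; whether so large a factor is physically plausible is precisely the point at issue here.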

April 16, 2014 8:10 pm

Lord Monckton provides an excellent example of why on-line peer review is far preferable to the usual old timey peer review process. Thousands of reviewers are more knowledgeable than 3, even if 900 out of every thousand may be less educated. Crowd sourcing works, and it works well — it is much better than pal review when it comes to honest science.
Dr. Lovejoy’s paper has been ripped to shreds here and elsewhere. Of course he doesn’t see it, because he views it as his ox being gored, instead of the way he should view it: as an advance in scientific knowledge. Only those hypotheses that remain standing after skeptical review can be considered valid science. Lovejoy’s paper fails, but he should not be criticized for trying. Falsifying his claims is worth something. The basic problem for people like Dr. Lovejoy is that the Real World does not agree with, or support his conclusions. And Real World observations always trump assertions — cf: Feynman
Lovejoy also appears to have no understanding of the climate Null Hypothesis. Nothing observed now is either unusual or unprecedented. It has all happened before, and to a much greater degree – prior to human industrial emissions. Thus, there is no “human fingerprint” discernible in global warming. There never was.

Michael Whittemore
April 16, 2014 9:45 pm

Mark Bofill says: April 16, 2014 at 10:37 am
I agree with what you said, “A stationary time series doesn’t mean it’s constant, it means its statistical properties don’t vary over time.”
Tonyb says: April 16, 2014 at 11:02 am
Now I know where Monckton cut and pastes from! As you state “CET [Central England Temperature Record] is said by a number of scientists to be a reasonable proxy for global”.
When you use all the proxy data sets, the warming during the last century becomes more in line with Lovejoy’s paper http://wattsupwiththat.files.wordpress.com/2013/08/clip_image0041.jpg. A question, though: why does the period 1820-1920 take the form of a century when all the others are in 50-year steps? And further to the Lovejoy paper, he looks at 125-year intervals, not 50 years as you have.
dbstealey says: April 16, 2014 at 11:03 am
The graph is fine db! It’s the fact it ends at 1855!!! That is not the Hockey Stick at the end.
Monckton of Brenchley says: April 16, 2014 at 6:03 pm
A person with the worst reputation in the climate change arena! Or a distinguished scientist… And I quote Mr. Lovejoy (Shaun Lovejoy says: April 14, 2014 at 6:10 am): “The key is the accuracy of the data at century scales, not at monthly or even annual scales. One could imagine that as one goes to longer scales, the estimates get better since there is more and more averaging (if each annual error was statistically independent, the overall error would decrease as the square root of the averaging period). Actually the figure referred to in the paper (the source of the estimate ±0.03 °C), shows that – interestingly – the accuracy does not significantly improve with time scale! Yet the root mean square differences of the four series (one of which uses no station temperature data whatsoever), is still low enough for our purposes: about 0.05 °C. If the corresponding variance is equally apportioned between the series, this leads to about ±0.03 °C each.”
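Read literally (and assuming independent errors of equal size in the series being differenced), the apportioning described in that quotation is just:

$$\sigma_{\mathrm{diff}}^2 = \sigma_1^2 + \sigma_2^2 = 2\sigma^2 \;\Rightarrow\; \sigma \approx \frac{0.05\ \mathrm{K}}{\sqrt{2}} \approx 0.035\ \mathrm{K} \approx \pm 0.03\ \mathrm{K}$$

which is where the ±0.03 °C figure comes from.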

tonyb
Editor
April 16, 2014 11:38 pm

Michael
The 1820 to 1920 period is the only one where two consecutive 50-year records averaged out the same, which illustrates the difficulties in using long-term paleo proxies: as you can see, the temperature oscillated considerably, and the average of lots of extremes is not really an equitable climate.
We are far better off looking at the annual and decadal record, which shows the often astonishing variability of our climate, rather than the longer term, which averages it out to something it never was.
tonyb

climatereason
Editor
April 16, 2014 11:50 pm

db Stealey
I’m still bemused as to why Arctic ice cores should be seen as a good proxy for global temperatures when our experience shows the Arctic often does not reflect what is happening elsewhere.
Also, as far as I can see, Michael’s 1855 figure plus 1.44ºC still puts us below the MWP, doesn’t it?
tonyb

Michael Whittemore
April 17, 2014 12:23 am

climatereason says: April 16, 2014 at 11:50 pm
“The bottom black line shows his 1855 “present”, and it intersects the red line in the same places as his chart. I’ve added a grey line based on the +1.44ºC quantum calculated from the GRIP temperature data, and two blue crosses, which show the GISP2 site temperatures inferred from adjusted GRIP data for 1855 and 2009.” (http://www.skepticalscience.com/10000-years-warmer.htm)

Mark Bofill
April 17, 2014 6:24 am

Michael,

A person with the worst reputation in the climate change arena! Or a distinguished scientist… And I quote Mr. Lovejoy

And we’re back to the slime carnival of logical fallacies I see. I hoped for better, for a moment there you were doing well.
Listen to me for just a minute, please. I’ll try to be as brief and clear as I can.
Do you like the fact that when you turn the key in your car, you can be confident that the machine won’t suddenly explode and kill you? That you can sleep on a long flight with high confidence that you’ll wake up alive? That you can flip light switches, drive over bridges, and so on, and take it for granted that nothing catastrophic is at all likely to happen?
That confidence doesn’t come for free. Guys like me (engineers) use mathematical and scientific principles, developed by guys like Dr. Lovejoy, to accomplish these things. But we care about these things because they work, not because of anybody’s credentials. I listen carefully when guys like Dr. Lovejoy speak, not because their credentials mean they are right, but because their credentials flag them as someone I’d expect to have a legitimate point. But they have to be able to demonstrate that what they claim is true for me to be able to accept it. I don’t have the luxury of taking their word for it. I either have to understand why they are correct, or be able to demonstrate statistically that, even though I don’t fully understand the science, I can depend on it in some context. That’s my responsibility as an engineer. That’s why the world (well, the civilized world anyway) is a more or less safe place, despite all of the rolling gasoline bombs we ride around in and all the cruise missiles we sit in to get from place to place.
To make matters worse, it’s not always black and white. Dr. Lovejoy makes an argument about the 0.03 Cº figure. It’s not (in my view anyway) that he is certainly wrong; the problem is that the approach he uses doesn’t appear to pass the standard rigor tests I’m used to. Sure, he could be right. Has he demonstrated that he’s right? That’s the question.
This is plenty hard enough without guys like you running around injecting smear or logical fallacy into the mix. Give it a rest.

Michael Whittemore
April 17, 2014 7:00 am

Mark Bofill says: April 17, 2014 at 6:24 am
I don’t think you understand who Monckton is… Go and research who he is and what he has done. Until then don’t tell me how I should behave.

Mark Bofill
April 17, 2014 7:31 am

Michael,

I don’t think you understand who Monckton is… Go and research who he is and what he has done. Until then don’t tell me how I should behave.

How can you not understand this? Lord Monckton could be Adolf Hitler or the second coming of Christ himself and it wouldn’t make the slightest bit of difference regarding the validity of his arguments. Same for Dr. Lovejoy, same for you, same for me. If the most evil guy in the world observes that gravity near the earth accelerates objects at 9.8 m/s^2 in a vacuum, what does his evil have to do with the correctness of his observation? If Mother Teresa were to tell you cyanide isn’t poisonous to humans, what difference does it make that it’s Mother Teresa saying it?
You understand this. You’re choosing to ignore it. Am I right?

climatereason
Editor
April 17, 2014 7:38 am

Michael
Neither you nor DB Stealey has answered my question as to why the Arctic ice cores are seen as such a good proxy for the globe. The Arctic – as it is currently – often seems to go its own way.
tonyb

Michael Whittemore
April 17, 2014 7:51 am

Monckton Bunkum Part 1 – Global cooling and melti…: http://youtu.be/fbW-aHvjOgM

Michael Whittemore
April 17, 2014 8:11 am

Mark Bofill says: April 17, 2014 at 7:31 am
How can you not understand? Monckton is well known for what he does, and he has not rebutted Lovejoy’s published peer-reviewed paper or even had a comment posted to the journal. He simply feels that “By the time one takes into account deliberate biases from rent-seeking scientists wanting to sex up the temperature record to keep the panic dollars flowing”… that somehow this means Lovejoy should have had bigger error bars.
climatereason says: April 17, 2014 at 7:38 am
Tony B, did you not link some work you did on the Central England Temperature Record and claim it is good for a global perspective? Either way, no, it is definitely not a global record; it’s one spot on the whole Earth! Which is also why claiming that the Hockey Stick is at the end of it is ridiculous!

dbstealey
April 17, 2014 8:48 am

TonyB,
Polar temperatures are a good proxy for global T over long time frames per the 2nd Law. Heat does not remain in one place; it tends to even out, planet-wide. If polar T begins to rise or fall over long time frames, it is reflecting what is happening with global temperatures.
Polar temperatures are not precise records of the past. But they do show long term trends, which are corroborated by other proxies like stalagmites, tree rings, etc. Ice cores are accepted by the scientific community as being representative of long term changes in global temperatures. Only a few anti-science cranks like Michael Whittemore believe that both Poles [which both show concurrent rises and declines in T over long time frames] are ‘one spot on the whole earth’. But there is no teaching people like that, is there?
M. Whittemore says:
I don’t think you understand who Monckton is.. Go and research who he is and what he has done… Monckton Bunkum… And so on.
Whittemore, you are a truly despicable character. Take your ad hominem attacks and accusations elsewhere. Readers of thinly-trafficked blogs like SkS might like your invective, but it is not appreciated here. Lord Monckton is a better man than you will ever be, doubled and squared.
And:
The graph is fine db! It’s the fact it ends at 1855!!! That is not the Hockey Stick at the end.
Now that we’re back to discussing science facts and observations, this chart is recent. So what is your complaint now? And where is Mann’s bogus Hokey Stick? It seems to have disappeared for the past 17+ years. That is a long time, especially considering all the wild-eyed predictions from the alarmist crowd — all of which have been falsified.
When one’s predictions have all turned out flat wrong, normal people re-assess their position. They admit that their conjecture was wrong. But not the swivel-eyed lunatics who comprise the alarmist crowd. They refuse to believe what the planet is clearly telling us: that CO2 is not the bogey man they thought it was. In fact, there is no evidence whatever of any downside due to the rise in CO2. It is harmless, and beneficial. More is better. Not that Mr Whittemore could ever accept that fact.
Finally, Michael Mann’s predictions have been debunked. ALL of them. He has zero credibility now. The question is: why do you still Believe? Planet Earth is decisively showing that Mann was flat wrong.
Where is your god now? Time for a new religion, eh?

Mark Bofill
April 17, 2014 9:16 am

Michael,
Lord Monckton puts forward his (rather uncharitable) opinion of the motivation of ‘rent-seeking scientists wanting to sex up the temperature record’. If you find that offensive, OK. I can understand where you’re coming from. I’m not much concerned about anybody’s opinion in that regard.
So vehemently disagree with Lord Monckton’s opinions. Let’s say I do too, just for the sake of argument. OK? Just to see where it takes us. That dirty Lord Monckton, how dare he make such outrageous claims about panic dollars. Preposterous!
Great. Now that we’ve gotten that out of the way, what’s left? Oh, the error bars.
What’d be really great is if I could show that pesky Viscount that he’s full of it. Wouldn’t it? I think so! Let’s try. Let’s take a look at error bars and how we deal with them. Let’s google it.
“Errors: What they are, and how to deal with them” – a PDF of a series of lectures from Rutgers University. I read the document and see that the basic approach for combining uncertainties is to add them in quadrature (“Refinement of the rules for independent errors”). Is that what Dr. Lovejoy did? Hmm. Doesn’t look like it.
“Error Propagation and the Meaning of Error Bars” – another PDF, this one from the University of British Columbia. This one is more in depth. There’s a method if I know the covariances of the variables. They warn of some pitfalls, and talk about the more general approach again of combining uncertainties in quadrature. They warn that even though this is widely used, it won’t necessarily give the correct answer! Combining uncertainties is a complicated subject. Did Dr. Lovejoy deal with this? Hmm. It looks to me like he dismissed the whole issue.
“ERROR ANALYSIS INTERLUDES”: …Adding in quadrature is the rule we use when we think the two errors are independent of each other. In the example above, it’s quite likely that your measurement is too low and your friend’s is too high, or vice versa. However, if the two quantities are not independent of each other, for example if the same person paces off both halves of the field with a systematic error in stride length that affects both halves equally, then adding in quadrature isn’t the right thing to do. In that case, the errors would simply add together…

So on and so forth.
It is not for me to presume to teach Dr. Lovejoy, who obviously has a vastly stronger grasp of maths than I do, how to handle propagation of uncertainty. I’m not delusional. I’m not going to disrespect the man by approaching him with my bachelor degree understanding of how to handle this. He already knows it better than I do. He teaches guys like me how to do this stuff for a living. I don’t know why he approached the problem the way he did. I can speculate all day, and at the end of the day it doesn’t matter.
What does matter is that I’m not permitted to take his word for it. For whatever reason, he wrote his argument the way he did. It doesn’t look like he handled the error bars properly. If I’m making an error, or overlooking something that makes his approach OK, I’m sure somebody sharper than me will eventually bring that to light. Until then though, it’s not persuasive.
Darn, right? It would have been lovely if the evil Lord Monckton had it wrong and I could have shown it. Oh well. That’s the way it goes.
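To make the quadrature point above concrete, a toy sketch (made-up component uncertainties, not any particular dataset):

```python
# Toy illustration of the rule quoted above: independent errors combine in
# quadrature, while fully correlated (systematic) errors simply add.
import math

component_errors = [0.10, 0.10, 0.08]     # hypothetical component uncertainties, K

in_quadrature = math.sqrt(sum(e ** 2 for e in component_errors))  # independent case
added_directly = sum(component_errors)                            # fully correlated case

print(f"independent (quadrature): ±{in_quadrature:.2f} K")   # about ±0.16 K
print(f"fully correlated (sum):   ±{added_directly:.2f} K")  # about ±0.28 K

# The gap between the two is why a small spread between series need not imply
# a small overall error if the series share systematic biases.
```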

Mark Bofill
April 17, 2014 9:45 am

Huh, did I hose my links? Must’ve screwed up the formatting. Here:
Errors: What they are, and how to deal with them
http://www.physics.rutgers.edu/ugrad/389/errors.pdf
Error Propagation, and the Meaning of Error Bars – UBC …
http://www.phas.ubc.ca/~oser/p509/Lec_10.pdf
ERROR ANALYSIS INTERLUDES – the Department of Physics
http://www.phys.uconn.edu/~hamilton/…/laberrors.pdf

Michael Whittemore
April 17, 2014 10:24 am

dbstealey says:
April 17, 2014 at 8:48 am
LOL 🙂 I am extremely happy you have a new graph db, if anything, you have made all this worth it.
Mark Bofill says:
April 17, 2014 at 9:16 am
If you have one data set that, when averaged, says that globally May 1901 was, say, 21 °C, and then you have three other data sets that all say May 1901 was within 0.03 °C of 21 °C, how many data sets would you need before you could have high confidence that globally May 1901 was 21 °C ± 0.03 °C?

Mark Bofill
April 17, 2014 11:10 am

Michael,
We’re scrambling together two different concepts.
1) Using the error bars:
If I had one data set that says globally May 1901 was 21C +/- .03C and the errors were done properly I’d be confident. The trouble may be that we’ve become desensitized to seeing mutually exclusive claims being treated as if they’re all correct. If I have a dataset that says 21 +/- .03C and another dataset that says 22 +/- .03C, a mistake has been made somewhere. They can’t both be right, the error bars don’t even overlap. In this case, I treat all of the error bars as suspect. Lest you start to think this is unreasonable, remember how nice it is that cars don’t explode for no apparent reason when you crank them.
1b) Applying this to the real problem at hand:
Say, let’s check Lord Monckton’s claim about Hadcrut4. Maybe he was blowing smoke and we can catch him at it:
http://www.metoffice.gov.uk/hadobs/hadcrut4/diagnostics.html
Looks to me like the error bars on Hadcrut4 run around 0.2C prior to 1900, actually, according to the method of Mark 1 Eyeballing. Maybe 0.1C or 0.15C in modern times? Hey, the magic of fact checking: it looks like Lord Monckton may have overstated the case.
This being said, the error bars are a heck of a lot more than 0.03C. If we don’t know the temperature in 2000 better than +/- .1C, how are we to swallow that we know the temperature to within +/- 0.03C hundreds of years ago?
2) Forget the error bars and look at each dataset as an observation:
With enough observations we can get a handle on this. The expected value of a random variable is the mean, so averaging the observations gives us an estimate of the expected / most likely value. With enough observations we use the sample mean and the observations to compute the sample variance and standard deviation, and we can use those to estimate confidence intervals to put probability bounds on the estimate of the mean. The trouble with this approach is it takes more than 3 or 4 observations to get a good estimate, unless you already know the characteristics of the population you’re sampling. The estimate improves with the number of observations. I think the rule of thumb is around 40 minimum when you don’t know anything about the population ahead of time.

climatereason
Editor
April 17, 2014 11:57 am

Michael
Britain is a ‘weather vane’, surrounded by ocean and very well tied in with weather systems, the AO, the jet stream, etc. The Arctic is not.
There are many scientists who have done studies on CET and its correlation to the much wider geographic area, including Phil Jones and De Bilt. I think the relationship to the Northern Hemisphere is stronger than to the entire globe.
tonyb

Michael Whittemore
April 17, 2014 12:01 pm

Mark Bofill says:
April 17, 2014 at 11:10 am
Great post. I had assumed you would be up to the task if I posed you a question.
From what the paper says “e is the measurement error. The latter can be estimated from the differences between the various observed global series and their means; it is nearly independent of time scale” and from what Lovejoy said “The key is the accuracy of the data at century scales, not at monthly or even annual scales”. There seems to be more to it?

Michael Whittemore
April 17, 2014 12:06 pm

Lovejoy references this paper to explain it “How scaling fluctuation analyses change our view of the climate and its models” http://www.earth-syst-dynam-discuss.net/3/C793/2013/esdd-3-C793-2013.pdf

Mark Bofill
April 17, 2014 12:21 pm

Thanks Michael.

and e is the measurement error. The latter can be estimated from the differences between the various observed global series and their means; it is nearly independent of time scale and sufficiently small (+/- 0.03K) that we ignore it.

I think an analogy for what Dr. Lovejoy is doing here could be stated like this: I measure a flat. I use my ruler, I use my measuring tape, and I use my caliper. I get 3 numbers that are awfully close, and the difference between the numbers is darn small. This is the measurement error; it’s small, and I’m ignoring it. I think the trouble with this is that these global series might not be reliable the way that a ruler and a measuring tape and a set of calipers are, and that this approach isn’t the safe way to handle the uncertainty. If the numbers were close by coincidence, or due to systematic error, there’s no protection in this approach against that risk.

…The key is the accuracy of the data at century scales, not at monthly or even annual scales…

Unfortunately, you’ve come to the well once too often and the well is dry. I don’t understand what Dr. Lovejoy is saying here. I don’t want to muddy the waters by stating what I think he seems to be saying, because what I think he seems to be saying here doesn’t make sense. The two possibilities I can think of are: 1) Dr. Lovejoy is making an argument that doesn’t make any sense because it’s invalid, and 2) Dr. Lovejoy is making an argument that I’m utterly missing the point of. Between the two possibilities I’m erring on the side of caution for now and assuming it’s #2. But I haven’t figured this out yet.

Michael Whittemore
April 17, 2014 12:25 pm

climatereason says: April 17, 2014 at 11:57 am
Towards the end of the last ice age, warming in the north melted ice (fresh water) into the ocean, which caused the Atlantic meridional overturning circulation (AMOC) to stop or slow. This caused the southern hemisphere to warm and the north to cool. My point is, there is a lot to consider when thinking globally. http://sciences.blogs.liberation.fr/files/shakun-et-al.pdf
Of course, if you want to have a good idea of the temperature of the Earth, it has been found that over the last 500 million years (as far back as we have proxies) CO2 has governed the temperature of the Earth http://www.sciencemag.org/content/330/6002/356.full.pdf. Even Lovejoy’s paper has shown that CO2 is the cause of the recent warming.

mkelly
April 17, 2014 12:50 pm

Michael Whittemore says:
April 17, 2014 at 12:25 pm
Of course if you want to have a good idea of the temperature of the Earth it has been found that over the last 500 million years (as far back as we have proxies) CO2 has governed the temperature of the Earth
+++++++++++++++
If CO2 were the control agent, why have an ice age when the ppm was higher than today?

Mark Bofill
April 17, 2014 1:04 pm

Oh. Maybe that was the point of all the discussion about whether or not the forcings were stationary!
So here’s what I think, and thanks Michael for making me think it through a few extra times. He’s talking about getting more accuracy on longer timescales. More observations give you a better idea of the mean, right? The reason it didn’t make sense to me was because I thought this was like shooting at a moving target; the mean changes and so observations over time aren’t observations of the same thing. But the mean doesn’t change if you can show the series is stationary!
I’ll reread it and think about it and rethink my position after that, but I betcha that’s what he’s getting at. My apologies for being dense if this was the case and it was obvious to everybody else. :/
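A quick numerical check of that intuition (a toy sketch, assuming a stationary series whose monthly errors are either independent or fully shared):

```python
# Toy check of the "accuracy improves with averaging period" intuition discussed
# above. Independent monthly errors shrink roughly as 1/sqrt(N) when averaged;
# an error shared by every month does not shrink at all - consistent with
# Lovejoy's remark that the accuracy is nearly independent of time scale.
import numpy as np

rng = np.random.default_rng(2)
trials, months = 2000, 1200                    # 2000 trials of a 100-year average

independent = rng.normal(0, 0.1, (trials, months))
shared = rng.normal(0, 0.1, (trials, 1)) * np.ones((1, months))

print(f"independent errors: century-mean error ≈ ±{independent.mean(axis=1).std():.4f} K")
print(f"shared error:       century-mean error ≈ ±{shared.mean(axis=1).std():.4f} K")
```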

Michael Whittemore
April 17, 2014 1:08 pm

mkelly says:
April 17, 2014 at 12:50 pm
The sun was 4% weaker 500 million years ago. It has slowly increased to its current output. When you add up the forcing from the sun and CO2 you get the temperature of the Earth over the last 500 million years. As the sun has gotten hotter over the last 500 million years, we have been lucky that mountains have weathered CO2 out of the atmosphere. This is why the Earth goes in and out of ice ages. The last ice age ended due to changes in the Earth’s orbit/tilt increasing sunlight to the northern hemisphere, which melted ice and stopped the Atlantic meridional overturning circulation (AMOC). This caused the southern hemisphere to warm up because all the warm water that went up to the north through the AMOC was stuck in the south. This caused large amounts of CO2 to be released from the southern ocean. Due to CO2 being a global gas, it mixed throughout the globe and warmed the Earth. The amount of CO2 that was released governed how warm the Earth was, and this is what we call the Holocene. As time goes by, very high mountains weather CO2 out of the atmosphere, which pushes us back into an ice age. But we have increased CO2, which has stopped the chance of another ice age and will warm the Earth.