Bob Carter's essay in FP: Policymakers have quietly given up trying to cut carbon dioxide emissions

Deal with climate reality as it unfolds

May 23, 2012

By Dr. Bob Carter

Over the last 18 months, policymakers in Canada, the U.S. and Japan have quietly abandoned the illusory goal of preventing global warming by reducing carbon dioxide emissions. Instead, an alternative view has emerged regarding the most cost-effective way in which to deal with the undoubted hazards of climate change.

This view points toward setting a policy of preparation for, and adaptation to, climatic events and change as they occur, which is distinctly different from the former emphasis given by most Western parliaments to the mitigation of global warming by curbing carbon dioxide emissions.

Ultimately, the rationale for choosing between policies of mitigation or adaptation must lie with an analysis of the underlying scientific evidence about climate change. Yet the vigorous public debate over possibly dangerous human-caused global warming is bedeviled by two things.

First, an inadequacy of the historical temperature measurements that are used to reconstruct the average global temperature statistic.

And, second, fueled by lobbyists and media interests, an unfortunate tribal emotionalism that has arisen between groups of persons who are depicted as either climate “alarmists” or climate “deniers.”

In reality, the great majority of working scientists fit into neither category. All competent scientists accept, first, that global climate has always changed, and always will; second, that human activities (not just carbon dioxide emissions) definitely affect local climate, and have the potential, summed, to measurably affect global climate; and, third, that carbon dioxide is a mild greenhouse gas.

The true scientific debate, then, is about none of these issues, but rather about the sign and magnitude of any global human effect and its likely significance when considered in the context of natural climate change.

For many different reasons, which include various types of bias, error and unaccounted-for artifacts, the thermometer record provides only an indicative history of average global temperature over the last 150 years.

The 1979-2011 satellite MSU (Microwave Sounding Units) record is our only acceptably accurate estimate of average global temperature, yet being but 32 years in length it represents just one climate data point. The second most reliable estimate of global temperature, collected by radiosondes on weather balloons, extends back to 1958, and the portion that overlaps with the MSU record matches it well.

Taken together, these two temperature records indicate that no significant warming trend has occurred since 1958, though both exhibit a 0.2°C step increase in average global temperature across the strong 1998 El Niño.

In addition, the recently quiet Sun, and the lack of warming over at least the last 15 years — and that despite a 10% increase in atmospheric carbon dioxide level, which represents 34% of all post-industrial emissions — indicate that the alarmist global warming hypothesis is wrong and that cooling may be the greatest climate hazard over coming decades.

Climate change takes place over geological time scales of thousands to millions of years, but unfortunately the relevant geological data sets do not provide direct measurements, least of all of average global temperature.

Instead, they comprise local or regional proxy records of climate change of varying quality. Nonetheless, numerous high-quality paleoclimate records, and especially those from ice cores and deep-sea mud cores, demonstrate that no unusual or untoward changes in climate occurred in the 20th and early 21st century.

Despite an estimated expenditure of well over $100-billion since 1990 on the search for a human global temperature signal, and assessed against this geological reality, no compelling empirical evidence yet exists for a measurable, let alone worrisome, human impact on global temperature.

Nonetheless, a key issue on which all scientists agree is that natural climate-related events and change are real, and exact very real human and environmental costs. These hazards include storms, floods, blizzards, droughts and bushfires, as well as both local and global temperature steps and longer term cooling or warming trends.

It is certain that these natural climate-related events and change will continue, and that from time to time human and environmental damage will be wrought.

Extreme weather events (and their consequences) are natural disasters of similar character to earthquakes, tsunami and volcanic eruptions, in that in our present state of knowledge they can neither be predicted far ahead nor prevented once underway. The matter of dealing with future climate change, therefore, is primarily one of risk appraisal and minimization, and that for natural risks that vary from place to place around the globe.

Dealing with climate reality as it unfolds clearly represents the most prudent, practical and cost-effective solution to the climate change issue. Importantly, a policy of adaptation is also strongly precautionary against any (possibly dangerous) human-caused climate trends that might emerge in the future.

From the Financial Post via Dr. Carter in email correspondence

Bob Carter, a paleoclimatologist at James Cook University, Australia, and a chief science advisor for the International Climate Science Coalition, is in Canada on a 10-day tour. He speaks at Carleton University in Ottawa on Friday.


236 Comments
richardscourtney
May 28, 2012 4:14 pm

Bart:
At May 28, 2012 at 11:00 am you say to me:

You said “within the measurement errors of the Mauna Loa data”. That suggests you mean that it threads the bumps and wiggles, which you have arbitrarily labeled “measurement errors”. That is not an unqualified “perfect”. In fact, it is not perfect at all.

No!
It means to within an accuracy of +/-0.2 ppm of the maximum monthly value of CO2 recorded at Mauna Loa each year. It is not an arbitrary choice.
For information on why that is the stated measurement error see e.g.
http://www.esrl.noaa.gov/gmd/ccgg/about/co2_measurements.html
Of course, you are welcome to dispute the stated measurement error (I think it is larger than stated) but that is what the compilers of the Mauna Loa data claim, so that was the ‘target’ we chose for the fit. And to within +/-0.2 ppm, we obtained a perfect fit for each annual datum using each model.
Richard
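
A minimal sketch of the acceptance test described above, assuming the stated +/-0.2 ppm tolerance; the observed and modelled values below are hypothetical placeholders, not the authors' data or code:

```python
# Hypothetical check: a fit counts as "perfect" if every annual residual
# lies within the +/-0.2 ppm measurement error stated for the Mauna Loa data.
import numpy as np

TOLERANCE_PPM = 0.2  # stated Mauna Loa measurement error

observed = np.array([338.8, 340.1, 341.5, 343.1, 344.7])  # hypothetical annual maxima
modelled = np.array([338.9, 340.0, 341.6, 343.0, 344.8])  # hypothetical model output

residuals = modelled - observed
within = np.all(np.abs(residuals) <= TOLERANCE_PPM)
print(f"max |residual| = {np.abs(residuals).max():.2f} ppm; "
      f"within +/-{TOLERANCE_PPM} ppm: {within}")
```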

Bart
May 28, 2012 4:25 pm

“And to within +/-0.2 ppm, we obtained a perfect fit for each annual datum using each model.”
For crying out loud, +/- 0.2 ppm in a month is BIG! Show me the derivative! Oh, you can’t. Well, then, I guess we have reached the end of the road.
This is a really simple problem. I am tired of holding your hands showing how these things work and getting ignorant abuse in return. Believe what you want. Eventually, you will learn that I am correct. Frankly, I don’t give a damn.

Bart
May 28, 2012 4:31 pm

No wonder this is such a monumental fiasco in the making. Even those smart enough to be leery of the orthodoxy are too smug about things they haven’t studied in adequate detail to make broad, sweeping statements of omniscience. I am so sick of this whole goddamned farce. Clowns to the left of me, jokers to the right… Ei yi yi…

richardscourtney
May 28, 2012 5:04 pm

rgbatduke:
All 6 of our models assume the system is moving towards an equilibrium which it never achieves.
There are three basic models which each assumes a different process dominates the behaviour of the carbon cycle.
* One is Ahlbeck’s model which assumes ocean/atmosphere exchange dominates the system.
* Another is a power equation (of the kind often used in process engineering) which assumes several different processes determine the flow into the sinks.
* The third is derived from biology, or rather biochemistry, because we were mindful that the absorption of CO2 takes place at least partly in the biosphere (the theory behind enzyme kinetics says the surface of an enzyme is continuously in equilibrium with its substrate and that a part of the substrate at the enzyme surface – its active site – will be digested to a product).
These models can each be adjusted to obtain a fit with the Mauna Loa data.
Then the anthropogenic emission for each year was added as an input to each basic model to obtain three more models (thus obtaining a total of 6 models).
The three “anthropogenic input” models can also be adjusted to obtain a fit with the Mauna Loa data.
We conclude that the assumption that the system is moving towards an equilibrium at an unknown rate permits almost any model with at least two variables to fit the data. Indeed, I was surprised by how easy it was to tune each model to get an accurate fit when it is assumed there are delays in the system and little immediate response to temperature and/or the anthropogenic input.
Simply, our models show the available data can indicate anything one wants and, therefore, cannot indicate anything specific. This confirms your statement saying

It isn’t enough to show that your very simple model works, you need to show that alternative models do NOT work, AND — and this is a very important and — you have to come up with a concrete physical model, not just point out a coincidence in the data.

Richard
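
A minimal sketch, in Python rather than whatever tooling the co-authors used, of the simplest relaxation-to-equilibrium model of the kind described above, dC/dt = k(C_eq − C), fitted by least squares; the CO2 values are illustrative stand-ins for the Mauna Loa annual data, and the point is only that two free parameters already suffice to track a smooth rise:

```python
# Two-parameter relaxation model fitted to a short, illustrative CO2 series.
import numpy as np
from scipy.optimize import curve_fit

years = np.arange(1959, 1969, dtype=float)
co2 = np.array([315.97, 316.91, 317.64, 318.45, 318.99,
                319.62, 320.04, 321.38, 322.16, 323.04])  # illustrative values

def relaxation(t, c_eq, k):
    # C(t) relaxes from the first observation toward an (unreached) equilibrium c_eq
    return c_eq + (co2[0] - c_eq) * np.exp(-k * (t - years[0]))

params, _ = curve_fit(relaxation, years, co2, p0=[400.0, 0.01])
residuals = co2 - relaxation(years, *params)
print("fitted C_eq = %.1f ppm, k = %.4f /yr" % tuple(params))
print("max |residual| = %.2f ppm" % np.max(np.abs(residuals)))
```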

richardscourtney
May 28, 2012 5:13 pm

Bart:
At May 28, 2012 at 4:25 pm you say to me:

I am tired of holding your hands showing how these things work and getting ignorant abuse in return.

I apologise for any “abuse” I have given you. It was not my intention.
It seems I have unwittingly used imperfect language on another thread today, so I am distressed to learn that I have also abused you without knowing it. Please be assured that it was not intentional.
Richard

rgbatduke
May 28, 2012 5:20 pm

I just wasted a considerable part of my Holiday morning showing you were dead wrong about this, and apparently, you just toss it off without even reading.
I did read it, although it was after I posted — I spent a fair amount of MY holiday morning replying to you as well. Nobody holding a gun to either of our heads, right?
You persist in missing what I’m saying. I agree that your solution works! But I am also saying that I have programmed your equations into matlab — I can’t make heads or tails of your using a circuit diagram analyzer (that in any event fails to run on my system) as an ODE solver, but I guess we use the tools we know. When I want to solve ODEs, I just use e.g. RKF45 or any of a half-dozen other general purpose adaptive ODE solvers, I don’t try to think up an analog circuit that has similar derivatives.
Using this, I run your equations. Running your equations, I can reproduce a scaled correlation between the derivative and delta at least as good as the actual data for a wide range of parameter choices. Some of those choices allow for H(t) to be responsible for significant gain over C_0 alone.
As far as I can tell, you don’t know what C_0 is or should be, what sets it, how it varies, or the timescale of variation — or at least, if you know, I have yet to see it since you — again, although I have indeed read your replies, perhaps I missed it — have yet to tell me what it is, precisely, that acts as a CO_2 reservoir that is a source/sink that sets the baseline, what its time constants are, and so on. So far, you have just established by fiat that dC_0/dt = k2 Delta T compared to some unknown baseline T_0 and some equally unknown baseline C_0(T_0) — perhaps even unknowable given a relaxation time of infinity or “very long”.
From this I conclude — again — not that your conclusion is wrong, but that it is not proven. You say “you can’t reproduce the data correlation with H(t) as a significant input compared to C_0”. I say that I have done so, and that if you download octave and run the code I posted, you can too. Hence the data correlation alone is not a smoking gun. You can say after the fact that there are physical grounds for excluding the solution range that I’ve discovered that works, and while it still won’t make the data correlation alone a smoking gun, it will better support the hypothesis.
If you can actually stop discussing electrical circuits for a moment and think concretely instead of metaphorically, perhaps you can actually turn the toy model into a model model, one with a physical basis and at least an estimate (ideally evidence based) for each term, rate, coupling and so on. Then instead of a toy model, you might have a theory for the CO_2 cycle. That theory would (presumably) be falsifiable — it might do things like make new predictions that can be checked, or people could criticize and argue about whether or not your proposed physical model is in fact well-justified.
In the meantime, you obviously think that you are right. I obviously remain unconvinced.
I close with two remarks, and then — having survived a near-death experience last week (long story, but we are all busy and giving time to discuss this, and for me time is still feeling very precious indeed) and recovered to where I can get about pretty well again, I’m off to teach physics for a few hours to my summer students before hooking up to my IV for more antibiotics. So I may not come back to a discussion where little progress is still being made, although I do admit that it has been most informative. Bear in mind that while I remain unconvinced that you are right because of the unanswered questions, I do at this point think it is more likely that you may turn out to be right when they are answered. I merely await the answers.
The two remarks are — how to put this in EE terms that you will grok — if you hit an RC circuit with a square wave pulse, the response (capacitor potential/charge) strictly lags the pulse. After all, how can its charge/voltage go up before the current that charges it is available?
I’m not going to tediously work through LC circuits (that do oscillate) or driven LRC circuits (where one can have a phase angle relative to a periodic driver) — if you are a practicing EE you probably understand that better than I do, although I do teach all that stuff literally twice a year to engineering students and sundry undergrads and am not exactly ignorant of ODEs and PDEs, as I teach graduate E&M and Quantum and mathematical physics from time to time. Am I perfect or too cool there to be wrong? Absolutely not, but neither is it safe to assume that I’m stupid and don’t understand ODEs.
I do not see anything whatsoever in the system of equations you posted and that I implemented in matlab that could possibly permit dCO_2/dt to lead a secular change in Delta T. If it could, I presume that we both agree that it would be a problem, an error, not a feature. So I still have no idea why you are trying to assert that it is “OK” somehow for CO_2 concentration to accelerate in front of the Delta T that is presumably its cause.
Second, because I do read what you write — and indeed looked at the woods for trees graphs in the generator and made a few alternative versions of them on my own at the very beginning of this sub-thread — I was aware and am reminded of the sliding window average, or averages(?), of the data.
If it is averages plural, so both have the same window, the problem most likely persists. Especially when one has a strong CO_2 acceleration at least 6 months before the causal Delta T in similarly windowed curves. But I agree, with so short a baseline it may be an artifact of the averaging process. If anything, it points to the need to run more detailed versions of your ODEs where at least some sources of short-timescale noise are present on both Delta T and H(t) or k1(t) or k2(t) or wherever, to try to replicate the visible noise in the raw data, and then see if sliding window averages are likely to make CO_2 pre-accelerate the Delta T driver. In the meantime, you aren’t even handwaving this problem away.
The fact is that neither of us knows why the derivative of CO_2 pre-accelerates compared to the Delta T signal in the processed data, in some cases rather remarkably. Your model will not reproduce this, I think, ever, but at the very least it would require some very peculiar noise or additional physics to explain — perhaps the CO_2 derivative is driven by just one feature of the surface temperature, ENSO temperatures for example, that can sometimes precede global temperatures by some lag (requiring a more complex model to get right). But this is data, so (within some unstated error) this is presumably what nature actually did, and until it is explained within whatever model you propose it is an inconsistency that will reach out and slap you in the face every time you try to argue that Delta T causes acceleration of CO_2 concentration — sometimes even before Delta T itself changes.
rgb
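
A minimal sketch of the kind of toy model at issue in this exchange, assuming (hypothetically) a single reservoir driven as dC/dt = k2·ΔT(t) + H(t); this is not Bart's actual model or the matlab code mentioned above. It shows the standard adaptive-ODE-solver approach (RK45) that rgb describes, and the lag point: the response of C cannot lead a step in its driver:

```python
# Toy CO2 reservoir driven by a temperature anomaly plus an emissions term.
import numpy as np
from scipy.integrate import solve_ivp

k2 = 2.0            # ppm/yr per degree; hypothetical coupling

def delta_T(t):     # hypothetical 0.2 C step at t = 10 yr (cf. the 1998 step)
    return 0.2 if t >= 10.0 else 0.0

def H(t):           # hypothetical anthropogenic input, ppm/yr
    return 0.1

def dCdt(t, C):
    return [k2 * delta_T(t) + H(t)]

sol = solve_ivp(dCdt, (0.0, 30.0), [315.0], method="RK45", dense_output=True)
for ti in np.linspace(0.0, 30.0, 7):
    # C only begins to respond after the step at t = 10; it never leads it
    print(f"t = {ti:5.1f} yr  C = {sol.sol(ti)[0]:7.2f} ppm")
```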

Bart
May 28, 2012 11:31 pm

rgbatduke says:
May 28, 2012 at 5:20 pm
“Some of those choices allow for H(t) to be responsible for significant gain over C_0 alone.”
NO THEY DO NOT. You haven’t READ WHAT I HAVE WRITTEN!!!!!!!!! Or, if you have, you have read selectively and glossed over the details you did not get, and not had the courtesy to ask me to help you understand, just gone straight to issuing pompous proclamations.
H(t) has NONE OF THE FINE DETAIL of the temperature series. H(t) IS KNOWN!!!! You CANNOT JUST PICK AND CHOOSE ANY H(t) YOU WANT!!!!!!
Let me reiterate that: You CANNOT JUST PICK AND CHOOSE ANY H(t) YOU WANT!!!!!! You CANNOT JUST PICK AND CHOOSE ANY H(t) YOU WANT!!!!!! You CANNOT JUST PICK AND CHOOSE ANY H(t) YOU WANT!!!!!! You CANNOT JUST PICK AND CHOOSE ANY H(t) YOU WANT!!!!!! You CANNOT JUST PICK AND CHOOSE ANY H(t) YOU WANT!!!!!! You CANNOT JUST PICK AND CHOOSE ANY H(t) YOU WANT!!!!!! You CANNOT JUST PICK AND CHOOSE ANY H(t) YOU WANT!!!!!!
To reproduce the fine detail, YOU MUST HAVE A BANDWIDTH WHICH ALLOWS IT THROUGH from the temperature forcing!!!! And, that bandwidth DISALLOWS human forcing as a significant contributor.
I showed you that in the simulations here, and here.
You appear to understand very little about filtering theory. You appear to have little understanding of the role of bandwidth. You and Richard both appear unable to comprehend that my case is built on the fine detail – you just ignore it, and Richard thinks he can shrug it off as noise.
You have been VERY RUDE, and I am fed up. I have told you everything you need to understand the issue. There is ZERO doubt about this: I have proved my case for anyone well versed in the requisite technical knowledge, whether you or Richard understand it or not.
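
A minimal sketch of the bandwidth argument being made here, using hypothetical signals rather than the actual series: a smooth, slowly varying (emissions-like) forcing has essentially no power at the frequencies where the fine detail lives, while a wiggly (temperature-like) driver does:

```python
# Compare high-frequency power in a smooth ramp vs. a wiggly driver.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 50, 1 / 12)                       # monthly samples over 50 yr
smooth = 0.01 * t**2                               # hypothetical slow emissions ramp
wiggly = np.cumsum(rng.normal(0, 0.05, t.size))    # hypothetical temperature-like driver

def high_band_fraction(x, dt=1 / 12, f_cut=0.5):
    # fraction of (linearly detrended) signal power above f_cut cycles/yr
    x = x - np.polyval(np.polyfit(t, x, 1), t)
    power = np.abs(np.fft.rfft(x))**2
    freqs = np.fft.rfftfreq(x.size, d=dt)
    return power[freqs > f_cut].sum() / power.sum()

print(f"high-frequency power fraction, smooth ramp:   {high_band_fraction(smooth):.2e}")
print(f"high-frequency power fraction, wiggly driver: {high_band_fraction(wiggly):.2e}")
```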

Bart
May 28, 2012 11:32 pm

“The fact is that neither of us knows why the derivative of CO_2 pre-accelerates compared to the Delta T signal in the processed data, in some cases rather remarkably. “
AAAAAAAAAAARRRRRRRRRRGGGGGGGGHHHHHHHHH!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Can you READ?????

Bart
May 28, 2012 11:38 pm

richardscourtney says:
May 28, 2012 at 5:04 pm
“The three “anthropogenic input” models can also be adjusted to obtain a fit with the Mauna Loa data.”
Not in the FINE DETAIL. You have stated as much plainly when you specified the error bars. Precisely what scale do you see in this plot?

Carbon500
May 29, 2012 12:41 am

There are two things which gnaw at me about AGW.
Firstly, as far as I know no-one since John Tyndall has formally set up a proper experiment to assess the impact of CO2 in a true lab model – i.e. real gases and real water vapour at known concentrations and in a professional laboratory, not equations in a computer or on paper.
This wouldn’t be a true mimic of the atmosphere of course, but then maybe we’d have a better idea as to the real effects of CO2. I would have thought it essential to do this. Neither the Met Office nor the CDIAC can point me to such a paper, simply dismissing my query with ‘the physics is well known’.
Secondly, the ‘hockey stick’. Using proxy data, this purports to show anomalies relative to 1961 to 1990 within a fraction of a degree going back over a thousand years. This strikes me as technically implausible, to say the least. I’ve read Montford’s ‘Hockey Stick Illusion’.
I would appreciate any comments on both the above.

richardscourtney
May 29, 2012 2:19 am

Bart:
At May 28, 2012 at 11:38 pm you quote my saying:
“The three “anthropogenic input” models can also be adjusted to obtain a fit with the Mauna Loa data.”
And you reply saying:

Not in the FINE DETAIL. You have stated as much plainly when you specified the error bars. Precisely what scale do you see in this plot?

There is no “FINE DETAIL” to be seen at resolution better than the “error bars”: there is only meaningless random variation.
You state your misunderstanding when you say “[I] specified the error bars”.
No, nobody is entitled to decree “error bars”:
1. nature determined the accuracy of the data,
2. the Mauna Loa Laboratory (MLL) assessed what nature determined, and
3. I (with my co-authors) accepted what MLL assessed.
Richard

richardscourtney
May 29, 2012 3:29 am

rgbatduke:
Robert, I was not aware of your health problems until I read your post.
Clearly, at present your health is by far the most important concern which you – and any interacting with you – should have.
Please take care of yourself. Take every care not to overexert yourself: only help us here on WUWT if that gives you a distracting pleasure.
You are in my thoughts and I hope you are not offended that you are in my prayers.
Richard

NickB.
May 29, 2012 7:01 am

Gail (and Smokey),
Thanks for the replies. Like I said, it’s been a while (a couple of years) since I’ve been out here and I have obviously been missing out on some interesting discussions.
Best Regards,
-NickB.

Gail Combs
May 29, 2012 7:49 am

Carbon500 says:
May 29, 2012 at 12:41 am
…. Using proxy data, this purports to show anomalies relative to 1961 to 1990 within a fraction of a degree going back over a thousand years. This strikes me as technically implausible, to say the least. I’ve read Montford’s ‘Hockey Stick Illusion’.
I would appreciate any comments on both the above.
_________________________________________
You might want to take a look at AJ Strata’s (NASA Engineer) Error analysis: http://strata-sphere.com/blog/index.php/archives/11420

Bart
May 29, 2012 9:06 am

richardscourtney says:
May 29, 2012 at 2:19 am
“There is no “FINE DETAIL” to be seen at resolution better than the “error bars”: there is only meaningless random variation.”
Says who? What tests were done? What correlations are involved? Can it be explained by observations?
Yes. It’s right there. Dismissing it as “error” is merely a way of packing it away because nobody knew how to explain it. It’s the fine detail of the temperature dependence. And, it shows that the CO2 level is temperature driven.
richardscourtney says:
May 29, 2012 at 3:29 am
I hope you are feeling better, too, Robert. It is not my intent to be churlish in such a situation. But, I must point out the facts.

richardscourtney
May 29, 2012 9:28 am

Bart:
At May 29, 2012 at 9:06 am you quote me having said

“There is no “FINE DETAIL” to be seen at resolution better than the “error bars”: there is only meaningless random variation.”

then you ask me:

Says who? What tests were done? What correlations are involved? Can it be explained by observations?

I answer:
The definition of measurement error says that.
If two values differ by less than the measurement error then they are not discernibly different (i.e. they are ‘the same’ within the measurement error). And any data with values between those two data are also ‘the same’. In other words, each datum within the measurement error has ‘the same’ value as every other datum within the measurement error.
Hence, for data that differ by less than the measurement error
* any discerned correlations are spurious
and
* the explanation of those observations is that they are not discernibly different from each other.
However, one can statistically process a set of such data to obtain e.g. a mean (but then one needs to obtain an RMS error for the mean).
This is very basic measurement theory which I am surprised you do not know.
Richard
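
A minimal sketch of the closing point, with hypothetical repeated readings: values that agree to within the measurement error are individually indistinguishable, but a set of them can still be averaged, provided a standard error is attached to the mean:

```python
# Mean and standard error of the mean for readings inside the error band.
import numpy as np

readings = np.array([315.42, 315.51, 315.38, 315.56, 315.47])  # hypothetical, spread < 0.2 ppm
mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(readings.size)  # standard error of the mean
print(f"mean = {mean:.3f} ppm +/- {sem:.3f} ppm (standard error)")
```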

rgbatduke
May 29, 2012 10:05 am

I hope you are feeling better, too, Robert. It is not my intent to be churlish in such a situation. But, I must point out the facts.
richardscourtney says:

Thank you both (Bart and Richard) for your kind thoughts. I discovered that a sudden onset sore throat can actually be life threatening when it turns out to be caused by a deep-tissue infection in your throat — and I discovered it the hard way, in an episode that left me intubated post-emergency surgery for two days and in the hospital for two more. Intubation sucks. But hey, I lived, and am about to do my last dose of IV antibiotics and go on orals only. I’ve been back to work (teaching) for almost a week at this point, although I lectured in a whisper for the first few days.
And Bart, I appreciate that you feel the need to point out facts as you see them, and don’t interpret your passion as churlishness. I have very thick skin, and like Honey Badger, in the end I just don’t care (or at least, not that much). At this point we’ve (I suspect) communicated to each other what we have to say, and aren’t making much progress, so it is probably time to give the topic a rest. In any event, I’m going back to doing recitation for the first time in two weeks this afternoon to relieve my poor TA from having to run the whole thing herself, and will have less time to spend on WUWT for a variety of reasons quite outside of that, as well. Entrepreneurial stuff that got blown off for the last couple of weeks.
So pardon me gentlemen, if I retire from the debate for at least the time being. Perhaps if/when I have time to return to the matlab code and fancy it up a bit I may start a top post on the subject and we can resume.
rgb

Carbon500
May 29, 2012 12:39 pm

Gail Combs:
‘You might want to take a look at AJ Strata’s (NASA Engineer) Error analysis:’
Thanks for the link – a fascinating and thoroughly enjoyable ‘read’.

Bart
May 29, 2012 12:55 pm

richardscourtney says:
May 29, 2012 at 9:28 am
“The definition of measurement error says that.”
No, that is not it at all. Measurement “error” is everything in the residual that you cannot account for. If the measurement error is uncorrelated (white noise), then and only then have you extracted all of the information out of the data which can be obtained. This is why there are statistical tests for whiteness. I am surprised you do not know this.
This measurement error, at the level you are claiming, is far from white. All you have to do is look at the plot. The correlations which exist below the level at which you are cutting off are obvious. They are what links the temperature to the CO2 level. No wonder you are adrift, when you have arbitrarily excluded from consideration the mother lode of information contained in these data.
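
A minimal sketch of the kind of whiteness test invoked here — a hand-rolled Ljung-Box statistic applied to hypothetical residuals (Bart's own tests and data are not shown in the thread); a small p-value means the "residuals" still carry structure and the "error" label is premature:

```python
# Ljung-Box test: are the residuals uncorrelated (white)?
import numpy as np
from scipy.stats import chi2

def ljung_box(residuals, n_lags=10):
    n = residuals.size
    r = residuals - residuals.mean()
    denom = np.sum(r**2)
    acf = np.array([np.sum(r[k:] * r[:-k]) / denom for k in range(1, n_lags + 1)])
    q = n * (n + 2) * np.sum(acf**2 / (n - np.arange(1, n_lags + 1)))
    return q, chi2.sf(q, df=n_lags)   # statistic and p-value

rng = np.random.default_rng(1)
white = rng.normal(0, 0.1, 200)                        # genuinely uncorrelated "error"
structured = white + 0.3 * np.sin(np.arange(200) / 5)  # "error" hiding a signal
for name, res in [("white", white), ("structured", structured)]:
    q, p = ljung_box(res)
    print(f"{name:10s}: Q = {q:7.1f}, p = {p:.3f}")
```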

Bart
May 29, 2012 1:05 pm

richardscourtney says:
May 29, 2012 at 2:19 am
“No, nobody is entitled to decree “error bars”:
Except, apparently, MLL.

richardscourtney
May 29, 2012 2:13 pm

Bart:
Your posts at May 29, 2012 at 12:55 pm and May 29, 2012 at 1:05 pm are silly. I will try to explain the matter.
Consider a barometer graduated in tenths of a p.s.i. but only calibrated to whole p.s.i. It can be read to tenths of a p.s.i. but is only accurate to whole p.s.i.
The barometer may vary, for example, in response to temperature change. But that variation is not relevant if it is never greater than one p.s.i., because the barometer is only calibrated to one p.s.i.
However, the barometer can be read to a tenth of a p.s.i. Measurements of tenths of a p.s.i. can be recorded, but they are only accurate to +/-1 p.s.i. (i.e. the measurement error).
Analysing the pressure changes at better than the measurement accuracy of one p.s.i. is very misleading. This is because variations of indicated pressure less than one p.s.i. may be an effect of temperature change (n.b. not an indication of pressure change).
In this case, temperature is a variable affecting the values of the data within the measurement error.
Now consider the Mauna Loa data.
The Mauna Loa data contains many possible – both known and unknown – variables that affect the values of the data within the determined measurement errors. Those variables include variations in the measurement method.
The Mauna Loa data are accurate to +/-0.2 ppm. So, any variations less than +/-0.2 ppm are meaningless: they could be indicating variations in the measurement procedure (e.g. whether or not the coffee-maker was being used in the lab).
Richard
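
A minimal sketch of the barometer analogy in code, with hypothetical readings: two values count as distinguishable only if they differ by more than the calibration error, however fine the reading resolution:

```python
# Readable to 0.1 p.s.i., calibrated (accurate) only to +/-1 p.s.i.
CALIBRATION_ERROR_PSI = 1.0

def distinguishable(a_psi, b_psi, error=CALIBRATION_ERROR_PSI):
    return abs(a_psi - b_psi) > error

print(distinguishable(14.3, 14.9))  # False: within +/-1 p.s.i., 'the same'
print(distinguishable(14.3, 15.6))  # True: differ by more than the calibration error
```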

Bart
May 29, 2012 3:30 pm

Richard – Why do you continue to refuse the evidence right before your eyes?
You are telling me the agreement between the temperature and CO2 rate of change in this plot is random happenstance?
You are imputing God-like powers to MLL. They don’t know how accurate their measurements are. They just know the level below which it no longer behaves as they expect. But, we do know why – it is because of the temperature correlation.
Even if you have an instrument which is actually quantized, so that you actually cannot instantaneously see below a particular level, you can still get resolution below that level over time when the signal you are looking for is low frequency and quantization levels are pseudo-randomly traversed. Electrical engineers do it all the time.
So, why don’t you just do like I ask, and try looking at your CO2 derivatives, and see how well they correlate with the temperature?

richardscourtney
May 29, 2012 3:58 pm

Bart:
I am not claiming any “powers” (deific or otherwise).
I merely try to explain very basic measurement theory. Press your case if you like but, as Robert tried to explain to you, your arguments will not gain traction if you do not consider fundamental empirical procedures.
I don’t know what your plot indicates and nor do you. I am not claiming the apparent “relationship” is “random happenstance”. I am stating that it is not possible to know what it is. Please explain the “powers” you think you have which enable you to claim the relationship you have detected is not induced by the measurement procedure.
And please note the providers of the MLL data you are analysing have stated a measurement accuracy which says they do not trust the data to have the resolution your plot analyses. That is the evidence before our eyes which you are ignoring but I am not.
Richard

Bart
May 29, 2012 3:58 pm

“Electrical engineers do it all the time.”
Anticipating that you might not believe it, I whipped this up to show you. In the top box, the original signal is reconstructed from the quantized data and the two series lie on top of each other.
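
The plot itself is not reproduced here, but a minimal sketch of the oversampling effect described, with hypothetical numbers: a slow signal smaller than one quantization step is recovered by low-pass filtering the quantized, noise-dithered samples:

```python
# Recover a sub-quantization-step signal by averaging dithered samples.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 5000)
signal = 0.3 * np.sin(2 * np.pi * 0.1 * t)   # slow signal, amplitude < 1 step
noisy = signal + rng.normal(0, 1.0, t.size)  # noise pseudo-randomly traverses the levels
quantized = np.round(noisy)                  # quantizer with a step of 1.0

window = 200                                 # simple moving-average low-pass filter
recovered = np.convolve(quantized, np.ones(window) / window, mode="same")

rms_err = np.sqrt(np.mean((recovered - signal)**2))
print(f"RMS reconstruction error: {rms_err:.3f} (signal amplitude 0.3, step 1.0)")
```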

Bart
May 29, 2012 4:14 pm

“So, why don’t you just do like I ask, and try looking at your CO2 derivatives, and see how well they correlate with the temperature?”
Use a running average filter or a series of them to get the noise down if the data are too variable to see it. You may note that, in the woodfortrees plot, I had the gadget apply a 24 month running average to the data. A non-causal running average, mind you – something Robert has not yet managed to wrap his head around. The delay of an average is half the width of the average, so you will have to slide the data up by that amount to remain current.
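
A minimal sketch of the running-average bookkeeping described here, with a hypothetical series: a trailing (causal) 24-month average delays features by about half the window, so the smoothed series must be slid back by roughly 12 months to stay aligned:

```python
# Measure the delay of a trailing 24-month running average.
import numpy as np

window = 24
x = np.arange(120, dtype=float)          # 10 yr of monthly samples
series = np.sin(2 * np.pi * x / 60)      # hypothetical 5-yr oscillation

kernel = np.ones(window) / window
trailing = np.convolve(series, kernel, mode="full")[:series.size]  # causal average

# find the backward shift that best re-aligns the average with the input
corrs = [np.corrcoef(series[window:-window],
                     np.roll(trailing, -s)[window:-window])[0, 1]
         for s in range(window)]
print(f"best re-alignment shift: {int(np.argmax(corrs))} months (about window/2 = {window // 2})")
```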
