Bob Carter's essay in FP: Policymakers have quietly given up trying to cut ­carbon dioxide emissions

Deal with climate reality as it unfolds

  May 23, 2012

By Dr. Bob Carter

Over the last 18 months, policymakers in Canada, the U.S. and Japan have quietly abandoned the illusory goal of preventing global warming by reducing carbon dioxide emissions. Instead, an alternative view has emerged regarding the most cost-effective way in which to deal with the undoubted hazards of climate change.

This view points toward setting a policy of preparation for, and adaptation to, climatic events and change as they occur, which is distinctly different from the former emphasis given by most Western parliaments to the mitigation of global warming by curbing carbon dioxide emissions.

Ultimately, the rationale for choosing between policies of mitigation or adaptation must lie with an analysis of the underlying scientific evidence about climate change. Yet the vigorous public debate over possibly dangerous human-caused global warming is bedeviled by two things.

First, an inadequacy of the historical temperature measurements that are used to reconstruct the average global temperature statistic.

And, second, fueled by lobbyists and media interests, an unfortunate tribal emotionalism that has arisen between groups of persons who are depicted as either climate “alarmists” or climate “deniers.”

In reality, the great majority of working scientists fit into neither category. All competent scientists accept, first, that global climate has always changed, and always will; second, that human activities (not just carbon dioxide emissions) definitely affect local climate, and have the potential, summed, to measurably affect global climate; and, third, that carbon dioxide is a mild greenhouse gas.

The true scientific debate, then, is about none of these issues, but rather about the sign and magnitude of any global human effect and its likely significance when considered in the context of natural climate change.

For many different reasons, which include various types of bias, error and unaccounted-for artifacts, the thermometer record provides only an indicative history of average global temperature over the last 150 years.

The 1979-2011 satellite MSU (Microwave Sounding Units) record is our only acceptably accurate estimate of average global temperature, yet being but 32 years in length it represents just one climate data point. The second most reliable estimate of global temperature, collected by radiosondes on weather balloons, extends back to 1958, and the portion that overlaps with the MSU record matches it well.

Taken together, these two temperature records indicate that no significant warming trend has occurred since 1958, though both exhibit a 0.2C step increase in average global temperature across the strong 1998 El Niño.

In addition, the recently quiet Sun and the lack of warming over at least the last 15 years — and that despite a 10% increase in atmospheric carbon dioxide level, which represents 34% of all post-industrial emissions — indicate that the alarmist global warming hypothesis is wrong and that cooling may be the greatest climate hazard over coming decades.

Climate change takes place over geological time scales of thousands through millions of years, but unfortunately the relevant geological data sets do not provide direct measurements, least of all of average global temperature.

Instead, they comprise local or regional proxy records of climate change of varying quality. Nonetheless, numerous high-quality paleoclimate records, and especially those from ice cores and deep-sea mud cores, demonstrate that no unusual or untoward changes in climate occurred in the 20th and early 21st century.

Despite an estimated spend of well over $100-billion since 1990 looking for a human global temperature signal, assessed against geological reality no compelling empirical evidence yet exists for a measurable, let alone worrisome, human impact on global temperature.

Nonetheless, a key issue on which all scientists agree is that natural climate-related events and change are real, and exact very real human and environmental costs. These hazards include storms, floods, blizzards, droughts and bushfires, as well as both local and global temperature steps and longer term cooling or warming trends.

It is certain that these natural climate-related events and change will continue, and that from time to time human and environmental damage will be wrought.

Extreme weather events (and their consequences) are natural disasters of similar character to earthquakes, tsunami and volcanic eruptions, in that in our present state of knowledge they can neither be predicted far ahead nor prevented once underway. The matter of dealing with future climate change, therefore, is primarily one of risk appraisal and minimization, and that for natural risks that vary from place to place around the globe.

Dealing with climate reality as it unfolds clearly represents the most prudent, practical and cost-effective solution to the climate change issue. Importantly, a policy of adaptation is also strongly precautionary against any (possibly dangerous) human-caused climate trends that might emerge in the future.

From the Financial Post via Dr. Carter in email correspondence

Bob Carter, a paleoclimatologist at James Cook University, Australia, and a chief science advisor for the International Climate Science Coalition, is in Canada on a 10-day tour. He speaks at Carleton University in Ottawa on Friday.

236 Comments

Bart
May 27, 2012 12:06 am

Correction: tau*(t-tau) = 1344, about 27% of the total. That puts anthropogenic contributions potentially up to roughly 1/3 of the total (there’s little point in being too precise here).
That’s not insignificant, but I just know I am missing something key here… Well, I’ll take it back up tomorrow.

Bart
May 27, 2012 12:32 am

Ah, the key point is that, with a narrower bandwidth (longer time constant), the output will not be able to track the temperature controlled CO2 level to any high degree of fidelity, and there should be a delay on the order of the time constant.
Judging by the plot (using SST now, hat tip to Werner), there is only a lag of a few years at most.
Suppose the time constant is about 3 years. Then tau*(t-tau) for t = 100 is 291, so the portion of anthropogenic responsibility would be something like 6%. If 2 years, 4%. That is about the range I would expect, about 4-6%.
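
The tau*(t-tau) factor used here is the steady-state ramp response of a first-order lag: for dy/dt = -y/tau + k*u with a ramp input u = Hdot*t, the output settles to k*tau*Hdot*(t - tau), versus k*Hdot*t^2/2 for a pure integrator. A minimal Python sketch of the resulting fraction (the function name is illustrative; the t = 100-year horizon is the one used in the comment):

```python
# Fraction of a 100-year ramp input retained by a first-order lag with time
# constant tau, relative to a pure integrator: tau*(t - tau) / (t^2 / 2).
def retained_fraction(tau, t=100.0):
    return tau * (t - tau) / (t ** 2 / 2.0)

for tau in (2.0, 3.0, 16.0):
    print(f"tau = {tau:4.1f} yr -> {100 * retained_fraction(tau):.1f}% of total")
# tau = 2 gives ~3.9% and tau = 3 gives ~5.8%, bracketing the 4-6% estimate;
# tau = 16 reproduces the tau*(t-tau) = 1344 (~27%) correction above.
```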

Bart
May 27, 2012 12:38 am

My estimates are all, obviously, coarse. Using PSDs and cross-spectra and other system id tools, it could all be nailed down much more accurately. There is much neglected information here to be mined, for any who wished to do so.

richardscourtney
May 27, 2012 12:57 am

rgbatduke:
I thank you for yet again saying what I tried to say, in words I wish I had the capability of formulating myself.
I applaud all of your post addressed to Bart at May 26, 2012 at 10:01 pm, and I draw attention to its saying:

We’re (as your “referees”, if you like:-) trying to help you out here…;-)

Richard
PS I hope you will forgive the abruptness of this post which is because I must now rush to other duties.

richardscourtney
May 27, 2012 1:08 am

Bart:
I am already late in leaving for an important appointment so this reply is rushed, but I do not want to keep you waiting for an answer to the question you pose to me at May 26, 2012 at 7:28 pm; viz.
I said

“I have published a variety of different models which each match the empirical data.”

And you ask

Show me one in which the relationship dCO2/dt = alpha*(T-T0) best holds. That is the right one, or the closest one to being right.

I answer that all 6 of our models match each annual datum of the Mauna Loa data within the stated measurement errors.
Richard
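
For readers who want to run the test Bart is asking for, a rough sketch follows. The two input series are synthetic stand-ins (substitute the annual Mauna Loa CO2 record and a global temperature anomaly series), and the fitting approach is one plausible reading of his request, not his actual procedure:

```python
import numpy as np

# Sketch of the requested check: fit dCO2/dt = alpha*(T - T0) to annual data
# and inspect the residuals. Both series below are synthetic stand-ins.
rng = np.random.default_rng(0)
years = np.arange(1959, 2012)
temp = 0.01 * (years - 1959) + 0.1 * rng.standard_normal(years.size)  # stand-in
co2 = 315.0 + np.cumsum(1.2 + 2.0 * temp)                             # stand-in

dco2 = np.diff(co2)  # annual change in CO2, ppm/yr
T = temp[1:]         # temperature in the year of each change

# dCO2/dt = alpha*(T - T0) is a straight line: dco2 = alpha*T - alpha*T0.
alpha, intercept = np.polyfit(T, dco2, 1)
T0 = -intercept / alpha
resid = dco2 - alpha * (T - T0)
print(f"alpha = {alpha:.2f} ppm/yr per degC, T0 = {T0:.2f} degC, "
      f"residual sd = {resid.std():.3f} ppm/yr")
```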

Myrrh
May 27, 2012 2:05 am

Jimbo says:
May 24, 2012 at 2:18 pm
All I ever ask Warmists is to provide evidence that man-made co2 caused most of the recent warming.
http://hockeyschtick.blogspot.com/2012/05/new-paper-finds-water-vapor-feedback-is.html

This argument, as outlined on that page, is not only redundant, it is a distraction from the core problem here – AGWScienceFiction has taken the Water Cycle out of the comic cartoon energy budget, completely. It’s a scientific fraud to begin with. The Greenhouse Effect is created by removing the Water Cycle. There is no Greenhouse Effect.
The Water Cycle cools the Earth by around 52°C from the 67°C it would be without water. Think deserts.
By taking out the Water Cycle they pretend that there is a rise of temp from minus 18°C to 15°C and claim this 33°C ‘warming’ is caused by ‘greenhouse gases’. The main greenhouse gas is water vapour, this gives 52°C cooling to get to down to the 15°C average norm.
This isn’t about science, this is about creating a fake fisics to promote all these anti-libertarian, anti small and medium business and direct theft of taxpayers money scams.
Why isn’t this basic science disjunct ever discussed by those supposedly ‘skeptic’? Why do they, like Singer, attack as “deniers” any who point out that there is no damn Greenhouse Effect because all the science is created on the never yet shown claim that carbon dioxide can raise the temp of the Earth? The fake fisics basics, like taking out the Water Cycle and claiming that the heat from the Sun, thermal infrared, doesn’t reach the Earth surface, is a sleight of hand to create this fictional “Greenhouse Effect”.
Bob Carter appears to be like those warmists in sheep's clothing who loudly announce how skeptical they are, but say that it's a good idea to tax carbon dioxide anyway…
Unless and until people get wise to the fact that it is a deliberately created long con, those who created it will continue to fudge to keep the rip-off going.
The emperor isn't wearing any clothes at all…

richardscourtney
May 27, 2012 6:29 am

Myrrh:
At May 27, 2012 at 2:05 am you say;

Bob Carter appears to be like those warmists in sheep's clothing who loudly announce how skeptical they are, but say that it's a good idea to tax carbon dioxide anyway…

No!
Bob Carter is one of the heroes of AGW-scepticism and is a leading public opponent of AGW-based taxation and energy policies in his home country of Australia.
Your smear is offensive nonsense.
Richard

richardscourtney
May 27, 2012 6:33 am

Terry Oldberg:
Thank you for your comment at May 26, 2012 at 5:36 pm.
I hope it helps Bart to see that I am trying to be constructive and not destructive of his work.
Richard

Bart
May 27, 2012 10:40 am

richardscourtney says:
May 27, 2012 at 1:08 am
“I answer that all 6 of our models match each annual datum of the Mauna Loa data within the stated measurement errors.”
Not good enough, Richard. Integrals hide a lot of detail. I want to know about the agreement with dCO2/dt = alpha*(T-T0).
richardscourtney says:
May 27, 2012 at 6:33 am
“I hope it helps Bart to see that I am trying to be constructive and not destructive of his work.”
I take no offense, and hope you have not taken any either. Confrontation is what drives new understanding. Collegiality and deference, taken to the extreme, are death to science, and I think largely to blame for the fiasco which is coming to a head in climate science.
I have nailed down my case. The excellent tracking of the temperature in dCO2/dt indicates that the bandwidth of the system is wide. Thus, the net influence of human input must be small. QED.

Terry Oldberg
Reply to  Bart
May 27, 2012 3:42 pm

Bart:
At this stage of its development, your model rates as no more than a conjecture, for it is insusceptible to being statistically tested. To raise it to the level of a hypothesis, you must make it testable. Steps toward making it testable include: a) describing each independent event in the underlying statistical population and b) modifying the model such that it predicts the outcomes of the events in this population.

Myrrh
May 27, 2012 11:42 am

richardscourtney says:
May 27, 2012 at 6:29 am
No!
Bob Carter is one of the heroes of AGW-scepticism and is a leading public opponent of AGW-based taxation and energy policies in his home country of Australia.
Your smear is offensive nonsense.

I stand corrected.
I think I must be suffering from an overdose of mild-mannered arguments which then leave me with a sting in the tail – here,
“Importantly, a policy of adaptation is also strongly precautionary against any (possibly dangerous) human-caused climate trends that might emerge in the future.”
Perhaps I’m just getting too damn cynical. As he said elsewhere: “Control the language, and you control the outcome of any debate”.

richardscourtney
May 27, 2012 12:23 pm

Bart:
At May 27, 2012 at 10:40 am you respond to my having said

“I answer that all 6 of our models match each annual datum of the Mauna Loa data within the stated measurement errors.”

By saying

Not good enough, Richard. Integrals hide a lot of detail. I want to know about the agreement with dCO2/dt = alpha*(T-T0).

I fail to understand how that is “not good enough”. It is a perfect fit to the data for each model (within the measurement errors of the Mauna Loa data when input with the annual anthropogenic emission and the annual temperature data).
Our models do not emulate the seasonal variation. They emulate the annual values of atmospheric CO2 concentration as reported by Mauna Loa Observatory. So, for the annual values (which show the long-term increase we are discussing)
Case 1
If the Mauna Loa data agrees with
dCO2/dt = alpha*(T-T0)
then so will each of our models.
Case 2
If the Mauna Loa data does not agree with
dCO2/dt = alpha*(T-T0)
then none of our models will.
In either case so what?
Richard

Bart
May 27, 2012 5:34 pm

richardscourtney says:
May 27, 2012 at 12:23 pm
Richard – I maintain that the data which confirms that anthropogenic CO2 is a negligible contributor to the overall level is this close agreement in the coarse as well as fine detail. I want to know what this plot looks like using your models. I want to know how well your models track the actual CO2 measurements.
If one is effectively a straight line through it, and the other wiggles up and down in sync, then the latter is to be preferred. If two wiggle up and down in sync, then the one which tracks better is the one to be preferred. I would be willing to bet that you will find that, assuming your models are physically realizable, the better the agreement, the lower your anthropogenic contribution will be, at least until you get into the range of less than 10%.

Bart
May 27, 2012 5:35 pm

Rather, the property of the data which confirms…

rgbatduke
May 27, 2012 7:18 pm

I have nailed down my case. The excellent tracking of the temperature in dCO2/dt indicates that the bandwidth of the system is wide. Thus, the net influence of human input must be small. QED.
Just for people that want to play — I wrote a small matlab program that implements Bart’s coupled ODEs and found a set of parameters that at least qualitatively reproduces the kind of derivative tracking he describes. Grab it at:
http://www.phy.duke.edu/~rgb/bart.m
(It would probably run under octave for those with no matlab handy.)
I tried to make the driving temperature increase linearly at 0.1/decade (why not, make it whatever you like) and modulated this linear increase with a 0.1 degree sine wave with a period of 11 years. Most of the other parameters aren’t picked to be particularly physical — I don’t know how to pick them physically, after all, since I don’t know the mechanisms or timescales involved — but to get good qualitative reproduction of the data you need something like tau1 = 1 and tau2 = 100 — at least a factor of 100 between the “short” timescale physics that drives atmospheric CO_2 to the “target” (equilibrium) concentration and the “long” timescale physics on which relaxation of the target concentration in response to the external temperature driver occurs.
There are several problems visible in the implementation of the program. The derivative of CO_2 concentration is tiny and has to be enormously amplified to show up at all on a commensurate scale with the thermal variation (which is already only the delta). Human added CO_2 has a nontrivial effect (depending on how extreme you make the decay constants) — it basically drives the atmospheric concentration up above the baseline/equilibrium concentration by some nearly constant amount. Whether or not the constant is small does not affect in any way the tracking of the rescaled temperature anomaly and derivative.
Which, I think, finally refutes your assertion that correlation between the derivative of CO_2 concentration (rescaled) and temperature anomaly proves that temperature must be the primary driver of CO_2 concentration. Your own equations show that if H is large, atmospheric CO_2 will be maintained well above the CO_2 equilibrium set-point you more or less independently evaluate in the second ODE while at the same time, maintaining the observed correlation between temperature anomaly and derivative of CO_2! Because you rescale the derivative to fit on the same scale as the anomaly, you can dump enormous amounts of CO_2 in and maintain atmospheric CO_2 well above “non-anthropogenic equilibrium” and still quite clearly see correlations in the rescaled wiggles.
Now, one can argue that I have the wrong parameters, that my parameters are unphysical, that other parameters produce good tracking where anthropogenic CO_2 is NOT important, but since we don’t have any actual physical mechanisms to propose here with any actual numbers that can be set by something other than curve fitting and playing around, we are left right where I originally said we were — yes, your observation is suggestive, but it is not sufficient to show that anthropogenic CO_2 is not a significant contribution. To be frank, I left H a constant, but it really isn’t. Since H is really a time-dependent monotone increasing function, there is pretty clearly parameter space in abundance to make nearly all of the CO_2 increase anthropogenic in origin and still track rescaled differentials of CO_2 concentration and temperature anomaly.
Now, I’m not — repeat not — asserting that your conclusion is incorrect. Only once again, that it is not sufficient, and will never be sufficient without a physical model to restrict the actual parameter ranges in the ODEs so that they exclude the possibility of anthropogenic forcing. That requires more than math, that requires experimental chemistry and much more. And even then, you would still have at best shown that THIS model works without anthropogenic forcing being dominant — you will have by no means proven that other models do not exist that are anthropogenic-forcing dominated and yet still have significant correlation between temperature fluctuations and CO_2 concentration change rates.
Finally, you will still not have addressed the causality issue. Yes indeed, when I run the model the CO_2 derivative lags the temperature anomaly as it should (it’s built into the model so it could hardly be any other way). But when I look at the data, those pesky prescient rises in the derivative of the CO_2 concentration that precede the supposedly tightly causal thermal fluctuation are worrisome indeed. One might be tempted to e.g. conclude that something else entirely is causing CO_2 to spike, and the global temperature is following it, not the other way around… or that both are caused by a third thing and can lag that thing by either order depending on a fourth or fifth thing.
It’s a hard problem. Let’s not conclude that it is solved yet, shall we?
Anyway, enjoy the program. One of the true joys of living now is that with tools like matlab/octave, one doesn’t have to speculate about the behavior of ODE solutions for various parameter ranges — one can just code them up and find out, and get answers in the forms of pretty graphs.
rgb
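
Since bart.m is only linked above, not reproduced, here is a minimal Python sketch of the coupled ODEs as this comment describes them. The exact equation forms, and the k1, k2 and H values, are inferences from the thread rather than the contents of the file:

```python
import numpy as np

# Sketch of the two coupled ODEs as described in the comment:
#   dC/dt  = -(C - C0)/tau1 + k1*H   -> C relaxes fast toward C0 + k1*tau1*H
#   dC0/dt = -C0/tau2 + k2*dT        -> the set-point C0 relaxes slowly
tau1, tau2 = 1.0, 100.0     # the factor-of-100 timescale separation cited
k1, k2, H = 1.0, 1.0, 0.1   # assumed values, not taken from bart.m

dt = 0.01
t = np.arange(0.0, 150.0, dt)
dT = 0.01 * t + 0.1 * np.sin(2 * np.pi * t / 11.0)  # 0.1 C/decade ramp + 11-yr wiggle

C = np.zeros_like(t)
C0 = np.zeros_like(t)
for i in range(1, t.size):  # simple forward-Euler integration
    C0[i] = C0[i - 1] + dt * (-C0[i - 1] / tau2 + k2 * dT[i - 1])
    C[i] = C[i - 1] + dt * (-(C[i - 1] - C0[i - 1]) / tau1 + k1 * H)

dCdt = np.gradient(C, dt)
# After the initial transient, dC/dt wiggles in step with Delta T, while
# k1*tau1*H contributes a near-constant offset to C itself.
print(f"corr(dC/dt, Delta T) = {np.corrcoef(dT[5000:], dCdt[5000:])[0, 1]:.3f}")
```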

Bart
May 27, 2012 9:38 pm

rgbatduke says:
May 27, 2012 at 7:18 pm
I do not even have to run the simulation to know what you will get out. y(1) will track y(2) plus the effect on y(1) due to k1*H, which will be approximately tau1*k1*H with a delay of about tau1 seconds.
y(2) will be approximately the integral of the input, fairly well for perhaps 1/3 of tau2.
So, let’s see, currently your value of tau1 shown is 1.0, your value of k1 is 1.0 (actually, it cannot be this large because a large fraction, which the IPCC claims is roughly 1/2, goes rapidly into the oceans), and your value of H is 0.1. So, the effect on the output y(1) from H is about 0.1 ppm.
For sizing things, you can do the following. Atmospheric levels have gone up maybe 100 ppm in 100 years. That is supposed to represent about 1/2 of human emissions in the time, so they should be about 200 ppm equivalent. If you assume H is a ramp, H = Hdot*t, then you should perhaps have 200 = 0.5*Hdot*100^2, so that Hdot= 0.04 ppm/year^2.
I will assume k1 = 0.5. Now, because you have selected tau1 to be 1 year, your routine should produce about 1*(100-1)*0.5*0.04 = 1.98 ppm in the output of y(1) after 100 years.
As I stated, the maximum value of tau1 is indicated by the max possible lag in CO2 with respect to temperature, and should be perhaps 3 years. If you set tau1 = 3 years, you should get about 3*(100-3)*0.5*0.04 = 5.8 ppm from the H input alone. The rest has to be made up by the Delta(t) input.
You can try a longer tau1, which will help amplify the input from H until, in the limit, you get a straight integral. But, you will find y(1) tracks y(2) less and less well, and the temperature will lead its effects on y(1) more and more.
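
A quick numeric check of the sizing above (the Hdot, k1 and tau1 figures are Bart's; only the loop is added):

```python
# Bart's sizing: ~200 ppm-equivalent emitted as a ramp H = Hdot*t over 100
# years, so 200 = 0.5*Hdot*100^2 -> Hdot = 0.04 ppm/yr^2, with k1 = 0.5.
Hdot, k1, t = 0.04, 0.5, 100.0
for tau1 in (1.0, 3.0):
    ppm = tau1 * (t - tau1) * k1 * Hdot  # first-order lag response to the ramp
    print(f"tau1 = {tau1:.0f} yr -> ~{ppm:.2f} ppm in y(1) from H alone")
# Reproduces the 1.98 ppm (tau1 = 1) and ~5.8 ppm (tau1 = 3) figures above.
```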

Bart
May 28, 2012 12:22 am

rgbatduke says:
May 27, 2012 at 7:18 pm
“Finally, you will still not have addressed the causality issue.”
I believe I did, quite thoroughly. Did you not read my comment?

richardscourtney
May 28, 2012 1:00 am

Bart:
At May 26, 2012 at 3:07 pm I again told you:

I have published 6 “systems” which each match the empirical data and 3 of them have the anthropogenic emission as the cause of the observed rise in atmospheric CO2 concentration.

At May 27, 2012 at 5:34 pm you say to me:

I would be willing to bet that you will find that, assuming your models are physically realizable, the better the agreement, the lower your anthropogenic contribution will be, at least until you get into the range of less than 10%.

But you wrote that in reply to my having written, as an expansion of my statement:
“I answer that all 6 of our models match each annual datum of the Mauna Loa data within the stated measurement errors.”
Saying:

It is a perfect fit to the data for each model (within the measurement errors of the Mauna Loa data when input with the annual anthropogenic emission and the annual temperature data).

I fail to understand how anything can be in “better agreement” than a perfect fit.
Richard

richardscourtney
May 28, 2012 1:08 am

rgbatduke:
At May 27, 2012 at 7:18 pm you report:

Just for people that want to play — I wrote a small matlab program that implements Bart’s coupled ODEs and found a set of parameters that at least qualitatively reproduces the kind of derivative tracking he describes. Grab it at:
http://www.phy.duke.edu/~rgb/bart.m
(It would probably run under octave for those with no matlab handy.)

etc.
Excellent! Thank you. Case closed.
Richard

rgbatduke
May 28, 2012 7:32 am

I do not even have to run the simulation to know what you will get out. y(1) will track y(2) plus the effect on y(1) due to k1*H, which will be approximately tau1*k1*H with a delay of about tau1 seconds.
I assume you mean “years”, not seconds. And yes, this is precisely what I mean. It is exactly what I was saying with my example involving fertilizer. In the steady state (the solution has a transient from nearly all initial states before settling down) the lag in H is completely unimportant in a generally smooth monotone increasing function, by the way. But you know that.
The point is — and we seem now to agree, do we not — that there exist ranges of parameter space that cannot easily be excluded on physical grounds (certainly not in an offhand way by pointing at the graphs alone!) where H is responsible for a variable percentage of the total CO_2 gain from any given post-transient initial state to a given future, all of which lead to curves in which dCO_2/dt — rescaled and centered — almost perfectly tracks Delta T. The observation of the latter in real-world data does not, therefore, suffice in and of itself to conclude that ACO_2 has negligible impact on atmospheric CO_2 gain above any given baseline.
I mean, you can play with the simulation yourself or not as you please, but I’ve empirically found values of the parameters that let me crank up the fractional gain due to ACO_2 to >>nearly<< whatever you like (at some point I’d have to work harder to rescale the curves automagically in order to be able to tell, because of my lazy all-on-one plot) while still preserving the tracking. I’ve also already done a crude “version 2” of the code where I let H(t) = H_0 exp(kappa t) to demonstrate that yes, it is pretty easy to make the variation of C_0 a weak function of temperature (it is independent of C after all), and still pick up the nearly perfect correlation between dCO_2/dt and Delta T.
The problem, I think, is the rescaling. Because you rescale and shift the scale of the derivative to fit on top of the delta T curve, you lose any information about relative fractions. Be that as it may, it is a simple fact that the assertion of the correlation alone is not sufficient, and your own model is ONE of the models that refute it. Richard argues — and I have no reason to doubt him — that this is one of MANY he has tested that work, and that all of them have the unfortunate multivariate problem that it is usually possible to find parametric phase volumes that reproduce the data within its uncertainties with ACO_2 a major, or minor fraction of the whole.
If you can come up with physical arguments that eliminate the parameter space regimes in question, good for you! I'd love it if you COULD prove your point. But that's WHY I'm going to be extra-damn tough on you until you do. By the time I accept your conclusion, I'd expect that everybody will have to accept your conclusion because you have a complete physical argument, backed by explicit pieces of evidence and a concrete physical model with actual, identified sources and sinks (which this is still far, far from being — this is a mathematical toy model good for demonstrating plausibility — which you have done — and little more).
Finally, as for the effect leading the cause problem — yes, I read your explanation. Read your own explanation again yourself. Read my remarks again. This is not a periodic system of equations, there are no “phase shifts of \pi/2” to account for. You’re on the wrong side of complex. Even with a periodic driving function like the one in my matlab code, the phase shift of dCO_2/dt relative to a partially periodic Delta T(t) driver must strictly be a lag, never ever a lead.
I will help you here. There is no plausible explanation for the effect leading the cause in a two component model with otherwise smooth inputs. No glib mathematical explanation will suffice, because as you yourself have noted, our universal experience in all of science is of causes preceding effects (and yes, I do teach physics and am aware of the fact that this is an entropic illusion and so on — none of it relevant here). If you show your graph to 100 physicists, 90 or more of them are going to circle the three or four points on it where effect egregiously precedes cause and instantly — and correctly — reject your assertion of sole cause. The others, when the problem is pointed out to them, will agree. That would be 100%. I cannot imagine you convincing one single person that you are correct in an unqualified manner while this feature of the data remains unexplained, and in my opinion the only possible explanation is that your argument is not correct, that there is at least one more degree of freedom you are ignoring with an external signal on it.
I’ve tried to point out how important a clue that this is — perhaps it is THE clue (since your real results ARE an interesting analysis of real measured data!) that would let us all unravel the mess. Let me put it to you as a specific question:
Is there a third parametric, time-dependent driver coupled to both dCO_2/dt and to Delta T in such a way that the latter two are strongly covariant but either one can lead or lag the other in time?
It might take four — something (perhaps at the level of random noise or chaotic noise, perhaps not) to explain why one or the other leads — but without at least three I don’t offhand see any way for atmospheric CO_2 to accelerate ahead of the forcing that supposedly produces the acceleration.
First my car starts to speed up — then I hit the accelerator? I don’t think so.
rgb
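
A toy illustration of the question rgb poses, in which a third signal feeds both observables through different delays so that either can appear to lead, might look like the following. Everything here (the random-walk driver, the lag values) is hypothetical:

```python
import numpy as np

# Toy common-driver model: a third signal z(t) feeds both Delta T and
# dCO2/dt with independent delays, so either series can appear to lead.
# Purely illustrative; no physical mechanism is being proposed.
rng = np.random.default_rng(1)
n = 2000
z = np.cumsum(rng.standard_normal(n))  # slowly wandering common driver

lag_T, lag_dC = 5, 12                  # delays in samples; swap to flip lead/lag
dT = np.roll(z, lag_T)
dCdt = np.roll(z, lag_dC)

# Locate the cross-correlation peak to see which series "leads".
lags = np.arange(-50, 51)
xc = [np.corrcoef(np.roll(dT, -k)[50:-50], dCdt[50:-50])[0, 1] for k in lags]
print("peak at lag", lags[int(np.argmax(xc))], "samples; expect lag_T - lag_dC = -7")
```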

Terry Oldberg
May 28, 2012 8:54 am

rgbatduke:
Speaking of entropy, it is a statistical concept that Bart has abolished from his frame of reference through failure to reference his model to a statistical population. In Bart’s entropy-free frame of reference, there is no bar to the existence of an effect that precedes its cause.

Bart
May 28, 2012 11:00 am

richardscourtney says:
May 28, 2012 at 1:00 am
You said “within the measurement errors of the Mauna Loa data”. That suggests you mean that it threads the bumps and wiggles, which you have arbitrarily labeled “measurement errors”. That is not an unqualified “perfect”. In fact, it is not perfect at all.
I know what you have to do to make it match dCO2/dt = alpha*(T-T0). I have used logic and a deep understanding of how such systems work to lay it out for you. You have to make the contribution from H negligible and the contribution from temperature overriding. Just run the exercise. You’ll be glad that you did.
rgbatduke says:
May 28, 2012 at 7:32 am
“…the lag in H is completely unimportant in a generally smooth monotone increasing function, by the way.”
Taking account of lags is absolutely key to replicating a time series which has to settle within a finite time frame.
“…where H is responsible for a variable percentage of the total CO_2 gain from any given post-transient initial state to a given future, all of which lead to curves in which dCO_2/dt — rescaled and centered — almost perfectly tracks Delta T.”
No, we do not agree on that at all. Matching the bumps and wiggles requires a particularly narrow range of contribution from the temperature dependent term with a particular level of smoothing. The level of smoothing required and the limits on how much H you can pump in then limit the maximum contribution from H to something negligible, I claim about 4-6%.
“I let H(t) = H_0 exp(kappa t) to demonstrate that…”
H is known and bounded. You cannot just specify it arbitrarily.
“Be that as it may, it is a simple fact that the assertion of the correlation alone is not sufficient, and your own model is ONE of the models that refute it”
It is. When you have taken account of all of the above, you will see it.
“This is not a periodic system of equations, there are no “phase shifts of \pi/2″ to account for”
Don’t go there, Doc. This is EE 301. I’ve explained it as best I can in this venue. Please have some professional courtesy and assume that I might know what I am talking about. If you do not, I am going to get mean.
Additionally, there is another rather important fact which I have left out: in the woodfortrees plot, I have averaged values by 24 months to average out the yearly cycling and reduce the noisy variation. The woodfortrees site automatically centers the average to get a zero phase response. That means that each point is an average of the 12 months before, and the 12 months after. Capiche?
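
For reference, a zero-phase centered average of the kind described, with each point averaging the months on either side, can be sketched as below; the input is a stand-in annual cycle, not woodfortrees data:

```python
import numpy as np

# Centered 24-month moving average: each point averages ~12 months before
# and ~12 months after, so the smoothing adds no phase lag (what the
# centered woodfortrees "mean" does, per the description above).
def centered_mean(x, window=24):
    return np.convolve(x, np.ones(window) / window, mode="same")

monthly = np.sin(2 * np.pi * np.arange(240) / 12.0)  # stand-in: pure annual cycle
smoothed = centered_mean(monthly)
print(f"max |smoothed| mid-series: {np.abs(smoothed[24:-24]).max():.3f}")
# The annual cycle averages out to ~0 without shifting anything in time.
```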

Bart
May 28, 2012 12:15 pm

rgbatduke says:
May 28, 2012 at 7:32 am
Let me give you a hand. Here is a series of simulations which do exactly what I said they would do. Start at this plot and hit the “Next” button to step through the next 6 of them. I put in tau2 = infinity, tau1 = infinity, 10 years, 3 years, and 1 year. You will see that the human input becomes progressively more negligible, and the fine detail matches better and better.

Bart
May 28, 2012 1:07 pm

Here is a new series of sim plots in which I corrected the 11 year cycle, made the temperature input follow a ramp, and plotted the CO2 derivative.
These show why I say the real world observed phase relationship bounds the time constant tau1 to a relatively small value, which constrains the human induced atmospheric concentration to be negligible.

rgbatduke
May 28, 2012 2:51 pm

Speaking of entropy, it is a statistical concept that Bart has abolished from his frame of reference through failure to reference his model to a statistical population. In Bart’s entropy-free frame of reference, there is no bar to the existence of an effect that precedes its cause.
Yeah, but I can cope with that because I’m perfectly happy to transform e.g. H(t), k1(t), k2(t), tau1(t), tau2(t) into functions with at least parametric noise mentally (or in the model) after the fact. As I said, he’s written a toy model, and I just love toy models. The toy model he proposes confounds his own conclusions by actually demonstrating that one CAN find parametric ranges with ACO_2 dominant on the gain and yet with a rescalable lagged correlation between dCO_2/dt(t) (effect) and Delta T(t) (non-H part of the driver) even for this model, and then there are more models…
Also, the toy itself does exhibit the right direction for causality — basically he chose the signs of the damping terms correctly so that as T varies, C_0 exponentially relaxes to the new equilibrium point tau2*k2*Delta T, but with his clever choice that the relaxation time here is (much) longer than the timescale of secular variation of Delta T. No real problem with this, although neither is there any physical model or direct evidence presented from which k2 and tau2 can be set. But in the end, one can nearly ignore the -y(2)/tau2 term and set the derivative of C_0 to be k2*Delta T. Why? Why not? It’s a toy.
The first ODE then drives C towards a fixed point at:
C(t) = C_0(t) + k1*tau1*H
(where C_0(t) is now for all intents and purposes a fixed input function, since C_0 does not depend on C). Yes, C is constantly relaxing TOWARDS this — again satisfying causality just fine.
It does this quickly (the way he sets the parameters, to suit his purpose of showing that his model CAN produce the desired correlation), and at any rate needs to move it along faster than the C_0 relaxation, so C_0 remains tightly responsive to Delta T. However, it is also quite clear that if we replace H with a smooth monotone increasing H(t), C(t) is perfectly happy — for certain values of the parameters — to get most of its value from C_0(t), most of its value from k1*tau1*H(t), or anything in between. In all cases, as long as H(t) is smooth and boring (and hence contributes nothing but background), the wiggles in dC/dt will match wiggles in C_0, which will match wiggles in Delta T.
rgb

Bart
May 28, 2012 4:10 pm

rgbatduke says:
May 28, 2012 at 2:51 pm
“The toy model he proposes confounds his own conclusions by actually demonstrating that one CAN find parametric ranges with ACO_2 dominant on the gain and yet with a rescalable lagged correlation between dCO_2/dt(t) (effect) and Delta T(t) (non-H part of the driver) even for this model, and then there are more models…”
I just wasted a considerable part of my Holiday morning showing you were dead wrong about this, and apparently, you just toss it off without even reading.
Dead. Wrong. Demonstrated. Proved.
What a stupid thing for you to say. Well, same to you, buddy. Stay clueless. Over and out.
