Guest Post by Willis Eschenbach
[UPDATE: Steven Mosher pointed out that I have calculated the transient climate response (TCR) rather than the equilibrium climate sensitivity (ECS). For the last half century, the ECS has been about 1.3 times the TCR (see my comment below for the derivation of this value). I have changed the values in the text, with strikeouts indicating the changes, and updated the graphic. My thanks to Steven for the heads up. Additionally, several people pointed out a math error, which I’ve also corrected, and which led to the results being about 20% lower than they should have been. Kudos to them as well for their attention to the details.]
In a couple of previous posts, Zero Point Three Times the Forcing and Life is Like a Black Box of Chocolates, I’ve shown that regarding global temperature projections, two of the climate models used by the IPCC (the CCSM3 and GISS models) are functionally equivalent to the same simple equation, with slightly different parameters. The kind of analysis I did treats the climate model as a “black box”, where all we know are the inputs (forcings) and the outputs (global mean surface temperatures), and we try to infer what the black box is doing. “Functionally equivalent” in this context means that the contents of the black box representing the model could be replaced by an equation which gives the same results as the climate model itself. In other words, they perform the same function (converting forcings to temperatures) in a different way but they get the same answers, so they are functionally equivalent.
The equation I used has only two parameters. One is the time constant “tau”, which allows for the fact that the world heats and cools slowly rather than instantaneously. The other parameter is the climate sensitivity itself, lambda.
However, although I’ve shown that two of the climate models are functionally equivalent to the same simple equation, until now I’ve not been able to show that this is true of the climate models in general. I stumbled across the data necessary to do that while researching the recent Otto et al. paper, “Energy budget constraints on climate response”, available here (registration required). Anthony has a discussion of the Otto paper here, and I’ll return to some curious findings about the Otto paper in a future post.
Figure 1. A figure from Forster 2013 showing the forcings and the resulting global mean surface air temperatures from nineteen climate models used by the IPCC. ORIGINAL CAPTION. The globally averaged surface temperature change since preindustrial times (top) and computed net forcing (bottom). Thin lines are individual model results averaged over their available ensemble members and thick lines represent the multi-model mean. The historical-nonGHG scenario is computed as a residual and approximates the role of aerosols (see Section 2).
In the Otto paper they say they got their forcings from the 2013 paper Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models by Forster et al. (CMIP5 is the latest Coupled Model Intercomparison Project.) Figure 1 shows the Forster 2013 representation of the historical forcings used by the nineteen models studied in Forster 2013, along with the models’ hindcast temperatures, which at least notionally resemble the historical global temperature record.
Ah, sez I when I saw that graph, just what I’ve been looking for to complete my analysis of the models.
So I digitized the data, because trying to get the underlying numbers from the authors of a scientific paper is a long and troublesome process, and may not be successful for entirely valid reasons. Digitization these days can be amazingly accurate if you take your time. Figure 2 shows a screen shot of part of the process:
Figure 2. Digitizing the Forster data from their graphic. The red dots are placed by hand, and they are the annual values. As you can see, the process is more accurate than the width of the line … see the upper part of Figure 1 for the actual line width. I use “GraphClick” software on my Mac, assuredly there is a PC equivalent.
Once I had the data, it was a simple process to determine the coefficients of the equation. Figure 3 shows the result:
Figure 3. The blue line shows the average hindcast temperature from the nineteen models in the Forster data. The red line is the result of running the equation shown in the graph, using the Forster average forcing as the input.
As you can see, there is an excellent fit between the results of the simple equation and the average temperature hindcast by the nineteen models. The results of this analysis are very similar to my results from the individual models, CCSM3 and GISS. For CCSM3, the time constant was 3.1 years, with a sensitivity of ~~1.2~~ 2.0°C per doubling. The GISS model gave a time constant of 2.6 years, with the same sensitivity, ~~1.2~~ 2.0°C per doubling. So the model average results show about the same lag (2.6 to 3.1 years), and the sensitivities are in the same range (~~1.2~~ 2.0°C/doubling vs ~~1.6~~ 2.4°C/doubling) as the results for the individual models. I note that these low climate sensitivities are similar to the results of the Otto study, which, as I said above, I’ll discuss in a subsequent post.
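To make the black-box idea concrete, here is a minimal sketch of the two-parameter emulator in Python. The recursion is the discrete one-box form worked out in the comments below; the forcing series, lambda, and tau used here are illustrative stand-ins, not the fitted values.

```python
import numpy as np

def emulate(forcing, lam, tau, dt=1.0):
    """One-box emulator: T(k) = F(k)*alpha*lam + (1 - alpha)*T(k-1),
    with alpha = 1 - exp(-dt/tau).  forcing is cumulative forcing in W/m2,
    lam is the sensitivity in deg C per W/m2, tau is the time constant in years."""
    alpha = 1.0 - np.exp(-dt / tau)
    T = np.zeros(len(forcing))
    for k in range(1, len(forcing)):
        T[k] = forcing[k] * alpha * lam + (1.0 - alpha) * T[k - 1]
    return T

# Illustrative check: under a constant forcing the emulator settles at F * lam.
F = np.full(80, 3.7)                 # a fixed 3.7 W/m2 step (CO2-doubling-sized)
T = emulate(F, lam=0.65, tau=3.0)    # hypothetical lambda and tau, not the fitted values
```

In practice the forcing would be the digitized Forster series, and lambda and tau would be chosen by least-squares fitting of the output against the model-mean temperatures, as in Figure 3.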
So what can we conclude from all of this?
1. The models themselves show a lower climate sensitivity (~~1.2~~ 2.0°C to ~~1.6~~ 2.4°C per doubling of CO2) than the canonical values given by the IPCC (2°C to 4.5°C/doubling).
2. The time constant tau, representing the lag time in the models, is fairly short, on the order of three years or so.
3. Despite the models’ unbelievable complexity, with hundreds of thousands of lines of code, the global temperature outputs of the models are functionally equivalent to a simple lagged linear transformation of the inputs.
4. This analysis does NOT include the heat which is going into the ocean. In part this is because we only have information for the last 50 years or so, so anything earlier would just be a guess. More importantly, the amount of energy going into the ocean has averaged only about 0.25 W/m2 over the last fifty years. It is fairly constant on a decadal basis, slowly rising from zero in 1950 to about half a watt/m2 today. So leaving it out makes little practical difference, and putting it in would require us to make up data for the pre-1950 period. Finally, the analysis does very, very well without it …
5. These results are the sensitivity of the models with respect to their own outputs, not the sensitivity of the real earth. It is their internal sensitivity.
Does this mean the models are useless? No. But it does indicate that they are pretty worthless for calculating the global average temperature. Since all the millions of calculations that they are doing are functionally equivalent to a simple lagged linear transformation of the inputs, it is very difficult to believe that they will ever show any skill in either hindcasting or forecasting the global climate.
Finally, let me reiterate that I think that this current climate paradigm, that the global temperature is a linear function of the forcings and they are related by the climate sensitivity, is completely incorrect. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of this issue.
Regards to everyone, more to come,
w.
DATA AND CALCULATIONS: The digitized Forster data and calculations are available here as an Excel spreadsheet.
From Willis Eschenbach on May 22, 2013 at 7:13 pm:
No it doesn’t.
Your “full equation” with bold added:
T(n+1) [degrees] = T(n) [degrees] + λ [degrees/W m-2] * ∆F(n+1) [W m-2/year] * (1-exp(-∆t [years] / τ [years])) * ∆T [years] + ΔT(n) [degrees/year] * exp( -∆T [years]/ τ [years] )
ΔF is change in forcing. From your Figure 1, “Adjusted Forcing” was “W sq.m”, obviously a slash is missing.
Why the hell would you be throwing in time for your change in forcing value? Forcing is a light bulb, on/off, gives Watts per square meter, which is Joules per second per square meter.
Now go back to your original equation, as found in “Black box of Chocolates”:
T(n+1) = T(n)+λ ∆F(n+1) / τ + ΔT(n) exp( -1 / τ )
What was the problem? Exponent had to be dimensionless, and the “1” was standing in for a Δt of one year.
So that was the problem! Where you said “1/τ” it should be “Δt/τ”, where Δt = 1 yr.
So try that for the middle term. Make it λ*ΔF(n+1)*Δt/τ, see how that works.
PS: If you still decide you need to so drastically change that equation, then please change it in “Black Box of Chocolates” as well, for consistency’s sake.
KDK,
“Make it λ*ΔF(n+1)*Δt/τ, see how that works.”
I agree with that. When it’s sorted out, it says that
dT/dt = λ*ES(dF/dt,τ), where ES means exponentially smoothed dF/dt, characteristic time period τ. And that seems like a reasonable model, and λ is acting as a sensitivity. I’m still not convinced that it can be equated with ECS or TCR. That needs to be shown.
The actual formula should be:
T(n+1) = T(n)+λ ∆F(n+1) * (1-exp(-1/tau)) + ΔT(n) exp( -∆T / τ )
Sorry Willis, not sure this is correct either. I’m not trying to be picky but I assume you want to get this right.
Dimensions are good (as long as you explicitly put ∆T in both expressions) but if you recognise that T(n+1) – T(n) = ΔT(n) and rearrange, you get (1-exp(-∆T/tau)) on both sides and it disappears!
You are left with ΔT = λ ∆F and the exponential behaviour has disappeared.
I think my original suggestion is the most likely: the scaling factor should be ∆t / tau, but I can’t be sure because you have not given any indication of your starting point.
I suggest you always explicitly write ∆t, even if ∆t = 1 year, if that is what the formula is. Obviously optimise the calculation in the spreadsheet cells, but make sure when you write a mathematical formula that the terms are correct.
Paul K says:
http://wattsupwiththat.com/2012/05/31/a-longer-look-at-climate-sensitivity/#comment-1000758
If you want to apply a single formula for temperature, which DOES represent the solution to the linear feedback equation, you need, using your definition of lambda:-
T(k) = Fk*α *λ + (1-α) * T(k-1)
where Fk is the cumulative forcing at the kth time step,
α = 1 – exp(-DELT/τ)
===
That brings us back to Schwartz eqn 6, which was mentioned above, and then the tau is spurious, as I previously pointed out.
Maybe you need to say how you got to that formula so we can find out what the correct form is rather than just guessing.
Joe Born says:
May 22, 2013 at 8:05 pm
I was thinking of the ∆F term as the change in forcing from one year to the next, so it would have units of W/m2 per year … no? If not, then you are correct.
w.
Greg G,
” T(n+1) – T(n) = ΔT(n)” As used in the spreadsheet, T(n) – T(n-1) = ΔT(n), so they aren’t the same.
I think Willis’ new version, with typos and units fixed, is OK. This is how it works. We have:
ΔT(n+1) = λ ∆F(n+1) * (1-exp( -∆t / τ )) + ΔT(n) exp( -∆t / τ )
Solving:
ΔT(n+1)=λ *(1-a)*(∆F(n+1)+a*∆F(n)+a^2*∆F(n-1)+…) where a= exp( -∆t / τ )
This is exactly the formula ΔT(n+1)=λ *ES(∆F(n+1),a) where ES is exponential smoothing, a as factor
or ΔT=λ *ES(∆F,a). The (1-a) is what is needed to normalize the sum (to have area 1).
In the limit of small ∆t, 1-a = ∆t / τ, which is where the original 1 / τ came from.
I think none of this ensures that λ is numerically equivalent to ECS or TCR.
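Nick’s expansion above can be checked numerically; a quick sketch (illustrative lambda and tau, random ∆F) confirming that the recursion and the explicit exponentially smoothed sum agree:

```python
import numpy as np

def recursion(dF, lam, tau):
    # dT(n+1) = lam*dF(n+1)*(1 - a) + a*dT(n), with a = exp(-dt/tau), dt = 1 year
    a = np.exp(-1.0 / tau)
    dT = np.zeros(len(dF))
    for n in range(1, len(dF)):
        dT[n] = lam * dF[n] * (1.0 - a) + a * dT[n - 1]
    return dT

def smoothed(dF, lam, tau):
    # explicit sum: dT(n) = lam*(1-a)*(dF(n) + a*dF(n-1) + a^2*dF(n-2) + ...)
    a = np.exp(-1.0 / tau)
    out = np.zeros(len(dF))
    for n in range(1, len(dF)):
        weights = a ** np.arange(n)            # 1, a, a^2, ... back to dF[1]
        out[n] = lam * (1.0 - a) * np.sum(weights * dF[n:0:-1])
    return out

rng = np.random.default_rng(0)
dF = rng.normal(size=50)                       # made-up forcing increments
assert np.allclose(recursion(dF, 0.65, 2.9), smoothed(dF, 0.65, 2.9))
```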
“T(n) – T(n-1) = ΔT(n), so they aren’t the same.” my bad, thanks.
“Solving: ΔT(n+1)=λ *(1-a)*(∆F(n+1)+a*∆F(n)+a^2*∆F(n-1)+…) where a= exp( -∆t / τ )”
Ah, so now we get to see where it came from , good work.
“In the limit of small ∆t, 1-a = ∆t / τ, which is where the original 1 / τ came from.”
And knowing where it comes from shows us the correct term is ∆t / tau which was my original suggestion.
So finally Willis’ original figures were correct.
“In the limit of small ∆t, 1-a = ∆t / τ, which is where the original 1 / τ came from.”
That explains the original “1/tau”, but since ∆t is comparable to tau, it should be used in the full form.
Nick” : I think none of this ensures that λ is numerically equivalent to ECS or TCR.”
T is temp of a constant heat capacity so I go with Mosh’s argument that this is TCS, but since F and dF are the total forcing, this CS is in the sense of its sensitivity to total forcing, not CO2x2 sensitivity.
Jeez,
What a pile of confusion!
Let me start again so that everyone can see where everything comes from, AND what the underlying assumptions are.
Firstly, a statement of energy balance relative to a pseudo steady state condition at time t = 0:-
Change in Net flux at top of atmosphere = change in Forcing – change in outgoing radiation (shortwave plus longwave).
Assume that (a) the ocean has a constant heat capacity, C, in units of watt-years/deg C/m2 and that (b) all of the net energy gained must end up in the oceans.
We write the equation as:
C dT/dt = F(t) – T(t)/λ (1)
Equation (1) is the most common form of the “linear feedback equation”.
Note that T is the total change in surface temperature from time t = 0.
λ is the climate sensitivity expressed in units of deg C/(W/m^2)
F(t) is the change in cumulative forcing from time t = 0 to time t
Now consider a fixed step forcing F applied at time t = 0, that is, F(t) = F constant. Equation (1) can be solved analytically using an Integrating Factor to yield
T(t) = F*(1-exp(-t/(Cλ))) /λ (2)
For convenience only, we can make the substitution τ = Cλ, and Eq (2) then becomes
T(t) = F*(1-exp(-t/τ)) /λ (3)
OK, then with the above solution available for a FIXED step forcing, we can apply either a convolution integral or a superposition solution to develop the solution for the more general case where F(t) is varying with time.
The superposition solution is given by
T(k) = Fk*α *λ + (1-α) * T(k-1) (4)
where Fk is the cumulative forcing applied at the start of the kth time step,
and α = 1 – exp(-DELT/τ)
where DELT = the timestep length
Equation (4) yields an accurate numerical solution to Equation (1) when F(t) is arbitrarily varying in time. To avoid a half-timestep displacement, the actual cumulative forcing data at time t=k should be replaced by mid-timestep estimates, Fk = (F(t=k+1) + F(t=k))/2, but that is a refinement.
This post is already too long, so I’ll show in a second post how to get from Equation (4) to Willis’s solution, and the implications for ECS.
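As a check on Equation (4): for a fixed step forcing it should land exactly on the analytic solution of Equation (3) at every timestep (using the *λ form given in the erratum further down). A sketch with illustrative parameter values:

```python
import math

def superposition(F, lam, tau, nsteps, dt=1.0):
    # Eq (4): T(k) = Fk*alpha*lam + (1 - alpha)*T(k-1), with fixed forcing Fk = F
    alpha = 1.0 - math.exp(-dt / tau)
    T = 0.0
    history = []
    for _ in range(nsteps):
        T = F * alpha * lam + (1.0 - alpha) * T
        history.append(T)
    return history

def analytic(F, lam, tau, t):
    # Eq (3), corrected form: T(t) = F*(1 - exp(-t/tau))*lam
    return F * (1.0 - math.exp(-t / tau)) * lam

# Illustrative values (not fitted): the recursion reproduces the analytic curve
F, lam, tau = 3.7, 0.65, 2.9
T = superposition(F, lam, tau, nsteps=30)
assert all(abs(T[k] - analytic(F, lam, tau, k + 1)) < 1e-12 for k in range(30))
```

The agreement is exact (to floating-point precision) because the recursion is the closed-form solution of the linear equation sampled at the timestep boundaries, not a finite-difference approximation to it.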
Further to my last post, if we expand Eq (4) by swapping out alpha, we obtain
T(k) = Fk* α*λ + exp(-DELT/τ) * T(k-1) (5)
Rewriting for the next timestep, we obtain:-
T(k+1) = Fk+1 * α*λ + exp(-DELT/τ) * T(k) (6)
Subtracting Eq (5) from Eq (6), we obtain:
ΔT(k+1) = ΔFk+1 * α*λ + exp(-DELT/τ) * ΔT(k) (7)
Where the Δ values represent incremental values of T and F between timesteps.
Finally, we set the timestep value equal to 1 (year), and expand the first alpha value to obtain:-
ΔT(k+1) = ΔFk+1 * (1 – exp(-1/τ))*λ + exp(-1/τ) * ΔT(k) Eq (8)
Break open the LHS and voila, we obtain:
T(k+1) = T(k) + ΔFk+1 * (1 – exp(-1/τ))*λ + exp(-1/τ) * ΔT(k) Eq (9)
I notice that Willis has now corrected to this form. (Thanks, Willis.) As one brief diversion, if the “alpha” term is expanded using a Taylor series we obtain
α = (1 – exp(-1/τ)) = 1 – (1 – 1/τ + (1/τ)^2/2 – … )
This is roughly equal to (1/τ) if the higher-order terms are dropped, and this would give the formula which Willis first reported. This approximation is not recommended. It introduces the “dimension” error which people picked up on, but more importantly, while the approximation is not too bad for large values of τ, it introduces a substantial error for small values (like 2.9).
I’ll do one further post to try to untangle the issue of ECS vs Transient Climate Response vs transient climate sensitivity, if I can.
ERRATUM
In trying to dispel some confusion, I have added to it by incorrectly writing down Equations (2) and (3).
It should read as follows:- QUOTE
T(t) = F*(1-exp(-t/(Cλ))) *λ (2)
For convenience only, we can make the substitution τ = Cλ, and Eq (2) then becomes
T(t) = F*(1-exp(-t/τ)) *λ (3)
ENDQUOTE
My apologies for any puzzlement caused.
Willis,
Steven Mosher differentiated between EFFECTIVE Climate Sensitivity and Transient Climate Response, and this seems to have led to a lot of confusion, including inducing you to modify your calculation. Let me have a go at this subject.
Let’s assume that you use the values of τ and λ derived from the “corrected” form of numerical solution, then we can return to the original linear feedback equation and its assumptions to work out what can and can’t be deduced from the values.
The critical assumptions are that the heat capacity C is invariant and that the temperature-dependent radiative feedback varies linearly with temperature. WITH these assumptions, it is perfectly reasonable to say that λ multiplied by a forcing of 3.7 W/m2 (which corresponds to a doubling of CO2) should yield an estimate of the Equilibrium Climate Sensitivity. NO CORRECTION IS REQUIRED to this calculation, but it is imperative to state the assumptions.
In fact we can see this directly from Equation (3) in my post above. For a fixed forcing F the temperature goes to F* λ as time goes to infinity. Alternatively, we see from Equation 1 that as the net flux (imbalance) goes to zero, then we have 0 = F – T/ λ , which also tells us that T-> F* λ as the system approaches equilibrium. So F* λ is indeed an estimate of the Equilibrium Climate Sensitivity if F is set equal to 3.7 W/m2.
The problem that I think Mosh is highlighting is a different one. In most of the GCMs, there is a curvature seen in the net flux response to temperature change. This means that the assumption that λ is a constant is not supported by the GCMs. In general, values of lambda estimated from plots of net flux vs temperature tend to increase with time and temperature. Opinions differ about whether this is a real-world phenomenon or an artifact of the GCMs, but it leaves open the possibility that the EFFECTIVE climate sensitivity estimated from “short-term” observational datasets with relatively small temperature change (a sort of transient estimate of climate sensitivity based on linear extrapolation of netflux-temperature behavior to the zero net flux line) may be lower than the true value of Equilibrium Climate Sensitivity.
One of the things you have confirmed with this work is that this effect is not small. (I already knew that for the several GCMs tested – you have extended that observation to the set of results.) The ECS estimates from the GCMs have a median value around 3.2 deg C. Using the same inputs and outputs over the instrumental period, you have deduced a value about half of that.
For those of you interested in how the numerical (superposition) solution is derived, check out Appendix A in this post
http://rankexploits.com/musings/2011/noisy-blue-ocean-blue-suede-shoes-and-agw-attribution/
I wouldn’t necessarily call this functionally equivalent. I tested Willis’s model on the “commitment” scenario, where delta F is zero from 2006 to 2100. A paltry 0.00002C came out of the pipeline post-2005, whereas the CMIP3 MMM is about 0.3C. When I specified delta F as 0.02W/m^2 per year, the calculated temperature increase was about 0.10C/dec in the 2nd half of this century. That didn’t seem unreasonable, but my guess is that it would be low compared to the models.
Paul_K:
Just in case you’re wondering whether anyone is listening, I’ll raise my hand. My initial (and current) opinion is consistent with yours that the equilibrium T equals lambda times steady-state F under the linear-relationship assumptions.
“””””……Greg Goodman says:
May 23, 2013 at 12:04 am
The actual formula should be:
T(n+1) = T(n)+λ ∆F(n+1) * (1-exp(-1/tau)) + ΔT(n) exp( -∆T / τ )……”””””
No no no ! A thousand times NO !
(T) is Temperature; (t) is time; tau is time
So exp (-delta t /tau )
George, how about you read more than the first two lines before sounding off? Note the line directly AFTER where you cut me off to criticise:
===
Greg Goodman says:
May 23, 2013 at 12:04 am
The actual formula should be:
T(n+1) = T(n)+λ ∆F(n+1) * (1-exp(-1/tau)) + ΔT(n) exp( -∆T / τ )
Sorry Willis, not sure this is correct either. I’m not trying to be picky but I assume you want to get this right.
====
I was pointing out to Willis that the revised form he had just posted was still wrong.
Anyway, that is ancient history at this stage, so why bother bringing it back in now? The discussion’s moved on.
Seems like you and KDK are out to snipe.
How about we just try to get the maths right first?
Paul_K says:
For those of you interested in how the numerical (superposition) solution is derived, check out Appendix A in this post
http://rankexploits.com/musings/2011/noisy-blue-ocean-blue-suede-shoes-and-agw-attribution/
===
An excellent couple of articles there Paul. In particular, your SURE ties in with a couple of things I’ve found in analysing the data. I’ll ask Lucia to reopen comments rather than mixing it in here.
Very solid work.
Paul_K says:
May 23, 2013 at 9:52 am
First, Paul, my thanks as always for your very detailed and supportive posts.
Regarding the claim above about ECS and TCR, let me see if I can clarify it. The key issue is that the lambdas aren’t the same. From Mosh’s graphic, the difference between the ECS and the TCR is the difference in lambda. In the TCR, lambda is T/F.
But in the ECS, lambda is T/(F-Q), where Q is the heat going into/out of the ocean. So you are right that as time goes to infinity, the final value is F*λ … but which lambda? All I have is the TCR lambda.
I solved the problem by noting that
ECS/TCR = ∆F/(∆F-∆Q)
In the Otto paper there are calculated ECS and TCR values for four decades. One decade is suspiciously high from the uncorrected shift in the Levitus data due to the introduction of the Argo floats. It has an ECS/TCR value of about 1.5. The other three decades have an ECS/TCR value of about 1.3, so that’s what I used to convert my TCR values (from my TCR lambda) into ECS values. It could be more, might be 1.4, but in any case it is remarkably stable over the last forty years. That should be no surprise, because the ocean is a huge place, and lambda is a function of the rate at which the deep layers of the ocean exchange heat with the surface … I don’t see that changing a lot.
Isaac Held, in the paper Mosh cited, got a TCS of 2.3 and an ECS of 3.3, a ratio of about 1.4. However, he was using a four-year relaxation time. So in answer to your question, the ECS is about 1.3 to 1.5 times the TCS. I suspect I’ll use 1.4 in the future.
w.
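Willis’s conversion can be written out explicitly: with his two lambda definitions above (TCR lambda = ∆T/∆F, ECS lambda = ∆T/(∆F−∆Q)), the ∆T cancels from the ratio, leaving only the forcing and ocean-uptake changes. A sketch; the numbers below are purely illustrative, not values from the Otto paper:

```python
def ecs_over_tcr(dF, dQ):
    """Ratio of the ECS lambda (dT/(dF - dQ)) to the TCR lambda (dT/dF).
    dT cancels, leaving dF/(dF - dQ)."""
    return dF / (dF - dQ)

# Hypothetical example: a decadal forcing change of ~2 W/m2 with ~0.5 W/m2
# going into the ocean gives a ratio in the 1.3 neighbourhood.
ratio = ecs_over_tcr(2.0, 0.5)
```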
Joe Born, Greg Goodman,
Thanks for your comments. Good to know someone read it!
Hi Willis,
Thanks for your response.
I think there is still some confusion here though, and it is arising from definitional differences.
I will start by repeating that you should not correct your value of lambda, at least not to account for the difference between TCR and ECS. The Equilibrium Climate Sensitivity from your scheme is unambiguously your derived value of lambda times the forcing associated with a doubling of CO2.
Let’s try to distinguish between four distinct definitions which I think is where the confusion is arising.
A) Equilibrium Climate Sensitivity. Units deg C. This is the temperature achieved after an infinite time following a forcing corresponding to a doubling of CO2. It can also be defined as the temperature achieved when the net flux goes to zero following a forcing corresponding to a doubling of CO2. With your definitions, this is just 3.7 times lambda.
B) Effective Climate Sensitivity. Units deg C/(W/m^2). This is an estimate of climate sensitivity per unit of forcing made when you only have transient information available. Suppose that you run a numerical experiment where you impose a fixed forcing ∆F corresponding to a doubling of CO2. At some arbitrary time, t, you make an observation of the change in temperature, T, and the residual net flux, ∆Q. Theory says that the change in net flux since t=0 was (∆F-∆Q). The Effective Climate Sensitivity is then estimated by
Effective CS = T/(∆F-∆Q) = lambdadash, say
Note that this is a method for estimating lambda, using your definitions. The Equilibrium Climate Sensitivity estimated from this calculation is then 3.7 * lambdadash.
C) Transient Climate Response. Units deg C. This is the transient temperature observed in climate models at the point in time when CO2 reaches a doubling during the 1% per year CO2 growth experiment (i.e. after about 70 years). In any given model, it is typically about 70% of the final ECS value for that model. THIS VALUE HAS NOTHING TO DO WITH WHAT YOU ARE DOING HERE, and there is no justification for using the ratio of ECS/TCR to correct your estimate of lambda.
To summarise, your uncorrected value of lambda when multiplied by the forcing corresponding to a doubling of CO2 yields a perfectly valid estimate of Equilibrium Climate Sensitivity under the assumption of a constant linear climate feedback. Hope this helps, seriously. Paul
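Paul’s point (C) can be illustrated directly: run the one-box recursion under the 1%/yr experiment’s forcing ramp (forcing roughly linear in time, reaching the doubling value of ~3.7 W/m2 at year 70). With a time constant as short as the ~3 years found here, the model’s own transient response at year 70 comes out around 96% of its equilibrium response, not the ~70% typical of GCMs, which is why an ECS/TCR correction is out of place. A sketch with an illustrative lambda:

```python
import math

def ramp_tcr(lam, tau, f2x=3.7, years=70):
    # one-box recursion under a linear forcing ramp reaching f2x at `years`
    alpha = 1.0 - math.exp(-1.0 / tau)
    T = 0.0
    for k in range(1, years + 1):
        T = (f2x * k / years) * alpha * lam + (1.0 - alpha) * T
    return T

lam, tau = 0.65, 3.0          # lambda illustrative; tau near the fitted ~3 years
tcr = ramp_tcr(lam, tau)      # transient response at the doubling year
ecs = lam * 3.7               # equilibrium response to the same forcing
# tcr / ecs comes out around 0.96 for tau = 3
```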
“””””……Greg Goodman says:
May 23, 2013 at 3:11 pm
George, how about your read more that the first two lines before sounding off.? Note the line directly AFTER where you cut me off to criticise:…..””””””
Greg, I did not cut you off to criticize.
I excerpted enough of your post to point readers to where my comment applied; then I merely pointed out that it needed to be lower-case (t) and not upper-case (T).
And it was not a criticism; I assumed a priori, that of course you knew which was correct, and it was just a typo.
Yes, maybe with a drum roll, but that was for those who might not understand any of it, not for you, because by your very post you demonstrated that you clearly understood the exponent needed to be dimensionless.
Hi AJ,
Given the very low values of tau found by Willis, anything left in the pipeline from Willis’s model would be resolved in about 6 or 7 years, so it is very important to compare exact dates when making a comparison. Having said that, I found some time ago that the temperature left in the pipeline from the “20th century commitment” runs in the AOGCMs was always higher than that gained from this linear feedback model. I now know that the reason for this is that the GCMs exhibit a curvature in the net-flux vs temperature relationship which has the effect of adding future temperature gain relative to the linear feedback model. As to which is more correct, the jury is still out, but it should be emphasised that you are comparing a PREDICTION from the GCMs with a PREDICTION from the linear model. I don’t think it is entirely fair therefore to argue that the linear feedback model is not a good emulator of the GCMs. It clearly is – over the entire instrumental record. As a matter of record, the prediction from the linear feedback model turns out to have been a lot more accurate than the GCMs in terms of PREDICTED temperature gain since 2000, although I wouldn’t make too much of that.
Paul_K:
Thanks very much for the enlightening comment defining terms. I for one had not (at least recognized that I had) encountered such a beast as “transient climate response.”
That said, for me at least your comment did not address Mr. Eschenbach’s point about the heat that “disappears” into the oceans. To caricature your positions, you see (surface) temperature as the total response to the forcing, whereas Mr. Eschenbach sees it as only part of the response, the other part being the warming of the deep oceans–although that warming will at some time outside of our experiment influence the temperature.
To state the caricature differently, you see what we’re looking at as the lumped-parameter system that you (correctly) say the differential equation dT/dt = (lambda / tau) F – T / tau expresses; restated in terms of heat capacity, that equation subsumes the (whole? well-mixed part of the?) ocean as well as the surface. He says that you must look beyond this equation because it does not reflect Poseidon’s vagaries.
Obviously, neither of you will agree with my characterization of your positions, and at the post here http://wattsupwiththat.com/2012/07/13/of-simple-models-seasonal-lags-and-tautochrones/ I demonstrated that the question is beyond what little remains of my once-passable mathematics, so I won’t hazard a position of my own. Since you have contributed constructively to the discussion, however, you may want to correct my characterization of where deep-ocean heat enters your view.
(And I believe it goes without saying – even though Mr. Eschenbach did say so explicitly – that none of us believes this linear-system exercise has much to do with what the real response to a doubling of CO2 would be.)