**Guest post by Joseph H. Born**

Many of us laymen were fascinated by Willis Eschenbach’s recent post that touched on how insensitive to oscillation period the phase lag between insolation and temperature is. But the comments to that post closed shortly after Joel Shore advised us that “the more relevant equations for the sort of time lags that you are interested in for the climate system is to start looking instead at box models for the climate system.” This was unfortunate because it left unanswered the thereby-raised question of how the lumped-parameter models Mr. Shore espoused compare with the distributed-parameter model from which that post’s equations result. This question is of interest because many habitués of this site and related ones use lumped-parameter models for climate-related thought experiments.

The example model Mr. Shore specifically cited was a so-called “one-box” model (also known in some circles, I’m told, as the “single-pole” model), in which the relationship between the stimulus (e.g., incoming radiation) and the response (e.g., surface temperature) is that the response’s rate of change is proportional to the stimulus value, less some loss quantity proportional to the response itself:

τ dT/dt = λF − T,

where F is the stimulus, λF is the value the temperature T will approach when the radiation has been constant for a long time, and the time constant τ is a measure of what a “long time” is.
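For concreteness, that relationship can be integrated numerically. The following is just an illustrative sketch (in Python rather than the R used later in the post, and borrowing the λ and τ values arrived at further down): with a step in the radiation F, the temperature should relax toward λF with time constant τ.

```python
def one_box_step(lam, tau, F, dt, n):
    """Euler-integrate tau * dT/dt = lam * F - T, starting from T = 0."""
    T, out = 0.0, []
    for _ in range(n):
        T += dt * (lam * F - T) / tau
        out.append(T)
    return out

lam, tau = 0.05999, 73046.0   # single-pole values found later in the post
T = one_box_step(lam, tau, F=100.0, dt=100.0, n=50000)
# After many time constants, T settles near lam * F:
print(T[-1])                  # close to 5.999
```

The frequency-response magnitude λ/√(1 + (ωτ)²) used later in the post follows from this same equation when F is sinusoidal.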

This model (understandably) simplifies things by “lumping” heat storage into a single temperature value. But the “tautochrone” diagram in Mr. Eschenbach’s post illustrates that the ground’s surface warms and cools out of phase with, and with a greater amplitude than, the earth below. So, although there may indeed be a sense in which a lumped-parameter model’s equations are “more relevant equations for the sort of time lags that [we] are interested in,” we are justified in wondering how far that relevance extends.

To take a run at that question, suppose we know that some location’s average ratio of intra-day temperature swing to intra-day radiation-intensity swing is 0.011 K-m^2/W and that the corresponding intra-year value is 0.060 K-m^2/W. We can readily identify the single-pole lumped-parameter model that yields those two values. By comparing such a model’s behavior with that of a distributed-parameter model that similarly matches those values, we can get a sense of how the two types of models differ.
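In fact, for the single-pole model those two targets determine the parameters in closed form, since its magnitude is λ/√(1 + (ωτ)²). Here is a quick check (a Python sketch; the post below arrives at nearby values, τ ≈ 73046 and λ ≈ 0.060, by trial and error):

```python
import math

H_daily, H_annual = 0.011, 0.060              # target ratios, K-m^2/W
w_d = 2 * math.pi / 86400.0                   # once per day, rad/s
w_a = 2 * math.pi / (365.2425 * 86400.0)      # once per year, rad/s

# Setting lam / sqrt(1 + (w * tau)^2) equal to each target and
# eliminating lam gives tau directly:
tau = math.sqrt((H_annual**2 - H_daily**2)
                / (H_daily**2 * w_d**2 - H_annual**2 * w_a**2))
lam = H_annual * math.sqrt(1 + (w_a * tau)**2)

print(tau, lam)   # roughly 7.4e4 seconds and 0.060
```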

Specifically, let’s compare the thereby-identified lumped-parameter model’s behavior with that of a semi-infinite slab that extends through z > 0 and receives and loses energy only at its free surface, the z = 0 (xy) plane. For simplicity’s sake, we assume that the temperature T is independent of x and y, leaving heat free to flow with respect to z in such a manner as to make T vary with z and time t in accordance with the one-dimensional diffusion equation:

∂T/∂t = a ∂²T/∂z²,

where a = k/ρC is the slab material’s thermal diffusivity, k is its thermal conductivity, and ρC is its volume heat capacity.

That is, heat flows down a temperature gradient in proportion to the material’s thermal conductivity k, so heat collects at or bleeds off from places where the gradient ∂T/∂z changes with position, with the result that at those locations temperatures change in inverse proportion to the material’s volume heat capacity ρC.

We will require that the distributed-parameter model exhibit temperature-proportional heat loss, just as the lumped-parameter model does. But this is a distributed-parameter model, so we additionally have to specify which temperature we’re talking about: the (by assumption, uniform) surface (z = 0) temperature. That is, we will require that at the z = 0 plane the slab re-radiate at a net rate fT(0, t), where f is the loss (“feedback”) coefficient and T(0, t) is the temperature at the slab’s (z = 0) surface. And, as we did for the lumped-parameter model, we so configure our distributed-parameter model that the once-a-year and once-a-day values of its frequency-response magnitude match the above-identified target values.
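One way to see what that boundary condition means in practice is a crude explicit finite-difference sketch (Python, with discretization choices of my own; this is not the post’s R code). Interior cells obey the diffusion equation, while the surface cell’s energy balance includes the incoming flux F, the loss fT(0), and conduction into the slab. With a constant F, the surface should creep toward F/f, the same equilibrium a lumped model would reach:

```python
import numpy as np

def slab_step(F, f, k, rhoC, dz, dt, n_z, n_t):
    """Explicit (FTCS) integration of T_t = a * T_zz on 0 < z < n_z * dz,
    with flux balance F - f*T[0] - k*(T[0] - T[1])/dz at the surface cell."""
    a = k / rhoC
    assert a * dt / dz**2 < 0.5          # explicit-scheme stability limit
    T = np.zeros(n_z)
    for _ in range(n_t):
        surface = T[0] + dt / (rhoC * dz) * (F - f * T[0]
                                             - k * (T[0] - T[1]) / dz)
        interior = T[1:-1] + a * dt / dz**2 * np.diff(T, 2)
        T = np.concatenate(([surface], interior, [T[-1]]))
    return T

# The post's adopted parameters, run long enough for the surface to
# approach its equilibrium (F/f = 7.69 here) while depths below still lag:
T = slab_step(F=100.0, f=13.0, k=48.0, rhoC=2e6, dz=1.0, dt=5000.0,
              n_z=200, n_t=40000)
print(T[0], T[50])
```

The domain depth (200 m) is chosen to exceed the diffusive penetration depth over the simulated interval, so the finite grid stands in for the semi-infinite slab.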

Fig. 1

Fig. 1 compares lumped- and distributed-parameter models thus configured. Specifically, it compares the absolute values of their frequency responses. The vertical lines mark the once-per-year and once-per-day frequencies, and the horizontal lines mark the frequency-response magnitude targets for those frequencies. The drawing thereby shows that the two models’ magnitude curves coincide at the target values. But it also shows that those curves otherwise differ markedly; the distributed-parameter model varies much more at low frequencies and much less at high frequencies than the lumped-parameter model does. Still, Fig. 1 does not clearly establish either model’s superiority.

Fig. 2

Arguably, though, the same cannot be said of Fig. 2, which compares those responses’ phase angles. True, that drawing suggests diurnal temperature lags in the right ball park for both models: a little over five hours for the lumped-parameter model and about two and a half for the distributed-parameter one. But their seasonal lags differ markedly: the lumped-parameter model implies less than a day, whereas the distributed-parameter model implies about eleven days – low, but at least the right order of magnitude.
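Those phase-angle curves have a simple explanation. In sinusoidal steady state the slab’s surface response appears to have the closed form H(ω) = 1/(f + kq), with q = √(iω/a): substituting T = T₀e^(iωt−qz) into the diffusion equation gives q, and the surface flux balance gives F = (f + kq)T₀. Since arg √i = 45°, the slab’s phase lag is confined between 0° and 45°, whereas the single-pole model’s ranges from 0° to 90°. A quick Python check against the parameter values adopted below (f = 13, k = 48, ρC = 2×10⁶):

```python
import cmath
import math

def slab_H(omega, f, k, rhoC):
    """Closed-form surface frequency response of the lossy semi-infinite
    slab: 1 / (f + k * q), where q = sqrt(i * omega / a) and a = k/rhoC."""
    q = cmath.sqrt(1j * omega * rhoC / k)
    return 1.0 / (f + k * q)

f, k, rhoC = 13.0, 48.0, 2e6
w_day = 2 * math.pi / 86400.0
w_year = 2 * math.pi / (365.2425 * 86400.0)

H_day, H_year = slab_H(w_day, f, k, rhoC), slab_H(w_year, f, k, rhoC)
print(abs(H_day))    # near the 0.011 daily target
print(abs(H_year))   # near the 0.060 annual target
# Lags implied by the phase angles, in hours and days respectively:
print(-math.degrees(cmath.phase(H_day)) / 360 * 24)
print(-math.degrees(cmath.phase(H_year)) / 360 * 365.2425)
```

These roughly agree with what the numerical R code below produces: a seasonal lag of about eleven days and a diurnal lag of about two and a half hours.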

Now, as the R code I used shows, the results above are highly dependent on the assumed material properties and on the two target frequency-response values the models are required to match. Moreover, to match those values I assumed a feedback value considerably more negative than popular estimates–as well as a thermal-conductivity value nearly two orders of magnitude higher than water’s (although ocean mixing may make the effective conductivity there quite a bit higher). When this model is made to match those targets, that is, it results in tautochrones that don’t look much like those in Mr. Eschenbach’s post. So there is plenty to criticize in at least this example of a distributed-parameter model.

But it’s also far from self-evident that the lumped-parameter models’ equations are always the more-relevant ones for studying climate lags. So we may occasionally want to try diffusion-based models instead of the lumped-parameter models to which we’re accustomed.

# Base R lacks erfc(); define it in terms of the normal CDF:
erfc <- function(x) 2 * pnorm(-x * sqrt(2))

slabStepResponse <- function(z, k, rhoC, f, delta_t, N_h) {
# Finds the temperature response of a semi-infinite slab made of a
# thermally homogeneous, isotropic material that extends throughout z > 0 and
# leaves a free surface at the xy plane, where it receives incoming radiation
# whose power per unit area is uniform in x and y and is a unit step as a
# function of time.
# Here:
# z is the depth at which the temperature is to be determined;
# k is the slab material's thermal conductivity;
# rhoC is the product of the material's density and its specific heat;
# f is the coefficient of net re-radiation (and/or conduction and/or
#   convection) by which the slab loses power from the slab surface in
#   proportion to the surface's temperature;
# delta_t is the time interval between instants at which the routine
#   provides response values; and
# N_h is the number of response values to be determined.
#
# Rather than use thermal diffusivity "a" as an argument directly, the routine
# uses rhoC under the assumption that we can constrain the relationship
# between thermal conductivity and thermal diffusivity to values that are
# realistic for liquids and solids if we keep rhoC between 1.5 x 10^6
# and 4.5 x 10^6 J/K-m^3.
# (For water, rhoC = 4.186 x 10^6 and k = 0.58 W/m-K. However, circulation no
# doubt makes the effective thermal conductivity at the oceans' surfaces
# much higher than this.)
# Compute diffusivity from conductivity and specific heat:
a = k / rhoC;
# In the application to which we will apply this function, it is convenient for
# the time values to equal odd numbers times half the time step:
t = seq(1, N_h) * delta_t - delta_t / 2;
# Find the step response:
u = 1 / f * (erfc(z / 2 / (a * t)^(1 / 2))
             - exp((f / k) * z + (f^2 / k^2) * a * t)
               * erfc(z / 2 / (a * t)^(1 / 2) + f / k * (a * t)^(1 / 2)));
# Kludge to eliminate artifacts of computational limitations; unlikely to be
# robust to large parameter changes, so do a sanity check by plotting the
# step response for the intended parameter set:
u = u[!is.nan(u)]; # Lose the NaNs
u[1:max(1, length(u) - 20)] # Return the result without the ragged end
# Although the step response has a value of zero at t = 0, this
# routine's output will start out non-zero, since its first value is the
# temperature at t = delta_t/2, not t = delta_t.
}

slabImpulseResponse <- function(z, k, rhoC, f, delta_t, N_h) {
# Since it is numerically awkward to evaluate the slab's impulse response
# directly from its analytically determined value, we numerically
# differentiate its step response:
u = slabStepResponse(z, k, rhoC, f, delta_t, N_h)
M = length(u);
c(u[1], u[-1] - u[-M]);
}

slabResponse <- function(surfaceStimulus, z, k, rhoC, f, delta_t, N_h) {
# Determines a semi-infinite slab's response to surfaceStimulus.
# See slabStepResponse() for a description of the other arguments.
# First compute the slab's impulse response:
u = slabImpulseResponse(z, k, rhoC, f, delta_t, N_h);
# Then return the convolution of the stimulus with the impulse response:
convolve(u, rev(surfaceStimulus), type = "open")[1:length(surfaceStimulus)];
}

# The criterion we use here to decide whether the slab approach may in some
# way be superior to a simple single-pole approach is whether the phase-shift
# behavior of the slab frequency response whose amplitude values for daily and
# annual frequencies match the earth's matches the earth's phase-shift behavior
# better than that of a single-pole model whose amplitudes do.
span <- function(F) max(F) - min(F)
lastPeriod <- function(F, omega, delta_t) F[(1 + length(F) - round(2 * pi / omega / delta_t)):length(F)]

slabFrequencyResponse <- function(z, k, rhoC, f, N_h, f_lo, f_hi, N_c, N_F, N_freq) {
# This routine computes the frequency response of a slab.
# f_lo and f_hi define the frequency range over which the response is to
# be computed.
# N_c is the number of sine-wave cycles in the stimulus record to be used.
# The ratio of temperature to radiation is actually based on only a single
# cycle, but, since the response calculation in effect assumes zero stimulus
# before time -delta_t / 2, it was thought that computing the response for
# several cycles before the one on which the ratio is based would tend to
# suppress transient-caused inaccuracy.
# N_F is the number of "samples" in the stimulus record.
# N_freq is the number of frequencies for which the response is to be computed.
r = exp(log(f_hi / f_lo) / (N_freq - 1)); # ratio between successive frequency values
freq = f_lo * r^seq(0, N_freq - 1);
delta_t = N_c / freq / N_F; # Different delta_t's for different frequencies
H = rep(0, N_freq);
phi = H;
for (i in 1:N_freq) {
  t = delta_t[i] * seq(0, N_F - 1);
  omega = 2 * pi * freq[i];
  # Create the stimulus:
  F = sin(omega * t);
  # Compute the time-domain response:
  T = slabResponse(F, z, k, rhoC, f, delta_t[i], N_h);
  # Throw away all but the last cycle:
  F = lastPeriod(F, omega, delta_t[i]);
  T = lastPeriod(T, omega, delta_t[i]);
  # Compute one amplitude value of the frequency response:
  H[i] = span(T) / span(F);
  # Create a quadrature signal:
  t = lastPeriod(t, omega, delta_t[i]);
  F_quad = sin(omega * t - pi / 2);
  # Compute the phase angle by correlating the response
  # with the in-phase and quadrature signals:
  phi[i] = -atan(sum(F_quad * T) / sum(F * T)) * 180 / pi;
}
cbind(freq, H, phi);
}

# Now, to compute the frequency response, we assume some parameter values:
z = 0; # Depth at which the slab's temperature response is to be determined
# Looking up material properties suggests that these values are in the right range:
k = 1.0 # Thermal conductivity
rhoC = 2e6 # Product of density and specific heat
# And the following value approximates what climate types call zero "feedback":
f = 3 # coefficient of net surface-temperature-proportional surface heat loss
# By trial and error, however, I instead adopted these values:
k = 48;
f = 13;
# because they make the slab's response amplitude match the following targets:
H_daily = 0.011 # My estimate of the mid-latitude response at frequency = 1 / day.
H_annual = 0.060 # My estimate of the mid-latitude response at frequency = 1 / year.
N_c = 16 # Number of oscillations the (sinusoidal) stimulus record is to span
N_F = 2048; # Number of "samples" in the (synthetic) stimulus record
N_h = N_F + 20; # Number of impulse-response values to be used in the convolution
# See slabStepResponse() kludge for origin of magic number 20.
N_freq = 27; # Number of frequency-response values to be generated
r = exp(log(365.2425) / (N_freq - 9)); # Frequency-step ratio
# Lowest and highest frequencies at which frequency response is to be evaluated:
f_lo = r^(-4) / 365.2425 / 24 / 60 / 60; # Four values below once per year
f_hi = f_lo * r^(N_freq - 1); # Four values above once per day

# Using those parameter values, compute the frequency response:
dummy = slabFrequencyResponse(z, k, rhoC, f, N_h, f_lo, f_hi, N_c, N_F, N_freq);
amplitude = dummy[, 2];
amplRange = range(amplitude);
frequency = dummy[, 1];
phaseAngle = dummy[, 3];
cyclesPerYear = 365.2425 * 24 * 3600 * frequency;
freqRange = range(cyclesPerYear);
# Plot response amplitude:
plot(cyclesPerYear, amplitude, type = "l", xlab = "Frequency, cycles/year",
     ylab = "Temp. Response, K-m^2 / W", log = "xy");
# Add lines to show the criterion values:
lines(c(1, 1), amplRange, lty = 2);
lines(c(365.2425, 365.2425), amplRange, lty = 2);
lines(freqRange, c(H_daily, H_daily), lty = 2);
lines(freqRange, c(H_annual, H_annual), lty = 2);

# Also by trial and error, I found these earth-response-matching values for a
# single-pole lumped-parameter model:
tau = 73046;
lambda = 0.05999;
singlePoleAmp = lambda / sqrt(1 + (2 * pi * frequency * tau)^2)
singlePolePhase = -atan(2 * pi * frequency * tau) * 180 / pi
# Plot that model's response amplitude, too:
lines(cyclesPerYear, singlePoleAmp, col = "red")
legend(150, 0.07, c("Distributed", "Lumped"), cex = 0.8, col = c("black", "red"), lty = 1);
title(paste("Distributed: f = ", f, ", k = ", k, "\nLumped: lambda = ", lambda, ", tau = ", tau, sep = ""));
# That should show that the distributed-parameter model varies much
# more gradually with frequency than the lumped-parameter model does.

# Now make the same graphical comparison for the models' phase-angle behavior:
plot(cyclesPerYear, phaseAngle, type = "l", ylim = c(-90, 0), log = "x");
lines(cyclesPerYear, singlePolePhase, col = "red");
legend(150, 0, c("Distributed", "Lumped"), cex = 0.8, col = c("black", "red"), lty = 1);
title(paste("Distributed: f = ", f, ", k = ", k, "\nLumped: lambda = ", lambda, ", tau = ", tau, sep = ""));
lines(c(1, 1), c(-95, 5), lty = 2);
lines(c(365.2425, 365.2425), c(-95, 5), lty = 2);
# What that should show is that the distributed model exhibits a much less
# significant variation in phase angle than the lumped-parameter model does.
# Whereas the lumped-parameter model suggests less than a day of seasonal lag
# but over five hours of (equinox) diurnal lag:
singlePolePhase[5] / 360 * 365.2425 # -0.8453802
singlePolePhase[22] / 360 * 24 # -5.023814
# the distributed-parameter model suggests an eleven-day seasonal lag (low
# but the right order of magnitude) and about two and a half hours
# of (equinox) diurnal lag (perhaps a little low):
phaseAngle[5] / 360 * 365.2425 # -10.93371
phaseAngle[22] / 360 * 24 # -2.560662

Thanks, Joseph, for comparing lumped vs. distributed models.

For ground temperature cycle & lag see:

G.P. Williams, L.W. Gold, Ground temperature CBD-180 National Research Council Canada July 1976

For a detailed model driven by the 11-year solar cycle, see David Stockwell’s Solar Accumulation model and discussion. Stockwell finds a Pi/2 (90 deg) phase shift, or a 2.75-year lag, for the 11-year Schwabe solar cycle.

Similarly, see Fred Haynie’s detailed modeling of carbon dioxide fluctuations with latitude and time, which similarly show a lag after the annual insolation.

humbug

The R code link is not working here.

Urederra: “The R code link is not working here.”

That’s okay; it turns out they included in the post all the code I intended to link to, so you can just copy it from the post itself.

Thank you, David L. Hagen, for the references. Particularly interesting was Williams et al.’s observation that surface temperature varies largely in phase with air temperature, since it suggests that air temperature can be used as a reasonable proxy for surface temperature–although the high-number track-temperature reports we get here in Indianapolis on race day lead me to question the degree to which we’re entitled to rely on that relationship.

To me, the ninety-degree phase shift Stockwell found would seem to be an argument in favor of a lumped-parameter model and against at least a diffusion-type distributed-parameter model, since I would not expect the latter’s lags to exceed forty-five degrees. Possibly the semi-infinite-slab analogy works to some extent at the cycle times involved in the phenomena to which Mr. Eschenbach invited our attention but the higher penetration depths associated with Stockwell’s cycle time run up against ocean-mixing-layer depth?
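That conjecture is easy to put rough numbers on. For a diffusive half-space, a sinusoid of angular frequency ω penetrates to a skin depth δ = √(2a/ω) before decaying by a factor of e. Using the post’s adopted k = 48 and ρC = 2×10⁶ (so a = 2.4×10⁻⁵ m²/s – an assumption carried over from the R code above, not an ocean measurement), a Python back-of-the-envelope gives:

```python
import math

def skin_depth(period_s, k=48.0, rhoC=2e6):
    """Depth at which a sinusoid of the given period decays by 1/e:
    sqrt(2 * a / omega), with a = k / rhoC."""
    a = k / rhoC
    omega = 2 * math.pi / period_s
    return math.sqrt(2 * a / omega)

day = 86400.0
print(skin_depth(day))                    # under a meter
print(skin_depth(365.2425 * day))         # about 15 m
print(skin_depth(11 * 365.2425 * day))    # about 50 m
```

So the 11-year cycle’s penetration depth does come out comparable to typical ocean mixed-layer depths, consistent with the suspicion above.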

I don’t pretend to know the answer; I’m just hoping the scientists who hang out here might give us more insight into the implications of the phenomena described in Mr. Eschenbach’s post. Obviously, the heat-transfer cognoscenti seemed to find it a yawn; the diffusion equation is not exactly hot news. But it would be interesting at least to this layman to hear what reasons there may be for diffusion-type models’ having been rejected in this context.

For those just trying ‘R’, you will need to make some corrections to get this to run. Change all ‘dash’ characters to ‘minus’ characters (the dashes were probably caused by WordPress translation), and change Word-style quotes to normal double quotes. Add the function ‘erfc’ to the very top. My copy then ran fine.

A comparison is the sort of response you get in various economic models based on free choice with a goal of the maximizing net present wealth and the most efficient allocation of scarce resources. Real-world behavior must be reckoned with.

If, for example, you look at a simple pricing model, generally you expect the demand for a commodity to rise as the price of the commodity falls and vice versa. There may, however, only be so much of the commodity that anyone needs — no matter how low its price — and there may be limits on how much of it can reasonably be offered irrespective of a higher price, perhaps because of relatively cheaper alternatives, rising production costs, or lost opportunity costs.

Accordingly, you can expect transactions in the commodity to occur within certain ranges.

To change the boundaries you have to throw in a stink bomb that resets the system–e.g., behavioral changes that are impressed onto the system. So now we are looking beyond explanations based on the use of a medium of exchange to changes in value that are due to factors better explained by psychology, irrationality, fear, superstition, dementia, warfare, and paradigm shifts coming out of the blue, which could be anything from prolonged global cooling (or warming) to the invention of recurve bows.

wayne:

Thanks for the assist. The erfc() routine worked right out of the box in my R distribution, so it didn’t occur to me to worry about portability. But I should have foreseen the Word problem. In my (partial) defense, though, I did try to submit the code in a text file, but that got garbled for some reason, so they suggested I submit it in Word instead.

When a model’s output is “too far” from an observation (that is, the right answer), then the model is not correct. “Too far” can be debated. I will argue (as you have) that a “single day” for a model’s “seasonal lag” is too far off. The seasonal response seems to be a more complex thing to model – or so it seems to me. So it does not surprise me that the lumped (discrete) model is less accurate than one that more closely matches the complexity. Still, 11 days is “admittedly low”, also. In Willis’ post to which you link, he claims “about a month and a half after the summer solstice” or about 45 days.

Joel Shore’s comment is unconvincing, also. He starts with “I think the more relevant equations . . .” — without explaining why; and ends with “I think it is still true that the time lag will be somewhere between 0 and 1/4 of a cycle.” For the seasonal lag the numbers will be (for Joel) from no lag to 90+ days. For the Northern Hemisphere that takes us out to about the Fall Equinox, or Sept. 20th. This proposition isn’t very helpful.

It would be nice if I had the training and the time to add something useful to this discussion, rather than just think these models are not good enough – that is, not good enough scientifically and not good enough for politicians and policy-makers to be re-making the World.

I am appreciative of the efforts to bring understanding to Earth’s physical systems and to put it here on WUWT. So thanks to Anthony and the moderators for making that part possible.

Hey, thanks for the heavily commented R code.

It seems to me that a lot of the heavy-hitters offer less than 1 to 1 ratio of commentary lines to code lines. This seems sufficiently approachable that I, an R novice, might give it a shot to run and comprehend.

This is leadership behavior, worthy of emulation.

I’m still smart enough to figure out stuff not in my field if given a sufficient trail of breadcrumbs…

AC

John F. Hultquist: “When a model’s output is “too far” from an observation (that is, the right answer), then the model is not correct.”

If you’re saying that the diffusion model I used is not correct, I’m in total agreement. When it comes to climate, we know that no simple model is correct–and, really, we know the complex ones aren’t, either. In the particular case of seasonal lag, for example, the model I used would tell us that seasonal lags are the same for all locations at the same latitude, whereas comparing Louisville’s to San Francisco’s would no doubt disabuse one of that misapprehension quickly. (I actually don’t know what the range is, but I’m guessing it’s quite a bit wider than you may have inferred from Mr. Eschenbach’s comment.) Models of this type are good, to the extent they are, only for plausibility assessments and sanity checks.

But, given that we’re condemned to using only models that are in some way wrong, there may be some value in trying more than one, since they may not be wrong in all the same ways.

try this

http://www.inside-r.org/pretty-r


Steven Mosher: “try this”

Thanks a lot.

I was using Tinn-R to write the code. I’m sure if I knew what I was doing there would have been a way to get that application’s output through WordPress without making a mess of it. But Pretty R appears superior–and it’s likely to be particularly helpful when I’m looking at someone else’s code.

AC: “This is leadership behavior, worthy of emulation.”

Thank you for the kind words.

No false modesty here: I do think national productivity would be much higher if more effort were put into making technical writing clearer, and, although I’ve had only indifferent success on that score, at least I aspire to clarity. In my experience that aspiration is distressingly uncommon.

That said, if you’re an R tyro, you’d be ill-advised to use any actual R I write as a model. I use the language too rarely to remember more than a few commands, so I constantly run around my backhand. If any folks who use it a lot actually looked at my code, they no doubt cringed.

The actual lag, within 250 miles of Indianapolis, IN, for the winter solstice is 27 days +- 1 day. The lag for summer solstice is longer, and a bit more variable, due to the greater amount of energy involved in the summer water cycle. I wonder what the lag is in parts of northern Chile, where it hasn’t rained in recorded history, compared to Cherrapunji, reportedly the wettest place on Earth.

A long time ago a single lumped parameter model was defined as one degree of freedom. The more lumps the closer one achieves a distributive parameter model solution, presuming the ground properties are uniform.

At ground surface, there would be radiation (reflection and emission), and representation of natural and forced convection, when appropriate. Boundary conditions would be time varying solar radiation and air temperature. Initial condition for ground temp would be say 50F.

Now come more simplifications. No forced convection? No free convection? No rain? No clouds? No fortran compiler and no more time.

Joe;

Instead of Word or text, try the ‘betweener’: .rtf (Rich Text Format). WordPad will generate it, and any editor/word-processor can read it.

I believe you mean integral and differential instead of “lumped” and “distributed-parameter”. Here, for instance, are the integral vs. differential forms of the Navier-Stokes equations: http://books.google.com/books?id=qKIk-JrbvlQC&pg=PA48&lpg=PA48&dq=integral+form+navier+stokes+equations&source=bl&ots=-H_lpvEmYU&sig=eS3r4J5DWKNfA6_dCetRPOaZQzs&hl=en&sa=X&ei=Y78BUP-dHYr10gHJzezEBw&ved=0CF4Q6AEwAw#v=onepage&q=integral%20form%20navier%20stokes%20equations&f=false

Thomas L.: “The actual lag, within 250 miles of Indianapolis, IN, for the winter solstice is 27 days +- 1 day. The lag for summer solstice is longer, and a bit more variable, due to the greater amount of energy involved in the summer water cycle.”

Thanks for the data. It had occurred to me that if I were more adept as scraping the data sites it would not be too hard to compute seasonal lags for a reasonable number of actual locations as I did for the synthetic data, i.e., simply to take the in-phase and quadrature components of the signal’s fundamental-frequency component. As you implicitly point out, though, what is more commonly meant by “seasonal lag” is the time from the solstice to the mean yearly maximum- or minimum-temperature date, and that differs between the summer and winter lags. Because of all the confounding variables, though, I’m not sure how much more help to the model-assessment effort I’ll get from further lag calculations.

As to a comparison of the driest location to the wettest, my guess would be that the behavior of the wettest spots could readily be bounded by looking at appropriate-latitude ocean data. I may do that if they come readily to hand. As I say, though, I’m trying to come up with some approach other than lags for comparing model types.
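The in-phase/quadrature extraction mentioned above is simple enough to sketch. Here is a hypothetical Python version (the synthetic record and its 30-day lag are made up for illustration); it mirrors what the R code’s phase-angle computation does:

```python
import math

def fundamental_lag(T, period_samples):
    """Estimate the lag (in samples) of a record's fundamental component
    by correlating it with in-phase and quadrature sinusoids."""
    n = len(T)
    w = 2 * math.pi / period_samples
    s = sum(T[i] * math.sin(w * i) for i in range(n))  # in-phase
    c = sum(T[i] * math.cos(w * i) for i in range(n))  # quadrature
    return -math.atan2(c, s) / w

# A synthetic yearly record lagging its forcing by 30 "days":
record = [10 * math.sin(2 * math.pi * (i - 30) / 365) for i in range(365)]
print(fundamental_lag(record, 365))   # recovers 30
```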

Dinostratus: “I believe you mean integral and differential instead of ‘lumped’ and ‘distributed-parameter.'”

Although I confess to having used the contested terms without much reflection, I’m not yet convinced that my usage was inappropriate. The discussion found here http://en.wikipedia.org/wiki/Lumped_element_model seems to base the lumped/distributed distinction on whether the system is best described by ordinary differential equations (lumped) or partial differential equations (distributed).

While I haven’t warmed to that way of distinguishing the two, it certainly seems an apter criterion than whether one has chosen to express the system’s relationships in differential or integral form.

gary murphy: “A long time ago a single lumped parameter model was defined as one degree of freedom. The more lumps the closer one achieves a distributive parameter model solution,”

That’s more or less the meaning I had in mind (to the extent it could be said that I gave the definition any thought at all). And Fig. 1 of this http://en.wikipedia.org/wiki/Distributed_element_model Wikipedia entry seems to support that understanding. It illustrates taking the limit of the finite elements to develop a distributed-parameter model, in which the differential circuit quantities are depicted as finite-valued elements.

“I’m not yet convinced that my usage was inappropriate.”

I don’t think it was but I find it’s best to use the correct terms when introducing a new concept to people. It’s particularly useful to do so when the line between them becomes blurred as in a residence time distribution correction to a reactor model.