# Reflections on Monckton et al.'s 'Transience Fraction'

Guest essay by Joe Born

In Monckton et al., “Why Models Run Hot: Results from an Irreducibly Simple Climate Model,” the manner in which the authors used their so-called transience fraction raised questions in the minds of many readers. The discussion below tells why. It will show that the Monckton et al. paper obscures the various factors that should go into selecting that parameter, and it will suggest that the authors seem to have used it improperly. It will also explain why so many object to their discussion of electronic circuits.

The discussion below will not deal with how well the model performs or whether the Monckton et al. paper interpreted the IPCC reports correctly. It will be limited to basic feedback principles that are obvious to most engineers and to not a few scientists. But there are circumstances in which stating the obvious is helpful, and I believe that Monckton et al. have presented us with one.

Equation 1 of the Monckton et al. paper provides us laymen with a handy back-of-the-envelope model by which we can perform sanity checks on things we hear about the climate system. If we concentrate on equilibrium values and assume that carbon dioxide is the only driver, we can drop the $t$‘s from that equation’s penultimate line and take the $q_t$ and $r_t$ parameters to be unity to obtain:

$\Delta T = \frac{\lambda_0}{(1-\lambda_0f)}\Delta F.$

The expression on the right can be recognized as the solution to the equation illustrated by Fig. 1, namely, $\Delta T = \lambda_0(\Delta F + f \Delta T)$.

Here $\Delta T$ is a temperature-change response to the initial radiation imbalance $\Delta F$, or “forcing,” that would result from a carbon-dioxide-concentration-caused optical-density increase. The optical-density increase raises the effective altitude—and, lapse rate being what it is, reduces the effective temperature—from which the earth radiates into space, so less heat escapes, and the earth warms. The $\Delta$‘s represent departures from a hypothetical initial equilibrium state of zero net top-of-the-atmosphere radiation, and the forcing $\Delta F$ is considered to keep the same value so long as the increased carbon-dioxide concentration does, even if the consequent temperature increase $\Delta T$ has eliminated the initial radiation imbalance and thus returned the system to equilibrium.

Without any knock-on effects, or “feedback,” the response would simply be $\Delta T=\lambda_0\Delta F$, where $\lambda_0$ is a coefficient widely accepted to be approximately 0.32 $\textrm{K\,m}^2/\textrm{W}$. The forcing $\Delta F$ produced by a carbon-dioxide-concentration increase from $C_0$ to $C_t$ is stated by the last line of Monckton et al.’s Equation 1 to be $k\ln(C_t/C_0)$, where it is widely accepted that $k\approx 5.35\,\textrm{W/m}^2$, i.e., that a doubling of the CO2 concentration would cause a forcing of about $3.7\,\textrm{W/m}^2$.
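For readers who want to try the numbers themselves, here is a minimal sketch (in Python, a choice of mine, not the authors’) of the back-of-the-envelope arithmetic just described:

```python
import math

k = 5.35      # W/m^2: widely used CO2 forcing coefficient
lam0 = 0.32   # K per W/m^2: no-feedback sensitivity parameter

# Forcing from a doubling of CO2 concentration: dF = k * ln(C_t / C_0)
dF_2x = k * math.log(2.0)
print(f"forcing for 2xCO2: {dF_2x:.2f} W/m^2")        # ~3.71 W/m^2

# No-feedback warming for that forcing: dT = lam0 * dF
dT_no_feedback = lam0 * dF_2x
print(f"no-feedback warming: {dT_no_feedback:.2f} K")  # ~1.19 K
```

The result recovers the widely cited values: roughly 3.7 W/m² of forcing and about 1.2 K of no-feedback warming per doubling.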

So the model’s user can readily see the significance of the main controversial parameter, namely, the feedback coefficient $f$, which represents knock-on effects such as those caused by the consequent increases in water vapor, the resultant reduction in lapse rate, etc. In particular, the user can see that if $f$ were positive enough to make $g\equiv \lambda_0 f$ close to unity—i.e., to make the right-hand-side expression’s denominator close to zero—the global temperature would be highly sensitive to small variations in various parameters.

Fig. 2 depicts this effect: $\Delta T$ approaches infinity as $f$ approaches $1/\lambda_0\approx 3.2\,\textrm{W/m}^2/\textrm{K}$, i.e., as $g\equiv\lambda_0f$ approaches unity. (That plot omits $g$ values that exceed unity; for reasons we discuss below, Monckton et al.’s discussion of that regime in connection with electronic circuits is questionable.)
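The blow-up can be seen numerically by sweeping $f$ through the equilibrium formula; the particular $f$ values in this sketch are mine, chosen purely for illustration:

```python
import math

lam0 = 0.32                    # K per W/m^2
dF = 5.35 * math.log(2.0)      # ~3.7 W/m^2 for doubled CO2

def equilibrium_dT(f):
    """Equilibrium warming dT = lam0/(1 - lam0*f) * dF."""
    g = lam0 * f               # loop gain
    return lam0 / (1.0 - g) * dF

# As g = lam0*f approaches unity, the response becomes extremely
# sensitive to small changes in f.
for f in (0.0, 1.0, 2.0, 2.5, 3.0):
    print(f"f = {f:.1f}  g = {lam0*f:.2f}  dT = {equilibrium_dT(f):.2f} K")
```

At $f=3$ the loop gain $g$ is 0.96 and the computed warming is roughly 30 K, some 25 times the no-feedback value, which is the sensitivity-to-parameters point the text makes.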

The quantities discussed so far are those that occur at equilibrium, i.e., in the condition that prevails after a given forcing has been constant for a long enough time that transient effects in the response have died out. To arrive at a value for times when the forcing has not remained unchanged long enough to reach equilibrium, the model includes a “transience fraction” $r_t$ to represent the ratio that the response at time $t$ bears to the equilibrium value. Other subscript $t$‘s are added to indicate that for different times the various quantities’ effective values may differ. Finally, to arrive at the response to all forcings, a coefficient $q_t$ representing the ratio that carbon-dioxide forcing bears to all forcings is included:

$\Delta T_t = \frac{r_t}{q_t}\frac{\lambda_0}{1-\lambda_0f_t}\Delta F_t.$

As we mentioned above, the transience fraction $r_t$ is of particular interest. As the Monckton et al. paper’s Table 2 shows, the ratio $r_t$ that the response at time $t$ bears to its equilibrium value depends not only on time $t$ but also on the feedback coefficient $f$. Of course, it would be too complicated for us to investigate how such a dependence arises in the climate models on which the IPCC depends. But we can get an inkling by so modifying the block diagram of Fig. 1 as to incorporate a simple “one-box” (single-pole) time dependence.

Fig. 3 depicts the resultant system. The bottom box bears the legend “$1/c_ps$,” which in some circles means that the rate at which that box’s output changes is its input divided by a heat capacity $c_p$. (The $s$ is the complex frequency of which Laplace transforms are functions, but we needn’t deal with that here; suffice it to say that division by $s$ in the complex-frequency domain corresponds to integration in the time domain.)

What the diagram says is that a sudden $\Delta F$ drop in the amount of radiation escaping into space causes the temperature response $\Delta T$ to rise as the integral of the stimulus $\Delta F$ divided by $c_p$. That temperature rise both increases the radiation escape by $\Delta T/\lambda_0$ and partially offsets that radiation escape by $f\Delta T$.

Now, Fig. 3 can justly be criticized for wildly conflating time scales; it does not reflect the fact that the speed with which the surface temperature would respond to optical density alone is much greater than, say, the speed of feedback due to icecap-caused albedo changes. But that diagram is adequate to illustrate certain basic feedback principles.

The output of the Fig. 3 system is a solution to the following equation:

$c_p\frac{d\Delta T}{dt}=\Delta F + (f - 1/\lambda_0)\Delta T.$

For example, if $\Delta F(t)$ equals zero before $t=0$ and it equals $\Delta F_{2\textrm{x}}$ thereafter, that solution for $t>0$ is:

$\Delta T(t) = (1-e^{-t/\tau})\frac{\lambda_0}{1-\lambda_0f}\Delta F_{2\textrm{x}},$

where $\tau=\frac{c_p\lambda_0}{1-\lambda_0f}$. That is, the equilibrium value of $\Delta T_t$ is the same as it was before we added the time dependence, but the added time dependence shows that the equilibrium value is approached asymptotically.
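As a sanity check, a few lines of Python can integrate the Fig. 3 equation numerically and confirm the analytic solution; the heat capacity $c_p$ used below is an arbitrary illustrative value, not one drawn from the paper:

```python
import math

lam0 = 0.32   # K per W/m^2
f = 1.0       # feedback coefficient, illustrative
cp = 10.0     # heat capacity, arbitrary units chosen for illustration
dF = 3.7      # step forcing, W/m^2

tau = cp * lam0 / (1.0 - lam0 * f)     # time constant
T_eq = lam0 / (1.0 - lam0 * f) * dF    # equilibrium response

# Forward-Euler integration of cp * dT/dt = dF + (f - 1/lam0) * T
dt, T = 0.001, 0.0
t_end = 5.0 * tau
for _ in range(int(t_end / dt)):
    T += dt * (dF + (f - 1.0 / lam0) * T) / cp

analytic = (1.0 - math.exp(-t_end / tau)) * T_eq
print(f"numeric {T:.4f}  analytic {analytic:.4f}  equilibrium {T_eq:.4f}")
```

After five time constants the numerical trajectory matches the $(1-e^{-t/\tau})$ form and has nearly reached the same equilibrium value as the time-free model.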

Fig. 4 depicts the solution for several values of feedback coefficient $f$. What it shows is that a greater feedback coefficient $f$ yields a higher temperature output $\Delta T$.

Another way of looking at the response is to separate its shape from its amplitude, and that brings us to transience fraction $r_t$, which is our principal focus. Fig. 5 depicts this quantity, which is the ratio at time $t$ of the $\Delta T$ response to its equilibrium value. That plot shows that, although greater feedback results in a greater equilibrium temperature, it also results in the equilibrium value’s being reached more slowly.

Of course, those plots give the relationship $r_t$ between current and equilibrium output only for our toy, one-box model. Monckton et al. instead employed the relationship set forth in their Table 2 and depicted by the dashed lines in Fig. 6 below. In a manner that their paper does not make entirely clear, they inferred the Table 2 relationship from a paper by Gerard Roe, who explored feedback and depicted in his Fig. 6 (similar to Monckton et al.’s Fig. 4) how a “simple advective-diffusive ocean model” responds to a step in forcing for various values of feedback coefficient.

As Fig. 6 above shows, the Monckton et al. $r_t$ values initially rise more quickly, but then approach unity more slowly, than the ones that result from our Fig. 3 one-box model. As to the specifics of his model, Roe merely referred to a pay-walled paper, but in light of his describing that model as having a “diffusive” aspect we might compare the Table 2 values with the behavior of, say, a semi-infinite slab’s surface, as Fig. 7 does. Except for the $f=0$ value, the curves are similar over the illustrated time interval, but the slab thermal diffusivity used to generate Fig. 7’s solid curves was about 2000 times that of water, so the nature of the Roe model remains a mystery. Monckton et al. may have had a reason for following Roe’s model choice instead of any other, but they did not share that reason with their readers. For all we can tell, that choice was arbitrary.

More troubling, though, was the fact that they chose only a single transience-fraction curve for each value of total feedback, whereas we would expect that the curve would additionally depend on other factors. Let’s return to a simple lumped-parameter model like Fig. 3 to discuss what some of those factors may be.

Recall that in Fig. 3 the two feedback boxes were the same except for their values $f$ and $-1/\lambda_0$; the feedbacks they represent did not operate over different time scales. But the IPCC models can be expected to employ feedbacks that operate with different delays. Feedback effects such as water vapor may act quickly, whereas the albedo effects of melting icecaps may become manifest only over long time intervals.

To illustrate such a difference, we divide the feedback $f$ represented by Fig. 3’s upper feedback box into two portions, as Fig. 8 illustrates: $bf$ and $(1-b)f$, $0\le b\le 1$. The legend $bf/(1+s\tau)$ in the uppermost box means that its output asymptotically approaches $bf$ times the input with a time constant of $\tau$. In other words, if that box’s input were a step from zero to $T$ at time zero, its output would be $(1-e^{-t/\tau})bfT$ at time $t$.

Now we’ll compare the responses of Fig. 8-type systems that differ not only in feedback $f$ but also in the portion $b$ of the feedback that operates with a greater delay. Fig. 9 compares such different systems’ responses, and we see that, as we expect, the magnitude of the higher-feedback system’s response is greater. In contrast to what we saw before, though, Fig. 10’s comparison of the systems’ $r_t$ curves shows that it is the higher-feedback system that responds more quickly. This tells us that the $r_t$ curve depends not only on the value $f$ of total feedback but also on the nature of that feedback’s particular constituents. And it raises the question of what feedback-speed mix Monckton et al. assumed.
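Readers who want to reproduce the Fig. 9/10 behavior can integrate the Fig. 8 system numerically. The sketch below uses parameter values I chose purely for illustration (they are not the paper’s): it compares a lower-feedback system whose feedback is mostly slow with a higher-feedback system whose feedback is mostly fast, and it shows the latter realizing a larger fraction of its equilibrium response at a given time.

```python
# Two-time-scale feedback: a fraction b of the feedback f acts through a
# one-pole lag with time constant tau_s; the remainder (1-b)*f acts at once.
lam0, cp, tau_s = 0.32, 10.0, 200.0   # illustrative values only
dF = 3.7                              # step forcing, W/m^2

def simulate(f, b, t_end, dt=0.01):
    """Integrate the system and return r_t = T(t_end) / T_equilibrium."""
    T, u = 0.0, 0.0                   # temperature and slow-feedback state
    for _ in range(int(t_end / dt)):
        dT = (dF + (1.0 - b) * f * T + u - T / lam0) / cp
        du = (b * f * T - u) / tau_s
        T += dt * dT
        u += dt * du
    T_eq = lam0 / (1.0 - lam0 * f) * dF
    return T / T_eq

# System A: lower total feedback, mostly slow; System B: higher, mostly fast.
rA = simulate(f=1.0, b=0.9, t_end=30.0)
rB = simulate(f=2.0, b=0.1, t_end=30.0)
print(f"r_t at t=30: low-f/slow-mix {rA:.2f}, high-f/fast-mix {rB:.2f}")
```

Despite its greater total feedback, system B’s $r_t$ is the higher of the two, so the $r_t$ curve indeed depends on the feedback-speed mix and not on $f$ alone.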

Or maybe it raises the question of just how simple their model is to use. Let’s return to that model and note the dependencies on $t$:

$\Delta T_t = \frac{r_t}{q_t}\frac{\lambda_0}{1-\lambda_0f_t}\Delta F_t.$

Of the five subscript $t$’s, three simply represent the time dependence of the stimulus or response, leaving the subscripts on the feedback coefficient $f_t$ and transience fraction $r_t$ to represent the time dependence of the model itself. That equation might initially suggest a rather complicated relationship: transience fraction depends not only on time but also on the feedback coefficient—which itself depends on time.

But Monckton et al.’s Table 2 suggests that the relationship is not quite as convoluted as all that: the transience fraction $r_t$ actually depends not on time-variant feedback $f_t$ but only on $f_\infty$, the value that the feedback coefficient reaches after all feedbacks have completely kicked in. One may therefore speculate that, although the transience-fraction function depends on the feedback’s ultimate value, that function was not intended to account for feedback time variation; one might speculate that the feedback time function $f_t$ serves that purpose.

But that would make the §4.8 discussion of the transience fraction $r_t$ puzzling, since it begins with the observation that “feedbacks act over varying timescales from decades to millennia” and goes on to explain that “the delay in the action of feedbacks and hence in surface temperature response to a given forcing is accounted for by the transience fraction $r_t$.” So Monckton et al. did not make it clear just where the feedback’s time variation should go. Also, separating the feedback’s final value from its time variation in the manner we just considered doesn’t work out mathematically, particularly in the early years of the stimulus step.

And that brings us to another problem. Note that the forcing used as the stimulus by the Roe paper from which Monckton et al. obtained their Table 2 $r_t$ values was a step function: the forcing took a single step to a new value at $t=0$ and then maintained that value. That’s the type of stimulus we have tacitly assumed in the discussion so far. But the CO2 forcing in real life has been more of a ramp than a step, so we would expect the $r_t$ function to differ from what we have considered previously.

In Fig. 11 the dotted curves represent step and ramp stimuli, while the solid curves represent a common system’s corresponding $r_t$ curves. Obviously, the $r_t$ values are lower for the ramp response than for the step response.

For all that is apparent, though, Monckton et al. failed to make this distinction. In their §7 and Table 4 they appear to use the step-response values of their Table 2 to model the response to a forcing that rose between 1850 and the present, and that forcing was not a step; it was more like a ramp. The Table 2 values could have been used properly, of course, by convolving them with the forcing’s time derivative, but nothing in the Monckton et al. paper suggested employing such an approach—which, in any event, does not lend itself well to pocket-calculator implementation.
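For the toy one-box model, the convolution approach is easy to demonstrate. The sketch below (with illustrative parameter values of my choosing) computes the ramp-response $r_t$ by convolving the step response with the ramp forcing’s constant time derivative, and it confirms that the ramp value falls below the step value:

```python
import math

lam0, f, cp = 0.32, 1.0, 10.0            # illustrative values only
tau = cp * lam0 / (1.0 - lam0 * f)       # one-box time constant
lam_inf = lam0 / (1.0 - lam0 * f)        # equilibrium sensitivity

def step_response(t):
    """One-box response to a unit step in forcing (K per W/m^2)."""
    return lam_inf * (1.0 - math.exp(-t / tau)) if t > 0 else 0.0

def ramp_response(t, dt=0.01):
    """Response to a unit-slope ramp forcing, by convolving the step
    response with the forcing's time derivative (constant 1 for a ramp)."""
    return sum(step_response(t - i * dt) * dt for i in range(int(t / dt)))

t = 20.0
r_step = step_response(t) / lam_inf        # step-based transience fraction
r_ramp = ramp_response(t) / (lam_inf * t)  # ramp-based transience fraction
print(f"r_t at t={t}: step {r_step:.2f}, ramp {r_ramp:.2f}")
```

At the chosen time the step-based $r_t$ is near unity while the ramp-based value is noticeably lower, which is the Fig. 11 behavior: applying step-response $r_t$ values to a ramp-like forcing overstates the realized fraction of warming.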

Moreover, it’s not clear how Monckton et al.’s §7 statement that “the 0.6 K committed but unrealized warming mentioned in AR4, AR5 is non-existent” was arrived at. That section refers to their Table 4, which shows that the values computed for the model result from multiplication by a transience fraction $r_t$, supposedly taken from Table 2. The $r_t$ values 0.7, 0.6, and 0.5 respectively given in Table 4 for $f$ values 1, 1.5, and 2.2 suggest that in fact the central estimate leaves $(1/0.6 - 1)\times 0.8 \approx 0.53$ K of warming yet to be realized.

So Monckton et al. have chosen a family of $r_t$ curves based on a model cited by Roe that for all they’ve explained is no better than the toy models of Figs. 3 and 8. Those curves apparently result from applying to that model a step stimulus rather than the more ramp-like stimulus that carbon-dioxide enrichment has caused. Although their discussion did refer to the fact that some feedbacks operate more slowly than others, they did not clearly tell where to incorporate the mix of feedback speeds to be assumed. And, as we just observed, it’s not clear that they properly used the transience-fraction curves they did choose in concluding that “the 0.6 K committed but unrealized warming mentioned in AR4, AR5 is non-existent.” In short, their selection and application of $r_t$ values are confusing.

That doesn’t mean that their model lacks utility. If one keeps in mind that various factors Monckton et al. do not discuss affect the $r_t$ curve, their model can afford insight into various effects that we laymen hear about. In particular, it can help us assess the plausibility of various claimed feedback levels. A particularly effective use of the model is set forth in their §8.1. If the authors’ representation of IPCC feedback estimates is correct, their model helps us laymen appreciate why the IPCC’s failure to reduce its equilibrium climate-sensitivity estimate requires explanation in the face of reduced feedback estimates. And note that §8.1 doesn’t depend on $r_t$ at all.

Despite the confusion caused by Monckton et al.’s discussion of $r_t$, therefore, Monckton et al. have provided a handy way for us laymen to perform sanity checks. And their model helps us understand their reservations regarding the plausibility of significantly positive feedback. It shows that, generally speaking, one would expect high positive feedback to cause relatively wild swings, whereas the earth’s temperature has remained within a narrow range for hundreds of thousands of years.

Unfortunately, they compromised their argument’s force with an unnecessary discussion of electronics that did more to raise questions than to persuade. Specifically, their paper says:

“In Fig. 5, a regime of temperature stability is represented by $g_\infty \le +0.1$, the maximum value allowed by process engineers designing electronic circuits intended not to oscillate under any operating conditions.”

Although that lore may make sense in some contexts, it’s quite arbitrary; parasitic reactances and other effects can result in unintended oscillation even in amplifiers designed to employ negative values of $g_\infty$. Even worse is the following:

“Also, in electronic circuits, the singularity at $g_\infty = 1$, where the voltage transits from the positive to the negative rail, has a physical meaning: in the climate, it has none.”

And Lord Monckton expanded upon that theme as follows:

“Thus, in a circuit, the output (the voltage) becomes negative at loop gains >1.”

Although one can no doubt conjure up a situation in which such a result would eventuate, it’s hardly the inevitable consequence of greater-than-unity loop gains. To see this, consider the circuit of Fig. 12.

The amplifier in that drawing generates an output $y$ that within the amplifier’s normal operating range equals the product of its open-loop gain $A$ and the difference between the signals received at its non-inverting (+) and inverting (-) input ports. In the illustrated circuit the non-inverting input port receives a positive fraction of the output $y$ from a resistive voltage-divider network so that, in the absence of the capacitor, the non-inverting port’s input would be $fy$. A negative voltage at the inverting port would result in a positive output voltage, which, because it is positively fed back, would tend to make the output even more positive than the open-loop value $-Ax$.

Now, that is not a particularly typical use of feedback. More typically feedback is designed to be negative because the open-loop gain $A$ undesirably depends on the input—i.e., the amplifier is nonlinear—yet for $-Af\gg1$ (and $A$ typically is much, much larger than the value 5 we use below for purposes of explanation), the closed-loop gain $A/(1-Af)\approx -1/f$: the relationship $y\approx -x/f$ is nearly linear despite the amplifier’s nonlinearity.

In principle, though, if $A$ is independent of input, there are no stray reactances to worry about, we are sure that the feedback coefficient $f$ will not change, the lark’s on the wing, etc., etc., then there is no reason why positive feedback cannot be used. If $A=5$ and $f=0.1$, for example, the loop gain $g\equiv Af$ would be +0.5, which would make the output $y=-Ax/(1-g)=-2Ax$: that feedback would double the gain.
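That arithmetic is easy to verify, using the text’s own illustrative values $A=5$, $f=0.1$:

```python
# Closed-loop gain y/x = -A/(1 - A*f) for the Fig. 12 circuit.
A, f = 5.0, 0.1
g = A * f                       # loop gain
closed_loop = -A / (1.0 - g)    # output per unit input
print(f"loop gain g = {g}, closed-loop gain = {closed_loop}")
```

With $g = 0.5$ the closed-loop gain comes out to $-2A$: positive feedback below unity loop gain simply doubles the gain, exactly as the text states.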

But in that example the loop gain $g$ is less than unity. What about $g>1$, which makes $A/(1-g)$ negative? I.e., what about the situation in which Monckton et al. tell us that “in a circuit, the output (the voltage) becomes negative”? Well, despite what they say, it doesn’t necessarily become negative.

To see that, let’s change the feedback coefficient $f$ to 0.4 and keep amplifier open-loop gain $A$ equal to 5 so that the loop gain $g\equiv Af=2$, i.e., so that the loop gain $g$ exceeds unity. And let’s make the inverting port’s input $x$ a time $t=0$ step from 0 volt to -0.1 volt. That value is inverted and amplified, the result appears as the output $y$, and an attenuated version $fy$ of that output appears at the non-inverting input port.

But propagation from output port to input port is not instantaneous, and, to enable us to observe what may happen during that propagation (and avoid tedious transmission-line math), we have exaggerated the inevitable time delay by placing a capacitor in the feedback circuit. (In a block diagram like those above, the legend on the feedback-circuit block would accordingly be $f/(1+s\tau)$).

As Fig. 13’s top plot shows, the output is initially (-0.1)(-5) = 0.5 volt and then rises exponentially as the feedback operates. If the amplifier had no limits, the output would grow without bound; despite what Monckton et al. say about $g>1$ in electronic circuits, the output would not go negative.

But we have assumed for Fig. 13 that the amplifier does have limits: its output is limited to less than 15 volts. Accordingly, the output goes no higher than 15 volts even though the signal at the non-inverting input port still increases for a time after the output $y$ reaches that limit. Still, the output does not go negative.

We could characterize that effect as the total loop gain’s decreasing to just under unity, as the middle plot illustrates, or as the small-signal loop gain’s falling abruptly to zero, which the bottom plot shows. (That is, no input change that doesn’t raise $x$ above +3 volts would result in any output change at all.) No matter how we characterize it, though, the $y = -\frac{A}{1-Af}x$ formula doesn’t apply in this case.

Why? Because it’s the solution to an equation $y=A(-x+fy)$ that says the output is equal to the product of (1) the amplifier gain $A$ and (2) the sum of the input $-x$ and a fraction $fy$ of the output. And in the $g=Af>1$ case that equation is never true: delay prevents $y$ from ever catching up to $A(-x+fy)$ until the amplifier gain $A$ has so decreased that the loop gain $g$ no longer exceeds unity.
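The whole saturation story can be reproduced with a few lines of Euler integration. The feedback time constant and step size below are illustrative choices of mine; the 15-volt rail matches the limit assumed for Fig. 13:

```python
# Simulation of the Fig. 12/13 circuit: A = 5, f = 0.4 (loop gain g = 2),
# a one-pole delay in the feedback path, and output saturation at +/-15 V.
A, f, x = 5.0, 0.4, -0.1   # open-loop gain, feedback fraction, input step (V)
tau, dt = 1.0, 0.001       # feedback time constant and Euler step (arbitrary)
v_limit = 15.0             # amplifier rail

v = 0.0                    # delayed feedback voltage at the (+) port
y_min = float("inf")
for _ in range(20000):     # simulate 20 feedback time constants
    y = max(-v_limit, min(v_limit, A * (v - x)))  # amplifier with rail limits
    v += dt * (f * y - v) / tau                   # one-pole feedback delay
    y_min = min(y_min, y)

print(f"final output: {y:.2f} V, minimum output ever: {y_min:.2f} V")
```

The output starts at 0.5 V, grows exponentially while the loop gain exceeds unity, and pins at the 15-volt rail; at no point does it go negative, contrary to what the quoted passage suggests.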

Now, none of this contradicts Monckton et al.’s main point. Increasingly positive loop gains $g=Af$ make a system more sensitive to variations in parameters such as open-loop gain $A$ and feedback coefficient $f$, so in light of the earth’s relatively narrow temperature range it’s unlikely that climate feedbacks are very positive—if they are positive at all. But the authors would have made their point more compellingly if they had avoided the circuit-theory discussion. And they would have made their model more accessible if their discussion of the transience fraction $r_t$ hadn’t raised so many questions.

## 168 thoughts on “Reflections on Monckton et al.'s 'Transience Fraction'”

1. Dr Deanster says:

Wow …. I’m having nightmares of my “circuits” class!! … it’s not a wonder that I just couldn’t stick with Engineering. …. BUT … great presentation.

• rh says:

No kidding! I had flashbacks to early days at U Lowell myself.
I’ve often thought of trying to model the temperature using Electronics Workbench. Input would be TSI, each ocean would represented by a low pass filter, and ocean cycles would be tank circuits, etc. My rational mind, however, tells me that the similarities between global temperature and electronic waveform analysis are superficial. I’m suffering from 30 years of troubleshooting electromagnetic interference issues and my brain is now hard-wired to think in those terms.

• johnmarshall says:

Exactly.
I have run simple climate models and making the CO2 input zero makes the model follow reality far better than any other. Kind of makes you think.

• Duster says:

Indeed. The key issue with any climate model is to have it model natural climate. If natural climate can be modeled, then the effects of human activity can be sorted out. Current models cannot reproduce the Pleistocene, let alone the changes across the last 550 MY. They cannot reproduce the lag between increased warming and increased CO2 levels; quite the opposite, they pretty much demand CO2 changes to explain everything else. The short of it is that regardless of how carefully reasoned a model is and how carefully programmed a simulation of the model may be, if that model cannot replicate natural climate at any time scale beyond meteorological models, then it is inadequate. There are missing factors, or factors added that should not be there, or factors to which inappropriate properties have been assigned.

2. Old woman of the north says:

Says it all, Max.

3. Bill Illis says:

…Lambda is a coefficient widely accepted to be approximately 0.32 K/W/m2 …
I don’t know why that is so accepted when it is not the value that the Stefan-Boltzmann (SB) equation predicts (and SB seems to be able to match up energy and temperature everywhere in the universe to nearly 100% accuracy).
For the Surface, SB predicts only 0.184 K/W/m2, and,
For the Tropopause, Earth emitting temperature, SB predicts only 0.265 K/W/m2.
Climate science takes so many shortcuts to keep its theory in the 3.0C per doubling range that they just make up numbers.

• Joe Born says:

I must confess that I have merely accepted that value as being relatively non-controversial; actually going through optical depths for all the different wavelengths is far beyond me, so you’re probably right that I was a little careless in making that assertion.

• DD More says:

The optical-density increase raises the effective altitude—and, lapse rate being what it is, reduces the effective temperature—from which the earth radiates into space, so less heat escapes, and the earth warms.
Just what level of atmosphere is being discussed here? Note the inputted energy sources being absorbed by the different layers and until the Thermosphere convection can take place. Has the averaged SB predictions been weighted to the differing temperatures?
http://geogrify.net/GEO1/Images/FOPG/0314.jpg

• Brandon Gates says:

DD More,

Has the averaged SB predictions been weighted to the differing temperatures?

The short answer is: yes, that’s exactly what (spectral) line-by-line radiative transfer codes were designed to do. The atmosphere — not to mention the entire climate system — defies closed-form analytic solutions because it isn’t homogeneous or iso-anything for pretty much any parameter you can think of, so numerical methods (aka – “the models”) are all but necessary.

• Mr Illis asks why the Planck parameter at the characteristic-emission level is 0.31 Kelvin per Watt per square meter, when the first differential of the SB equation at that level is 0.27 K/W/m2. The reason for the difference is that allowance must be made for the Hoelder inequality by integrating the individual Stefan-Boltzmann differentials latitude by latitude.
It is interesting, though, that Kevin Trenberth implicitly uses the SB equation at the Earth’s surface in his radiative-balance papers of 1997 and 2008. Strictly speaking, SB does not apply at the Earth’s surface, but only at the locus of all points at or above the surface at which incoming and outgoing radiation are equal.

4. This was covered in a quite different way (non circuit engineer) at a guest post at CE previously. Not in this detail, but with more climate implications. Go there for additional info.

• Joe Born says:

I commend Mr. Istvan’s piece to readers’ attention. It’s more accessible and boils down the basic feedback concept further, and for some readers reading his piece first would be helpful. But I think something remained to be said about r_t. And then there’s the circuit stuff.

5. Thanks, Joe Born.
You brought back a lot of electronic circuits theory that I thought I had totally forgotten.

6. Good post. As I mentioned over at CE not too long ago:
“There is also the bizarre assertion (via misinterpreting a paper from Gerald Roe) that if feedbacks are negative then the Earth system has no thermal inertia and thus transient and equilibrium sensitivity are the same. I’d argue that in its current form the paper should not have been published, and would not have been had it been subject to competent peer review.”
If you look at Roe’s equation 29, its pretty clear that the transience fraction is non-zero for any positive climate sensitivity. Monckton et al’s assumption that the transience fraction is zero when feedbacks are negative is as unjustified as their a-priori assertion of a negative feedback parameter, and certainly disagrees with what Roe says in his paper (which, by the way, is verified via a personal communication with Roe).

• Joe Born says:

I recall your comment and in fact thought that it merited expanding upon.
But by “non-zero” you mean non-unity?

• You are correct; I mean that it’s impossible for r_t to equal 1 if the sensitivity parameter is greater than zero.

• Mr Hausfather incorrectly states that Monckton et al. assume that the transience fraction is zero where temperature feedbacks are net-negative. We assume, correctly, that the transience fraction is unity where temperature feedbacks are zero, and that the transience fraction exceeds unity where feedbacks are net-negative, whereas it is <1 where feedbacks are net-positive.

• Joe Born says:

Monckton of Brenchley: “We assume, correctly, that the transience fraction is unity where temperature feedbacks are zero.”
A useful exercise in this context would be to plot on a common graph the product of each row of Monckton et al.’s Table 2 and the $\lambda_\infty$ value that the corresponding feedback value implies; i.e., to compare the step responses that their Table 2 represents. The result will help assess that statement’s accuracy.

7. My question – how feasible is it to break out a set of transience functions in an “irreducibly simple model”? How certain could you be of capturing all the different possible physical factors?
Given the claim that the irreducibly simple model does a better job of replicating climate than more complex models, I think there is a case for lumping transience into a single parameter, until or unless a more sensible breakdown of transience can be constructed in a model which isn’t irreducibly simple, and which does an even better job of reproducing observed climate.
Ability to replicate observation should be the primary objective – if complexity does not help improve replication, it is of no use.

• Joe Born says:

Not sure I quite understand the question, but the following may be relevant.
To the extent that the model is linear–and, despite what the authors say, it basically is linear once the conversion from concentration to radiation is performed–the product of the transience fraction (as inferred from the Roe paper) and $\lambda_\infty$ is what I’m told is referred to in some circles as the step response. The only reason I see to break the step response into portions is that seeing the loop gain separately is helpful when you’re trying to understand various features’ effects.
Despite the subscript t on $f_t$, the best interpretation is that f should be taken as independent of $t$; to make the math work otherwise requires a feedback-coefficient function that’s highly counter-intuitive.

”The optical-density increase raises the effective altitude—and, lapse rate being what it is, reduces the effective temperature—from which the earth radiates into space, so less heat escapes, and the earth warms.”

This false assumption is the real reason Monckton’s “warming but less than we thought” model fails, not feedback flaws. This is essentially the ERL argument, which simply boils down to a claim -”adding radiative gases to the atmosphere will reduce the atmosphere’s radiative cooling ability”. This claim is clearly ludicrous as the atmosphere would have no radiative cooling ability without these gases.
It becomes doubly ridiculous when the issue of lapse rate is considered. The lapse rate is not a given. It is produced by adiabatic expansion and contraction of air as it vertically circulates across the pressure gradient of the atmosphere. In the troposphere, this vertical circulation depends on radiative subsidence of air masses from altitude. Its speed is therefore dependent on radiative gas concentration. Without radiative subsidence, strong vertical circulation in the Hadley, Ferrel and Polar cells would stall, and the bulk of the atmosphere would superheat.
Warmulonians and lukewarmers dodge and weave between “back radiation slowing the cooling of the surface” to ERL and back again. But neither argument holds up.
Given that we know current surface average temperatures (288 K), the simple way to answer the question “What is the net effect of our radiatively cooled atmosphere on surface temperatures?” is to correctly answer “What would the average surface temperature of the planet be without a radiative atmosphere?”.
Because climastrologists got the second question wrong, they can never get the first right. Climastrologists assumed the oceans were a near “blackbody”, and used the Stefan-Boltzmann equation to determine 255 K for 240 W/m² of solar insolation. You can’t use the Stefan-Boltzmann equation on SW-translucent materials that are IR opaque and intermittently illuminated by solar SW! The oceans are an extreme SW selective surface. They would heat to a surface average around 335 K if not for cooling by our radiatively cooled atmosphere.
The climastrologists’ assumption of 255 K for “surface without radiative atmosphere” is utterly wrong, and so too is every single paper based on this flawed foundation. 312 K is a more accurate estimate. And given that current surface temperatures are lower, this tells you that the net effect of our radiatively cooled atmosphere is surface cooling.
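For context, the 255 K figure disputed in this comment is the straight Stefan-Boltzmann inversion of 240 W/m² of absorbed sunlight; the arithmetic itself is easy to check (the dispute above is over whether the equation applies to the oceans at all, not over the number):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(flux_w_m2):
    """Blackbody temperature for a given emitted flux: T = (F / sigma)^(1/4)."""
    return (flux_w_m2 / SIGMA) ** 0.25

t_eff = effective_temperature(240.0)   # ~255 K, the textbook no-atmosphere value
```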
More than anything now, it is the fear and embarrassment of lukewarmers keeping this sorry hoax alive. But trying, as Monckton does, for a Realpolitik “soft landing” can never work. The critical error in the “basic physics” of the “settled science” cannot be erased and no amount of “flappy hands” can change an extreme SW selective surface into a near Blackbody.

• David A says:

Konrad, “The oceans are an extreme SW selective surface. They would heat to a surface average around 335K if not for cooling by our radiatively cooled atmosphere.”
=====================================================
How do you arrive at that number? Are you assuming a non-radiatively cooled atmosphere of equal density?

• David A says:

Konrad says, “The oceans are an extreme SW selective surface. They would heat to a surface average around 335K if not for cooling by our radiatively cooled atmosphere.”
=================================
How did you reach that number?

David A
March 12, 2015 at 11:48 pm
//////////////////////////////////////////////
David,
you ask an intelligent question. First off, the reason you went into moderation and posted twice is because you typed my name. This is a lukewarmer site, therefore my name is treated as the mark of the devil 😉 Evil. Eeeeeevil!
You ask how the 335K number for oceans in the absence of radiatively cooled atmosphere is derived. The simple answer is “empirical experiment”.
I ran a number of these before finding out that the answers had been found by researchers at Texas A&M well before I was even born.
Ultimately an atmosphere without a radiative cooling ability cannot provide surface cooling. Climastrologists never considered this as they assumed that the atmosphere was slowing the cooling rate of the surface.
Try this simple experiment –
http://oi61.tinypic.com/or5rv9.jpg
– both target blocks have the same ability to absorb SW and emit LWIR. Both are opaque to LWIR. The only difference is depth of SW absorption. Illuminate both with 1000w/m2 of LWIR for 3 hours. Both rise to the same average temp. Now try with 1000w/m2 of SW. Now block A runs 20C hotter. Basic physics. Basic physics utterly missing from the “basic physics” of the “settled science”. If you use S-B equations on the oceans you are treating them as SW opaque. My claim that 97% of climastrologists are assclowns is solid.
Wanna try with liquids that convect? –
http://oi62.tinypic.com/zn7a4y.jpg
– ya not gonna win that way either 😉
So what did those researchers who beat me before I was born find –
http://oi62.tinypic.com/1ekg8o.jpg
– for evaporation-constrained fresh water solar ponds, layer 2 clear and layer 3 black works far, far better than layer 2 black.
What else did older researchers find? That the deeper evaporation constrained solar pond got, the closer surface Tmin got to surface Tmax. So that’s where the 335K ocean Tav comes from. Empirical experiment shows the sun can drive water to a surface Tmax of 353K or beyond, but due to diurnal cycle, surface Tav will be lower than that.
If in doubt, just remember the five rules governing solar heating of the oceans –
http://i59.tinypic.com/10pdqur.jpg
– the climastrologists didn’t, and that is why they, Willis, Monckton and Spencer failed.

• Willis Eschenbach says:

Konrad. March 13, 2015 at 2:11 am

– the climastrologists didn’t, and that is why they, Willis, Monckton and Spencer failed.

Konrad, I think you meant “that is why they, Willie [Soon], Monckton and Spencer failed”, as I don’t have a dog in this fight.
Regards,
w.

Willis,
I apologise for my lack of clarity. I was indeed referring to you, not the much maligned Dr. Soon.
You claim not to have a dog in this fight. I beg to differ. Remember 2011? Remember what you did at Talkshop? Quite a few real sceptics won’t quickly forget.
You argued that incident LWIR could slow the cooling of water free to evaporatively cool. This is part of the foundation dogma of the church of radiative climastrology. In 2011 I proved that claim false via empirical experiment.
While I appreciate much of your work on the cloud thermostat, the 2011 mistake prevented you from ever finding the correct answer.
If DWLWIR is not heating the oceans above 255K (or at least 273K) as empirical experiment showed it cannot, then some other factors must be responsible for an average of 240w/m2 solar insolation driving the oceans above 255K. I believed my 2011 results, and the results of others who replicated and went searching for those other factors. You did not. This is why you failed.
I found three significant factors. First, hemispherical LWIR emissivity (0.69) for liquid water was far lower than hemispherical SW absorptivity (0.9). Second I found you cannot use apparent emissivity readings for materials (unless very hot) when measured in the Hohlraum of the atmosphere as effective emissivity figures to determine radiative cooling ability. Third, and most important, I found that SB equations don’t work for SW translucent / IR opaque materials being illuminated by solar SW. Water is an extreme SW selective surface, not a near blackbody and it covers 71% of our planet’s surface.
Willis, you and many other lukewarmers are trying to settle for the “warming but less than we thought” soft landing. Given that warming due to anthropogenic CO2 emissions is a physical impossibility, this is a political dead end. Sceptics have to be right, not just “less wrong”. We have a situation where every single person who is a net negative force toward free market economies and democracy has dug themselves an impossibly deep hole. You are trying to throw them a life line, while I am backing up the JCB to fill the hole in.
You do have a dog in this fight. Problem is this man bites dogs. I only kiss wolves –
http://i57.tinypic.com/2r6p27l.jpg

• “But trying, as Monckton does, for a Realpolitik “soft landing” can never work. The critical error in the “basic physics” of the “settled science” cannot be erased and no amount of “flappy hands” can change an extreme SW selective surface into a near Blackbody…”
Good comment and I agree, but I must quibble with the above quote of the last part.
Since there is almost no “science” in modern “climate science” but a whole lot of politics, a “soft landing” might help to get us out of this blind alley that climatology is in right now. I agree that lukewarmers are wrong on parts of the science of the matter, but they are at least challenging the “scientific consensus” and saying that all views should be considered. If we move away from this crazy notion that CO2 is a magic molecule that can do darn near anything, then perhaps we can get back to looking at what the atmosphere really does.
I do agree with you that all the “flappy hands” in the world cannot save the current consensus.

• Mr Stoval quotes with approval a statement to the effect that the characteristic-emission level is not a near-blackbody. However, with respect to the long-wave radiation in the near infrared that is the object of study, it is a near-blackbody.

Monckton of Brenchley March 14, 2015 at 8:27 am
”Mr Stoval quotes with approval a statement to the effect that the characteristic-emission level is not a near-blackbody. However, with respect to the long-wave radiation in the near infrared that is the object of study, it is a near-blackbody.”
My text Mark quoted referred to the surface of the oceans, not the mathematical fiction of a “characteristic emission level”. While water may be opaque to LWIR, its surface cannot be considered even a near blackbody in this limited frequency range. First, empirical experiment shows incident LWIR can neither heat nor slow the cooling of water that is free to evaporatively cool. Second, the hemispherical emissivity of water in the LWIR is only around 0.67.
I understand the desire of lukewarmers to flee from discussion of the surface properties of the oceans with regard to absorption and emission of radiation, as this is where the critical error that invalidates the entire radiative GHE hypothesis lies. However, running to the old ERL or “characteristic emission level” game is no solution. Anyone with an IR thermometer can see for themselves that the 5 km claims are false.
Remember what Sir George Simpson of the royal meteorological society warned Callendar in 1939 –
“..but he would like to mention a few points which Mr. Callendar might wish to reconsider. In the first place he thought it was not sufficiently realised by non-meteorologists who came for the first time to help the Society in its study, that it was impossible to solve the problem of the temperature distribution in the atmosphere by working out the radiation. The atmosphere was not in a state of radiative equilibrium, and it also received heat by transfer from one part to another. In the second place, one had to remember that the temperature distribution in the atmosphere was determined almost entirely by the movement of the air up and down. This forced the atmosphere into a temperature distribution which was quite out of balance with the radiation. One could not, therefore, calculate the effect of changing any one factor in the atmosphere..”

• aGrimm says:

Konrad’s comment is solid. For those not familiar with radiative transfer theory, here is a simplified, easy-to-imagine version. Imagine a molecule absorbing energy from solar radiation. This extra energy puts the molecule in an excited state which it doesn’t “like”. There are a number of ways the molecule can get rid of the extra energy, such as direct transfer to another molecule or the emission of electromagnetic photon(s). The latter is described as radiative transfer, and usually the newly emitted photon is at specific wavelengths/frequencies which are likely to be absorbed by other molecules. Let’s say a CO2 molecule sheds its excess energy near ground level; this energy will likely be absorbed by other molecules – either in the air or in the earth (water, soil, etc.). Additionally, the photons can go in any direction – up, down, sideways. The conservation of energy continues in this fashion.
There are basically two ways the photon energy can be lost to space;
1) The photon is of a wavelength/frequency to which intervening molecules (air) are transparent. Molecules normally only absorb specific w/f’s in the electron shell, which is the bulk of an atom’s size. If the w/f is not specifically absorbed in the electron shell, then it will pass right on through. Any w/f can normally be absorbed by a nucleus, but there is less chance of this due to the nucleus’ relatively small size compared to the total size of the atom.
2) The energized molecule is close to outer space, with few intervening absorptive molecules between the photon and space. Additionally, the photon must be travelling towards space (not down towards earth). This is where Konrad’s comments are so important. An excited CO2 molecule WILL lose its excess energy eventually, sometimes right away, other times after a period of holding the energy. There is a good chance other molecules, e.g. water, will absorb the CO2’s excess energy. Now excited, all these atmospheric molecules will also release their excess energy in good time. Where that energy goes partially depends on where the molecule is at the time of the energy release. Near the Earth’s surface, the energy will more than likely be retained in the Earth’s environment. But if the excited molecule is transported to the upper atmosphere, as Konrad is saying, it could release its energy to space.
There are other ways photon energy can be used instead of becoming molecular excitation (which can translate to what we call heat). For example, photon absorption may cause the breaking of chemical bonds. This happens a lot in our atmosphere. Whether or not this is exothermic (heat-releasing) depends on the reaction. In short, there is no way we can calculate all the possible perturbations of radiative transfer in the atmosphere’s chaotic system. We can come up with fudge factors that take this into account, but from all the things I’ve seen on WUWT, those fudge factors are all over the spectrum.
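The altitude dependence described in point 2 can be caricatured with a toy Monte Carlo: a photon emitted at some altitude heads up or down at random, and an upward photon escapes with a Beer-Lambert probability over the remaining path. The absorption coefficient and column height below are made-up illustration values, not measured ones:

```python
import math
import random

def escape_probability(height_km, top_km=20.0, k=0.3, trials=10000, seed=1):
    """Fraction of emitted photons that reach space in a toy model.

    k is a hypothetical absorption coefficient per km, chosen only
    to make the altitude dependence visible.
    """
    rng = random.Random(seed)
    escapes = 0
    for _ in range(trials):
        if rng.random() < 0.5:  # emitted upward; downward photons are absorbed below
            if rng.random() < math.exp(-k * (top_km - height_km)):
                escapes += 1
    return escapes / trials

p_surface = escape_probability(1.0)    # near the surface: almost none escape
p_high = escape_probability(19.0)      # near the top: roughly half escape
```

This reproduces only the qualitative point of the comment – emission near the top of an absorbing column is far more likely to reach space than emission near the surface.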
I don’t know if we are in the beginning of our learning process or whether we are in the middle of it, but I can confidently predict that we are a very long ways from having a true understanding. To rely on any climate theory at this time is foolish hubris.

• Curious George says:

Thank you for a nice comment. Couple of remarks: Regarding CO2, it is almost never excited by solar radiation. It is much more likely to be excited by an infrared photon from the surface or the atmosphere. Your point 1 is essentially correct, except that nuclei don’t need to be mentioned at all (if you do, please provide a correct explanation). Your point 2 mixes two concepts: an emission of a photon (correct) and other ways to dispose of the excitation energy. Here you omit a basic mechanism: an excited molecule (in a rotational or vibrational state) may collide with another molecule, most likely a N2, and the energy will be converted to a kinetic energy – heat.

• beng1 says:

• Brandon Gates says:

beng1, I’m sure he does, the question is will he provide it. Not that it makes much difference: sunlight penetration in the oceans is much studied, a quick search should turn up multiple hits all saying what K-rad is, none of it mysterious, controversial, or being ignored by “climastrologers”. That isn’t where his argument breaks, but at the very least he’s got a sense of humor about it: “shredded turkey in Boltzmannic vinegar” gave me a case of giggleshits.

Beng,
the 1965 work I refer to is –
Harris, W. B., Davison, R. R., and Hood, D. W. (1965) ‘Design and operating characteristics of an experimental solar water heater’ Solar Energy, 9(4), pp. 193-196.
– sadly I do not have a link to a non-paywalled copy online.
However you will find it referenced in a few places including Ibrahim Alenezi’s 2012 thesis on salt gradient solar ponds where he writes –
“A group of researchers at Texas A&M University [20] tried to improve the SSP by using a completely black butyl rubber bag. However, the result was exactly the opposite of what they had tried to achieve: the temperature of the top surface of the bag was 30 °C hotter than the water directly underneath. So, the conclusion confirmed that the upper cover should be a transparent film.”
Harris et al. were experimenting with shallow freshwater solar ponds. Because evaporation-constrained freshwater solar ponds have no barrier to convection, they suffer from overnight surface cooling. The solutions were all too costly – insulated night covers, pumping to insulated night storage tanks, or making the ponds very deep (remember sunlight penetrates our oceans to 200 m). Because of the impracticalities of freshwater ponds, salt gradient with its convective constraints became the favoured technology. The physics pertaining to freshwater evaporation-constrained ponds however remains relevant to how the sun heats our deep transparent oceans. Ie: DWLWIR need not be invoked to keep the oceans from freezing.

Brandon Gates
March 13, 2015 at 2:20 pm
/////////////////////////////////////////////////

” a quick search should turn up multiple hits all saying what K-rad is, none of it mysterious, controversial, or being ignored by “climastrologers”.

A “quick search” was it, darling? So where are your results showing climastrologists actually had the brain to treat the oceans as an extreme SW selective surface? Nowhere, that’s where! All of their calculations are based on assuming the surface of our ocean planet is a “near blackbody”. This is what is so delightful, sweetheart. You and yours are trapped by the Internet. You fecukd up something severe, and you can never erase your shame.
Is it just the “happy few” sceptics who fought on St. Crispins day coming after you? Hell no! You and yours pissed the engineers off. Way stupid move. You could convince the activists, journalists and politicians of the Left. But white coats and no empirical evidence don’t work on engineers. Engineers outnumber all your people, but you thought it a good idea to piss up our backs and tell us it was raining. You’re gonna pay!
Tell engineers they are “holocaust deniers” for pointing out flaws in your failed hypothesis, and they don’t back down. They get pissed off. The eyes narrow and a wind of rage flips the desk calendar to weasel stomping day –

Brandon, sweetheart, pet, love, let me tell you what happens next. You won’t be up against just a few hundred thousand sceptics. You will be up against the general public. Billions of them. Billions of enraged citizens, baying for political blood. Did you foolishly think each of the warmulonian transgressions was lost in the dust of battle? Think again! Engineers are good at building dams, and we have recorded every single inanity the warmulonians have ever uttered. Will the dam burst and allow any of yours to escape in the confusion? Forget that.
Sceptics maintain the dam of anger, but we will give control of the small valve at the base to the general public. Turn that valve and you release an iron hard jet of rage that will power the turbines of vengeance.
(This comment is my homage to the recently late Terry Pratchett. I accept his choices, but still regret his passing.)

• Brandon Gates says:

So where are your results showing climastrologists actually had the brain to treat the oceans as an extreme SW selective surface?

Clayton and Simpson (1977), “Irradiance Measurements in the Upper Ocean”: http://journals.ametsoc.org/doi/abs/10.1175/1520-0485%281977%29007%3C0952:IMITUO%3E2.0.CO;2
Abstract
Observations were made of downward solar radiation as a function of depth during an experiment in the North Pacific (35°N, 155°W). The irradiance meter employed was sensitive to solar radiation of wavelength 400–1000 nm arriving from above at a horizontal surface. Because of selective absorption of the short and long wavelengths, the irradiance decreases much faster than exponential in the upper few meters, falling to one-third of the incident value between 2 and 3 m depth. Below 10 m the decrease was exponential at a rate characteristic of moderately clear water of Type IA.

Gasparovic and Tubbs (1975), “Influence of reference source properties on ocean heat flux determination with two-Wavelength radiometry”: http://onlinelibrary.wiley.com/doi/10.1029/JC080i018p02667/abstract
Abstract
Multiwavelength infrared radiometers used for determining the heat flux through the ocean surface are generally calibrated by using a near-ambient temperature reference radiation source. Typically these sources have spectral emissivities that are less than unity and wavelength dependent. Analysis of the error produced by using a reference source which only approximates an ideal blackbody indicates that significant errors in the heat flux determination can arise unless the emissive properties of the source are well-known and accounted for.

Abstract : Information is given on the part played by the sun, earth’s surface, and atmosphere in the heat balance of our planet. Following a general survey of solar and terrestrial radiation, including the emissivity and reflectivity of various terrestrial features (clouds, land masses, oceans), an estimate is made of the planetary radiation received by a satellite radiometer in five spectral channels covering the ultraviolet, visible, and infrared spectral regions. The wavelengths and purpose for selecting each of the channels is given. The signal-to-noise ratios associated with the received radiation in each channel using bolometers such as those in TIROS II and III, are included.
Saunders (1967): http://journals.ametsoc.org/doi/abs/10.1175/1520-0469%281967%29024%3C0269:TTATOA%3E2.0.CO;2
A simple theory is presented to account for the difference between the temperature at the ocean-air interface and that of the water at a depth of about one meter. Except in very light winds and intense solar radiation the mean temperature difference ΔT is expected to be of the form [of a bunch of symbols I don’t feel like representing in Unicode] where q is the sum of the sensible, latent, and long-wave radiative heat flux from ocean to atmosphere and τ/ρw is the kinematic stress. No data are available to test this prediction.
The influence of slicks and solar insolation on interface temperature is also briefly discussed.

Not in ’67 anyway, but it’s a heavily cited paper, including three this year already. Sort of a must-read, actually, because it’s an accessible yet comprehensive overview of all of the fluxes contributing to near-surface ocean temperatures.
That really should be enough. That emissivity isn’t constant over all wavelengths, any more than albedo is constant for all surfaces at all times, is pretty much common knowledge at this point … which may explain why it’s not explicitly discussed in more recent literature.

Nowhere, that’s where!

Idiocy or pathological lies. So hard to tell sometimes.

All of their calculations are based on assuming the surface of our ocean planet is a “near blackbody”.

No. Start with Lacis and Hansen (1974), “A Parameterization for the Absorption of Solar Radiation in the Earth’s Atmosphere” (open-access): http://journals.ametsoc.org/doi/abs/10.1175/1520-0469%281974%29031%3C0118:APFTAO%3E2.0.CO;2
Please point to the text which says, “we assume the entire surface is a ‘near blackbody'”.
Maybe you’re thinking about papers like this: Wetherald and Manabe (1975), “The Effects of Changing the Solar Constant on the Climate of a General Circulation Model”: http://journals.ametsoc.org/doi/abs/10.1175/1520-0469%281975%29032%3C2044:TEOCTS%3E2.0.CO;2
A study is conducted to evaluate the response of a simplified three-dimensional model climate to changes of the solar constant. The model explicitly computes the heat transport by large-scale atmospheric disturbances. It contains the following simplifications: a limited computational domain, an idealized topography, no heat transport by ocean currents, no seasonal variation, and fixed cloudiness.
And then it leads off with, “A very crude estimate of the sensitivity of the equilibrium temperature of the atmosphere to the change in solar constant may be obtained from the equation …”, a fancied-up version of S-B, which, when one assumes an average planetary albedo of 0.3 as textbooks often do, results in a 1% change in solar constant spitting out a 0.6 °C change in equilibrium temperature.
Just because one can find simplifications in literature does not mean that everyone, everywhere, all the time, is using the back of envelope approximations when they’ve got the accuracy requirements AND computational horsepower to do heavier lifting.
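The 0.6 °C figure follows from differentiating the S-B balance: since the effective temperature varies as the fourth root of the solar constant, dT ≈ (T/4)(dS/S). A one-line check using the textbook 255 K effective temperature:

```python
# T = [S(1 - albedo) / (4 sigma)]^(1/4)  implies  dT / T = (1/4) dS / S
T_eff = 255.0        # K, effective temperature for albedo 0.3 (textbook value)
dS_frac = 0.01       # a 1% change in the solar constant

dT = 0.25 * T_eff * dS_frac   # ~0.64 K, matching the ~0.6 C figure quoted above
```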

Brandon, sweetheart, pet, love, let me tell you what happens next.

You’re going to apologize for being a reality-impaired fruitloop and seek help? No, of course not. Smooches dear one, you were fun as always.

Brandon,
nice try but no cigar. Why didn’t you settle for Sweeney et al? It reads sooo much better. A tiny oceanographers voice in the wilderness calling to the climastrologists “Guys? Um err, guys?”
”Please point to the text which says, “we assume the entire surface is a ‘near blackbody’”. “
Bwahahahaha –
No Brandon, there is no way out. “Surface in absence of atmosphere at 255 K being raised by 33 K by addition of a radiative atmosphere” is just too widespread for you ever to erase it. This claim is in every basic Gorebull warbling text. The shame of the climastrologists will burn forever 😉
”You’re going to apologize for being a reality-impaired fruitloop and seek help”
Now why on earth would I do that Brandon? I was never so inane as to claim that adding radiative gases to the atmosphere would reduce its radiative cooling ability. You and yours own that one darl’ 😉

• Brandon Gates says:

Why didn’t you settle for Sweeney et al?

I don’t know that paper, sounds interesting. Hit me.

No Brandon, there is no way out. “Surface in absence of atmosphere at 255 K being raised by 33 K by addition of a radiative atmosphere” is just too widespread for you ever to erase it.

Moving the goalposts. Your claim was that climastrologists ignore the selective absorptivity/emissivity of the oceans. That claim is false.

This claim is in every basic Gorebull warbling text.

I’ve already stipulated that the problem is simplified in texts. A first-year physics student solves ballistic trajectory problems neglecting the planet’s rotation and atmosphere entirely. If they go on to design hardware like this …
http://en.wikipedia.org/wiki/File:M3-M4_gun_computer.jpg
… for the military, then they must draw on higher-level coursework and take air temperature, humidity, wind speed, drag coefficient, Coriolis and Magnus effects and a slew of other things into account. Every engineer was a first-year physics student at one point, doing vastly oversimplified calculations. Including YOU. Throw yourself under the bus, netl00n.

I was never so inane as to claim that adding radiative gases to the atmosphere would reduce its radiative cooling ability.

I really shouldn’t have to invoke Kirchhoff, but a radiative atmosphere is also an absorbing atmosphere.
http://photonics.intec.ugent.be/education/ivpv/res_handbook/v1ch44.pdf
It gets good right around the discussion of Beer-Lambert on p. 10. Twit. Hugs ‘n kisses.

Brandon Gates March 15, 2015 at 3:11 pm
”Moving the goalposts. Your claim was that climastrologists ignore the selective absorptivity/emissivity of the oceans. That claim is false.”
Oh please, Brandon, there is no way out. This is not about insignificant errors; this is about a 60 K error in the “basic physics” of the “settled science”. Try moving the goalposts all you like. Warmulonians are good at that. One of my speciality areas is thermite plasma FAE. Blast radius 5 km. 300 psi overpressure at 3 km. You can’t run fast enough 😉
”I’ve already stipulated that the problem is simplified in texts.”
Simplified was it, darl’? No, no “out” there. You and yours claimed that the net effect of our radiative atmosphere was surface warming, not surface cooling. This is black or white, right or wrong. There is no room for “nuance”.
You say I throw myself under a bus? Dream on. No one would even dare push me. They can’t crack the butterfly wing encryption!
Sure I’m a monster, but I’m the monster you deserve.

• Brandon Gates says:

This is not about insignificant errors, this is about a 60K error in the “basic physics” of the “settled science”.

Which you attempted to address by saying that the ocean’s extreme SW selectivity has been ignored. That is false. You mention Sweeney et al. by way of rebuttal, and failed to produce a citation when asked. Did you mean this paper? Sweeney et al. (2004), “Impacts of Shortwave Penetration Depth on Large-Scale Ocean Circulation and Heat Transport”: http://journals.ametsoc.org/doi/pdf/10.1175/JPO2740.1
Perhaps not, since it’s yet more evidence that climastrologists aren’t ignoring that which you say they are. So now you’re down to argument by assertion and indulging yourself with fantasies of rupturing my internals with thermobaric ordnances. Charming, my pet, quite charming.

You and yours claimed that the net effect of our radiative atmosphere was surface warming, not surface cooling.

Kirchhoff, Beer and Lambert — the arch-climastrologists.

This is black or white, right or wrong. There is no room for “nuance”.

The planet is not your lab bench, my silly sweet. True/false dichotomies are difficult to come by, but some things can still be tested in controlled conditions. And have been. Rubens and Aschkinass (1898), “Observations on the Absorption and Emission of Aqueous Vapor and Carbon Dioxide in the Infra-Red Spectrum”: http://adsabs.harvard.edu/abs/1898ApJ…..8..176R
The absorption band at 14.7μ is so sharp that it comes out distinctly in every energy curve in consequence of the carbon dioxide in the air of the room, while the absorption bands of water vapor cannot be observed this way under an average humidity.
The observations now communicated show that the Earth’s atmosphere must be wholly opaque for rays of wave-length 12μ to 20μ as well as for those of wave-length 24.4μ.

This kind of stuff has been done so often and is now so routine that it’s an undergrad-level lab exercise: http://www.d.umn.edu/~psiders/courses/chem4644/labinstructions/COIR.pdf
At the surface, CO2 absorbs so strongly in the 15 μm band that transmittance falls to practically ZERO for path lengths on the order of tens of meters. In English, that means that if our eyes were sensitive only to frequencies in the 14–16 μm region of the spectrum, at the surface of this planet the scatter from CO2 in this band would be akin to what a thick fog does for us in the real world at the frequencies our eyes actually do detect naturally. From space, for satellites which can “see” 15 μm radiation, the results are pretty striking:
https://wattsupwiththat.files.wordpress.com/2011/03/gw-petty-6-6.jpg
The only reason that weather birds can see anything at 14–16 μm is because that’s in a sweet spot for CO2 emissivity. That’s Kirchhoff. However, by Beer-Lambert — and via lab observation — we know the satellites can only be seeing that radiation from the upper layers of the atmosphere. This is corroborated by the radiance in those bands dipping down to almost 210 K, as would be predicted by the Stefan-Boltzmann relationship.
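The Beer-Lambert behaviour invoked here is easy to illustrate numerically; the absorption coefficient below is a hypothetical placeholder for a strong band centre, not a measured value for 15 μm CO2:

```python
import math

def transmittance(k_per_m, path_m):
    """Beer-Lambert law: fraction of radiation surviving a path of given length."""
    return math.exp(-k_per_m * path_m)

# With a hypothetical band-centre absorption coefficient of 0.5 per metre:
t_10m = transmittance(0.5, 10.0)   # under 1% survives 10 m
t_50m = transmittance(0.5, 50.0)   # effectively zero after 50 m
```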

• Konrad attributes to me a false assumption that is not made in Monckton et al., but it is made by Mr Born. Effective temperature does not change as a consequence of a change in the mean altitude of the characteristic-emission level.

Monckton of Brenchley
March 14, 2015 at 8:24 am
”Konrad attributes to me a false assumption that is not made in Monckton et al., but it is made by Mr Born. Effective temperature does not change as a consequence of a change in the mean altitude of the characteristic-emission level.”
Viscount Monckton,
I agree that Joe Born’s text I quoted is more a description of the old ERL argument rather than the “characteristic emission height” variant you use in your paper. The problem is that the same flaws in reasoning apply to both. It is just as Sir George Simpson said to Callendar in 1939 – “it was impossible to solve the problem of the temperature distribution in the atmosphere by working out the radiation.”
The 5 km you use is a mathematical assumption and not supported by empirical observation. Get an IR instrument and measure the sky. Clouds are the strongest emitters. Clouds emit most strongly at low altitude, and very strongly during formation due to the heat pulse during the release of latent heat. Trying to calculate a “characteristic emission height” from IR opacity maths games defies reason.
If empirical experiment is not your thing then see others observations of the net energy fluxes into and out of our gas atmosphere. Absorption of LWIR surface radiation accounts for less than 33% of all energy entering the atmosphere. Emission of LWIR to space however accounts for almost 100% energy leaving the atmosphere. So the net effect of radiative gases in our atmosphere both absorbing and emitting LWIR is atmospheric cooling.
Could such a radiative cooled atmosphere be warming the surface of the planet? No, as empirical experiment shows, the oceans would rise to 335K or beyond were it not for atmospheric cooling. A non-radiative atmosphere could not cool the oceans as it would have no effective way to cool itself.
There truly is no point to the “warming but less than we thought” games. Given there is no net radiative GHE on this planet, the Realpolitik approach is a dead end.

9. Gary Pearse says:

Nice presentation and critique of probably all models in the climate quiver. I have problems with the entire subject of these models. I have problems with time periods for forced temperatures extending several centuries. A cooling period of sufficient length, say one as long as the 20-yr warming period we are to be alarmed about, would presumably have multi-century-long feedbacks that would continue to cool. Indeed, such a thing as polar amplification becomes polar de-amplification, and all those rising curves become descending curves. Imagine sliding down into the LIA. Can we say that we still have feedbacks that are still descending from that? More. We had a Medieval Warm Period (I continue to use this venerable term) that would have sent more warming down the pipeline, warming that got lost in a cooling period. Climate optimum 7000 yrs ago, followed by cooling, followed by the Minoan Warm Period, followed by cooling, followed by the Roman WP, the Dark Ages Cool Period, the Medieval….. These things were all happening (allegedly) with CO2 below 300 ppmv, with possibly even greater amplitudes. These also are the wiggles that ride on the big oscillations of 100,000, 20,000 and 40,000 yrs, with amplitudes of ~4-5 C, that occur whether CO2 is 7000 ppm or 280 ppm, warm or cold. This is the natural variation that any anthropogenic forcing has to overcome, and it could come in handy if we could do so; let’s hope.
In summary, my problem with Monckton et al is that they may have a model that performs better than others, but there is little likelihood it approximates reality. This has been my nightmare: what if these people who want a new world order to subjugate the free had accidentally hit upon a model that, although it had nothing to do with reality, accidentally predicted future temperatures up to now? They would be shutting down fossil fuels, accidentally coinciding with the hiatus that’s upon us, and taking credit for saving the planet. Our willing governments would already be taxing and expropriating and impoverishing us to subsistence, if we were lucky, in preparation for the next dark ages. There is a plot for Hollywood. Good, I got that rant out of my system.

• Gary, guess what? They already have a general circulation model that is the stuff of your nightmares: the Quantity Theory of Money and its paraphernalia. Because of this model, we are repeating the worst mistakes of history. Few realize that the Western half of the Roman empire collapsed because currency debasement caused gold to go into hiding, and the Dark Ages followed in its wake. The Eastern half avoided currency debasement by keeping its mints open to gold coinage — the gold Bezant — and as a consequence, flourished for another thousand years. We are copying the West, not the East.
See: GOLD IN HOARDS versus GOLD ON THE GO: Splitting the Roman Empire into two halves
Also see: Wikipedia > Gold Bezant
As a friend of mine would playfully put it: Wake up Cinderella, your pumpkin is here!

• The primary reason for the dark ages had nothing to do with money; the reason civilization regressed was that the earth cooled when the Roman warm period ended. Cool is bad, warm is good. If only people understood this: when it’s cold, crops fail; when it’s warm, you can grow more crops. The same goes for increased CO2. I used to think the collective intelligence of the human race was positive; the global warming debate makes me think it is negative. If it is negative, God help us all.

• joelobryan says:

Mark,
Of course it had nothing to do with the Huns, Visigoths, empires expanding at others’ demise…

The Huns made their first appearance in what is now Eastern Europe around the year 370 AD. Thundering out of Asia’s Central Steppes, their arrival pushed the resident tribes such as the Vandals and Visigoths westward into a collision with the Roman Empire.

• Gold has no more intrinsic value than other currencies — only what we (or “the market”) give it. One form of price manipulation merely replaces another. But that doesn’t mean you can’t make a profit off the fluctuations of either commodity.

• emsnews says:

About the Huns: the invention of the stirrup came from Central Asia and suddenly a huge number of horse riding warriors appeared out of nowhere and they dominated warfare for the following 2,000 years.

• theBuckWheat says:

It should be clear by now that what is being “modeled” is far more ideological than scientific. I prove my theory by noting that the output of almost every single climate model is a call for government action, and they all converge on larger government, more regulation, higher taxation, subsidies of favored parties and far less personal freedom.
The contra-proof is what these advocates are not interested in knowing: there seems to be no appetite for knowing the ideal climate for our current biosphere and where our climate is in comparison to it.

• bob boder says:

10. masInt branch 4 C3I in is says:

As the preponderance of evidence grows with regard to the principle of reasonable doubt, the Arrhenius Hypothesis is a fail.

11. Jimmy Haigh. says:

I’ve always said that it isn’t worth even trying to model a chaotic system like the climate. The money and time could have been far better spent on something else.

• Amen. Trying to model something you don’t understand will always fail; trying to predict the future will always fail. Thinking a computer will make such models and/or predictions any better is the height of stupidity.

• Scott says:

Modelling on a false premise will always end up with the wrong answer, regardless of the model.
A computer will only get you to the wrong answer faster.

• Not entirely. I have recently been looking at the scale of variability of the climate over various periods. From this I can predict what will happen – or at least the upper and lower limits of what is typical given past behaviour. From this I can start to make some tentative statements about whether warming or cooling is most likely in the next few decades and how much that is likely to be on average.

• joelobryan says:

Quite agree. In the last 20 years, literally just in the USA, billions of US dollars have been spent on climate models, super computer time, meetings, salaries, more meetings, to achieve a politically desired output. Dr Collin’s specious claims in the story down thread on WUWT is vivid evidence of his deep seated desire to maintain the aCO2 AGW scam.

• The simple fact is that very simple models are no more skilful at predicting the climate than massive supercomputers. This is not welcome news to a whole group of people who earn their living through massively complex models and supercomputers.

• bob boder says:

All you need to know is that we are much closer to the top of the historical temperature range than we are to the bottom, and it is still just barely warm enough.

12. Bernie Hutchins says:

Joe –
Can you explain why you chose the particular circuit of Fig. 12? I take the amplifier A to be a finite-gain (A) differential-input stage with no frequency compensation (that is, not like a real op-amp). The assumed network with feedback is then just a negative real pole with a real zero slightly further back (more negative). It is quite trivial to analyze the transient (e.g., step) response and the steady-state (frequency) response. Did you look at these plots, and do they MEAN anything?

• Joe Born says:

I chose those simplified, idealized assumptions to make the circuit straightforward to understand; the only “complication” was the capacitor used to provide a delay that could be easily analyzed.
For the simplified, idealized values shown, the step response is as you see, but I would have said that the (real) zero is less, rather than more, negative. Be that as it may, the post was dry enough without venturing into poles and zeros.
I didn’t do a steady-state analysis, because it wasn’t germane. Moreover, I think Lord Monckton’s difficulty arises from his having spoken with engineers, who, in my experience, tend to speak about complex-number feedback values, i.e., steady-state-analysis values, and that can lead to false conclusions if you don’t really think through what those values mean. And the capacitor, which is all that results in a frequency dependence, was merely a way of using a delay whose math even a retired lawyer such as I could manage.
The only point of that circuit analysis was to show that greater-than-unity loop gain does not have the meaning that Lord Monckton seems to have attributed to it. When engineers have objected to that meaning, they haven’t broken the analysis down well enough for him to take their meaning. I probably haven’t, either, but I thought I’d make the attempt.

• Tsk Tsk says:

While I dislike Monckton’s circuit language, I think you’ve played some wordsmithing games as well. Monckton stated (correctly) that the output voltage of his feedback circuit will go negative. He’s correct because he’s increasing voltage (temperature), and the circuit both you and he use inverts the output. You chose to inject a negative signal and, surprise, it inverts just like it’s supposed to. Sure, it doesn’t always “go negative,” but I don’t think that was really his general point; he meant the specific signs for the specific problem at hand.
That said there was nothing technically wrong with any of your derivations and his use of feedback terminology was sloppy but correct with his assumptions.

• Joe Born says:

Tsk Tsk: “I think you’ve played some word smithing games as well.”
I don’t think so. In retrospect I would have been better advised not to have used a differential amplifier–i.e., not to have applied the input to an inverting port–but the effect would have been the same if I had made both ports non-inverting.
Let’s change the system so that both ports are non-inverting, and let’s make the input a positive step. Under less-than-unity loop gain, the output asymptotically increases to a finite positive multiple of the input so long as that doesn’t exceed amplifier limits, but in any case it doesn’t turn negative. Under greater-than-unity loop gain, on the other hand, the output increases without limit, until it reaches the amplifier’s limit, and, again, it doesn’t go negative.
Again, there is a sense in which the negative values in the lower right portion of Monckton et al.’s Fig. 5 do have meaning. But the meaning isn’t necessarily that the output “transits to the negative rail.” And to me it’s far from clear that its meaning distinguishes electric circuits from the climate as modeled by Monckton et al.
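[Ed. note: Joe Born's loop-gain claim can be checked with a toy first-order simulation. The gain, feedback, rail, and time-constant values below are invented for illustration: with a positive step into a non-inverting configuration, sub-unity loop gain settles at a finite positive value, super-unity loop gain runs to the positive rail, and neither ever goes negative.]

```python
def step_response(A=10.0, f=0.05, u=1.0, rail=100.0, tau=1.0, dt=0.01, t_end=20.0):
    """First-order lag model of a non-inverting amplifier with feedback.
    dv/dt = (A*(u + f*v) - v)/tau, with the output clipped at the supply rails."""
    v, out = 0.0, []
    for _ in range(int(t_end / dt)):
        v += dt / tau * (A * (u + f * v) - v)
        v = max(min(v, rail), -rail)     # amplifier saturates at the rails
        out.append(v)
    return out

stable = step_response(f=0.05)    # loop gain A*f = 0.5 < 1
runaway = step_response(f=0.15)   # loop gain A*f = 1.5 > 1
print(f"loop gain 0.5: settles near {stable[-1]:.2f}")   # closed form A*u/(1-A*f) = 20
print(f"loop gain 1.5: pinned at   {runaway[-1]:.2f}")   # the positive rail, not negative
```

The design choice here is deliberate: because both ports are non-inverting and the step is positive, the only difference between the two runs is the loop gain, which isolates exactly the point at issue.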

• Joe Born says:

Oops! You were right about the zero being more negative.

13. Mac the Knife says:

Thanks Joe!
Your treatise reminded my why I chose metallurgical engineering over electrical engineering!
Mac

14. Complex equations based on false assumptions will still give wrong answers. Eliminate the “back radiation”, “radiative forcing”, “feedback loops”, etc. before attempting to model the climate. Until climatologists do, they will continue to get correct answers for their sums but be constantly surprised when reality gives different answers.

15. angech2014 says:

Thanks Anthony.
A good laugh is always worth putting up.
Mr Born’s impressive amount of writing on nothing is very informative. As have been his comments at Lucia’s and Curry’s.
I am glad you read it and understood it well enough to put it up as a serious article.
He has an issue with Monckton, tough luck.
He has an issue with a lot of things, ditto.
Monckton et al put up a simple example of why global warming is over hyped.
I get that.
Someone comes along and nit picks that he has found a flaw in the example.
Spends 5 pages talking about other things then raises the obscure and irrelevant focus of his ire.
which has nothing to do with a simple example.
We are treated to pages of impressive-sounding but unexplained gobbledygook and then told that in some rare cases that only Joe can find, the example might not hold.
Zeke Hausfather said
” its pretty clear that the transience fraction is non-zero for any positive climate sensitivity. Monckton et al’s assumption that the transience fraction is zero when feedbacks are negative is as unjustified as their a-priori assertion of a negative feedback parameter,”
My only comment is that “pretty clear” is funny terminology for “bleeding obvious”, but it does not mean he is right re Monckton’s assumption of the transience fraction being zero when feedbacks are negative.
What Monckton meant is that if negative feedbacks are large enough, the transience fraction is indistinguishable from zero, i.e. practically zero.
It is obvious that Joe and Zeke do not know what the word “practical” means.
It means common sense.

“Monckton et al put up a simple example of why global warming is over hyped.
I get that.”
No, what Viscount Monckton put up was a pathetic variant of the ERL argument. He is of course wrong. AGW due to CO2 is a physical impossibility. I mean seriously? He is arguing that adding radiative gases to the atmosphere will reduce our radiatively cooled atmosphere’s ability to cool the surface of our solar heated oceans. Utter drivel!
Monckton is a mathematician. He has a 2D brain. There is no way he can solve for “x” if the answer requires 3D CFD. To set up a CFD run you need a 3D brain. You need an engineer.
Global warming was in effect a global IQ test, with results permanently recorded on the Internet. For all his help to the sceptic community, Monckton still failed. Sadly the reason was ego. He bought the “1.2C warming for a doubling of CO2” thing, and refused to back down.
At this point, Viscount Monckton has become a liability to sceptics, not an asset. The same goes for Willis. When empirical experiment proved both wrong, they both ran to gatekeeping to protect their own hides. They both committed the same sin as the climastrologists.
Don’t get me wrong, I have paid to see Viscount Monckton talk when he was on tour several times. I have admired his efforts. But his “warming, but less than we thought” offerings, especially after he has been shown the right answer, disgust me. In the end he acted just like the warmulonians, and the reason is inexcusable – Ego.

• Michael Spurrier says:

I think you’ve nailed it there……… a lot of the argument has become them against us. Whatever his intentions, Monckton is a bit of a pantomime act – if you wanted to be taken seriously, you wouldn’t want to wheel him out on your side.

• Yah-boo has no place in this thread.

Viscount Monckton,
“Yah-Boo” would imply that I have a knee jerk response to deride or disagree with everything you assert. This is not the case. Up thread you state – “Strictly speaking, SB does not apply at the Earth’s surface.”
I totally agree with this statement, and I have posted the empirical experiments on this thread that prove it wholly correct. I am having a jab at you (mild in comparison to the previous thread on your model) because you are ignoring the implications of that statement.
When it comes to determining the effect of our radiative atmosphere on surface temperatures, understanding surface properties is the KEY. After all, the surface is the primary point of solar energy input into the land/ocean/atmospheric system.
In comparison, establishing a mathematical fiction of a “characteristic emission height” that defies empirical observation and playing SB games up there is nonsensical. I looked at your model. Where is increasing radiative subsidence with increasing radiative gas concentration? Nowhere!
You were right to want simple, but chose the wrong altitude. 0.0km is the right altitude, just follow the three simple steps –
1. Determine current average surface temperature.
2. Calculate average surface temperature in absence of all atmospheric properties except 1 bar pressure. (sure SB doesn’t apply. Just use experiment or CFD)
3. From the differential calculate the net effect of our radiatively cooled atmosphere on surface temps.
How easy was that?!
1. 288K
2. 312K
3. Cooling!
If folks are having a jab at you Viscount Monckton, it may be because they want you back on the sceptic side. The Lukewarmer thing is a dead end.

• Matthew R Marler says:

angech2014: What Monckton meant is that if negative feedbacks are large enough the transience fraction is indistinguishable from zero, ie practically zero.
That’s the way I read it as well.

• Sigh. If feedbacks are net-negative then the transient response will be greater than the equilibrium response, so the transience fraction will exceed unity.

16. tetris says:

Reality check: The GCMs have been wrong for a long time and continue to be wrong. Why they are wrong is of course interesting, but the key thing is that they are wrong. They cannot hindcast, cannot forecast, and cannot explain the growing discrepancy between them and the empirical data. Dangerously useless.

17. Richvs says:

Although the use of electrical circuits to illustrate feedback & transfer functions is very useful in developing simplified models that are readily solvable, I tend to look at the overall scope of the problem (or system) to get an idea of the type, quantity and magnitude of the components involved. It becomes rather obvious that the system does not and cannot boil down to a one-component analysis, i.e., CO2 or to a whole family of GHGs. From a historical perspective we observe and understand that the temperature range of the earth & atmosphere has been relatively constant for vast periods of time. This in itself is indicative of extremely complex, large-capacity systems with long time constants and with an effective overall gain of less than 1. For anyone with a Physics, EE or ChE degree it doesn’t take long to figure out that the overall climate system – both the transient response & the pseudo-equilibrium case, assuming a theoretical state – is dominated by large capacitance & time constants. Developing climate models is fun, but the complexity, quantity & quality of data input, sampling size & frequency, and computing power will delay serious analysis by decades…. unless we rely on extremely simplified models that we can fudge and retrain for every analysis or reanalysis cycle. Cheers

18. K.C. says:

SCADA systems generally get their information from, and control industrial processes via – amongst other things – PLCs: Programmable Logic Controllers.
A PLC can include many different modules – one of which is the PID module.
“A proportional-integral-derivative controller (PID controller) is a control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller calculates an error value as the difference between a measured process variable and a desired setpoint. The controller attempts to minimize the error by adjusting the process through use of a manipulated variable.”
I’ve been thinking throughout the years, that a climatic system could be represented by numerous types of PID loops, whereby the result of one system – say temperature – affects the input of another PID loop – say humidity.
Rather than trying to represent climatic systems via electronic circuit analogy, would it not be more appropriate/accurate to represent them as some kind of multiple PID loops?
A good introduction to PID loops is here : http://www.csimn.com/CSI_pages/PIDforDummies.html
Wikipedia article on PID controllers is here : http://en.wikipedia.org/wiki/PID_controller
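[Ed. note: the textbook PID loop K.C. quotes can be sketched in a few lines. The gains, setpoint, and toy first-order "temperature" plant below are invented for illustration; nothing here models a real climatic process.]

```python
class PID:
    """Discrete PID controller in the standard textbook form quoted above."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy plant: a lagging "temperature" nudged by the controller output,
# standing in for one loop of the coupled system K.C. describes.
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=288.0)
temp, dt = 280.0, 0.1
for _ in range(500):
    temp += dt * (pid.update(temp, dt) - 0.1 * (temp - 280.0))
print(f"settled near {temp:.1f} K")
```

Coupling several such loops, with one loop's output disturbing another's plant, would be the multi-loop representation K.C. suggests.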

• I started with a theoretical background in PID. I ended knowing that it was pointless to overthink control systems as this article does, and you just have to live with simple.

• Joe Born says:

I personally agree that “you just have to live with simple.” But the authors’ model is simple, and even it is fraught with latent ambiguities. There are good points to be made with simple, and the authors are well positioned to make them. The purpose of the head post is to help them tighten up their game.
That said, I think that little about, say, equilibrium climate sensitivity can really be concluded from comparing the model’s output with observations. Even if you assume that the climate system is the simple two-pole linear system of Fig. 8 above, it can be virtually impossible (depending on that system’s b and tau parameters) to distinguish the early-term response of an ECS = 2 system from that of an ECS = 12 system.
So, yes, much of this is indeed academic.
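[Ed. note: Joe Born's point about early indistinguishability can be sketched numerically. The b, tau, and ECS parameters below are assumed for illustration only and are not fitted to anything; they show how a small fast-equilibrating fraction can mask a very different equilibrium sensitivity.]

```python
import math

def two_box_step(ecs, b, tau_fast, tau_slow, t):
    """Step response of a two-exponential ("two-pole") linear climate model:
    fraction b equilibrates with time constant tau_fast, the rest with tau_slow."""
    return ecs * (b * (1 - math.exp(-t / tau_fast))
                  + (1 - b) * (1 - math.exp(-t / tau_slow)))

# Two hypothetical systems whose early responses nearly coincide despite a
# sixfold difference in equilibrium sensitivity (all numbers illustrative).
for t in (10, 30, 50):
    low = two_box_step(ecs=2.0, b=0.90, tau_fast=4.0, tau_slow=300.0, t=t)
    high = two_box_step(ecs=12.0, b=0.15, tau_fast=4.0, tau_slow=3000.0, t=t)
    print(f"t={t:>2} yr: ECS=2 -> {low:.2f} K, ECS=12 -> {high:.2f} K")
```

At t = 50 the two responses differ by only about 0.1 K, well within observational noise, while their equilibrium values differ by a factor of six; that is the sense in which early data cannot pin down ECS.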

• K.C. says:

Scottish Sceptic,
Completely agree with you on the keeping it simple thing. My eyes glaze over when I see so many Complicated Things as in the above article.

• Duster says:

You cannot “live with simple” and talk about climate. It would be nice, but the reality is not simple. Science’s purpose is to simplify explanations of reality down to useful levels. But “useful” is a matter of context. If we understand that n elements go into a natural process but we believe that only a very few of those n elements (n-y) are strongly influential on the process, then the simplest models use all those presumed n-y influential elements. If the model drifts from the measurements of the natural process over time, then you have to begin fiddling with sets of the lesser elements until a number of models track the natural process.
The entire point of the ensemble mean is to measure the theoretical understanding of the process. The collection of model outputs, all based on approximately the same theory, should form a rough normal distribution around the empirical measurements, IF the theory is reasonably useful. If they don’t, it isn’t because “the models don’t work.” It is because the theoretical understanding used to create the models is weaker than was believed. This problem is not limited to climate problems. It is endemic in all of science.

19. Leo Smith says:

$\Delta T = \frac{\lambda_0}{(1-\lambda_0f)}\Delta F.$
Well, that’s the point, innit? That equation is a supposition, not an established fact.
And worse, it is at best only a partial solution to anything. It ignores all the other possible reasons for delta T.
Assuming X to subsequently prove X is not science; it’s just playing with concepts.

20. It’s rather academic talking about whether or not a model is right for various positive feedbacks, when there’s no way on earth during an interglacial that we will have overall positive feedbacks.
Because there couldn’t be any such thing as an “interglacial” with temperatures in a close range unless we have negative feedbacks preventing further warming.
And the latest paper showing uniform temperature between the northern and southern hemispheres more or less proves that clouds provide this feedback. What on earth is the point of such pedantic arguments about a regime of positive feedbacks that anyone with any real experience knows cannot possibly be the current state of the earth?

21. Summary of the article:
1. In the author’s own words: “none of this contradicts Monckton et al.’s main point.”
2. The author didn’t like a phrase where the article said the output would go negative and didn’t explicitly cover the other option, which is that it could also head off to +ve infinity (yawn).
3. He doesn’t like models that look like circuits. (well hard cheese!)
4. He doesn’t like (but can’t show any actual problem with) using a very simple parameter for delays in the system – which clearly works, as the model is more skilful than complex models. (Well, doesn’t that tell us the wrong kind of people are producing the current models?)
In other words, the author wants a more complex model not because it is any better at modelling the climate but because he wants a more complex model.

• mobihci says:

yes, the positive infinity is the key problem with there being positive feedbacks at all. consider the amount of time this planet has spent with high co2 levels (much higher than now) in periods with little difference in the positions of continents and the sun’s output, and it never went off into infinity. to do this would require either a time lag of millions of years or just that there is ALWAYS a negative feedback, even in times of warming.
the thing is, it must be this way: the earth is always cooling; heating from the sun’s radiation is the variable being introduced, not cooling.

• Mike M. says:

Scottish Sceptic wrote: “it’s rather academic talking about whether or whether not a model is right for various positive feedbacks, when there’s no way on earth during an inter-glacial that we will have overall positive feedbacks.” and mobihci wrote “there is ALWAYS a negative feedback, even in times of warming.”
Climate modellers all recognize the truth of these statements, and all the climate models are dominated by a negative feedback called the “Planck feedback”. The confusion comes from people who use an electric circuit to model something that is not an electric circuit (this is not the way climate models are constructed). In the terminology used in the article, as long as lambda_0*f < 1 there is net negative feedback in the sense used by Scottish Sceptic and mobihci. The paradox is a result of using delta_T as an input and forcing as a parameter when in fact forcing is the input and delta_T is an output. The result is very confusing.
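[Ed. note: Mike M.'s point that forcing is the input can be illustrated by iterating the head post's loop directly. The numbers are illustrative: lambda_0 is taken as a round Planck-response value and the feedback f is assumed, not derived. The iteration converges to the closed-form equilibrium exactly when lambda_0*f < 1.]

```python
def feedback_equilibrium(delta_f, lam0, f, n_iter=200):
    """Iterate dT = lam0*(dF + f*dT), the loop of the head post's Fig. 1.
    Converges to lam0/(1 - lam0*f)*dF whenever |lam0*f| < 1."""
    dT = 0.0
    for _ in range(n_iter):
        dT = lam0 * (delta_f + f * dT)
    return dT

# Illustrative numbers only: lam0 = 0.31 K per W/m^2 (a round Planck-response
# figure), dF = 3.7 W/m^2 (roughly a CO2 doubling), f assumed in each case.
lam0, dF = 0.31, 3.7
for f in (-1.0, 0.0, 1.0):   # net-negative, zero, and net-positive feedback
    closed = lam0 / (1 - lam0 * f) * dF
    print(f"f={f:+.1f}: iterated {feedback_equilibrium(dF, lam0, f):.2f} K,"
          f" closed form {closed:.2f} K")
```

Note that even the f = +1.0 case converges, because lambda_0*f = 0.31 < 1; "positive feedback" in this terminology is still net-negative in the sense Scottish Sceptic and mobihci mean.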

• mobihci says:

the climate models do not realise the problem, it seems. time is removed as a variable of the output by the fact that there has been no runaway warming in the past; that is, if co2 is considered a forcing, it must be considered a parameter. ie there is a reaction time to each and every forcing, including co2.
of course it would be impossible to actually consider all forcings and the variables surrounding those forcings, but it can be said that the system has a net negative feedback with an external forcing that varies over time. ie the internal system’s feedback can be parameterised.
climate models may be only dealing with short periods of time, but this does not make them immune from comparison. they must be able to carry out eg a glacial-interglacial cycle correctly FIRST before considering the much more difficult, finer and more detailed 30 year period. do i believe they will recreate a full cycle without training/flat co2 numbers etc? hell NO. if they claim that co2 can possibly cause 6 times more warming than the initial forcing (realclimate’s claim), then they are deluded. for this to be the case, co2 must CAUSE positive feedbacks, ie this feedback loop goes to infinity every interglacial (in higher co2 times). of course you could say that the feedback breaks at some tipping point, eg the ice caps melting or whatever, but then that also means that it becomes a parameter.

• Matthew R Marler says:

Scottish Sceptic: Summary of the article:
I think you got it about right.

22. angech2014 says:

Joe Born March 13, 2015 at 3:50 am
“I personally agree that “you just have to live with simple.”
Oh the irony.
Thanks Scottish Skeptic for putting the summary so much better than I.

23. David Norman says:

I’m not sure why, but every time I read the phrase “irreducibly simple model”, I’m compelled to smile and then grab a few pints, some peanuts, and wrap a bath towel around my neck… go figure. Could that possibly be wrong?

• Brandon Gates says:

It’s a very non-Briggsian thing to say. I almost choked when I read the title and saw his name on the paper.

24. A Zeeman says:

A very simple model of the greenhouse effect is shown by heating a styrofoam cup of water in a microwave. By making the cup walls thicker and thicker, eventually even a small amount of microwave energy will cause the water in the cup to spontaneously get hotter and hotter even without additional energy input.
Exactly the same mechanism is at work, the frequency of the long wave microwaves is changed into short wave infrared. The only difference is that for the greenhouse effect shortwave visible light is changed into longwave infrared. The effect is the same, spontaneous generation of heat once the insulation gets thick enough.
Of course, this requires suspension of belief in the laws of thermodynamics.

• mkelly says:

Q = Cp * m * ΔT
A Zeeman says: “…eventually even a small amount of microwave energy will cause the water in the cup to spontaneously get hotter and hotter even without additional energy input.”
The equation tells me your statement is not true, unless I am exchanging thicker insulation for water mass.
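[Ed. note: mkelly's equation can be made concrete with a minimal worked example; the cup size and temperature rise below are illustrative. The point is that the temperature change is fixed by the energy actually delivered and the water mass, not by the wall thickness.]

```python
# Q = Cp * m * dT for liquid water; cup size and temperature rise are illustrative.
cp_water = 4186.0   # J/(kg*K), specific heat of liquid water
mass = 0.25         # kg, roughly a cup of water
delta_t = 10.0      # K, the temperature rise in question
q = cp_water * mass * delta_t
print(f"Energy required: {q:.0f} J")

# Thicker insulation reduces heat LOSS; it does not change this requirement.
# With zero energy input, dT = Q/(Cp*m) is zero no matter how thick the walls.
```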

25. Crispin in Waterloo says:

“The optical-density increase raises the effective altitude—and, lapse rate being what it is, reduces the effective temperature—from which the earth radiates into space, so less heat escapes, and the earth warms.”
I look and I look and I have self-doubts but this just doesn’t make sense to me. The effective altitude of radiation is NOT set by the lapse rate. It is set by the ability to expel heat energy. As soon as that happens, whatever the altitude is, that is what it is. There is a missing ingredient in the explanation which is that while the altitude of effective emission has increased and the temperature at that altitude has decreased, there is MORE CO2 doing the radiating and the ability to radiate is permanently increased.
That this will self-stabilise is obvious. As soon as the more efficient radiator can dump the heat it will. The system is not depending on a less efficient radiator at a lower temperature, or a same-efficiency radiator, it is a more efficient radiator.
Combined with the thunderstorm transport system starting earlier in the day, the bypassing of the major portion of the atmosphere is easily and continuously accomplished.
Until the point is reached when thunderstorms would have to start before sunrise, to transport ‘additional heat’, we are going to see, on average, a remarkably stable temperature regime. Looking at the old behaviour that is going to be above 7000 ppm(v).

• ECB says:

“Combined with the thunderstorm transport system starting earlier in the day, the bypassing of the major portion of the atmosphere is easily and continuously accomplished”
Nice model, and it makes the most sense to me. Feedback theory does not capture the chaotic feedback of this vertical heat transport.

• Joe Born says:

“The effective altitude of radiation is NOT set by the lapse rate. It is set by the ability to expel heat energy.”
Well, I’m not a scientist; I’m just reporting what I think they think they’re modeling. But I believe you’re right that the lapse rate doesn’t set the effective radiation altitude. It’s the optical density; the lapse rate is what sets the temperature at the resultant altitude.
On the other hand, while it’s true that more carbon dioxide provides more radiators, it also provides more absorbers between those radiators and space. So the explanation I gave (well, regurgitated) seems plausible to me.
As to the ultimate result, I personally tend to agree with you that, encountering more resistance in the radiative path, more heat will be squeezed into the evaporative-convective path; latent heat will tunnel through the added resistance to become sensible heat where the optical path to space is shorter.
But, as I say, I’m no scientist; I’m just pointing out the logical issues that arise from the way Monckton et al. use their model. That is, if you compare a step-stimulus-derived output with ramp-stimulus-produced observations, the equilibrium value you infer is likely to be too low.

• angech2014 says:

Joe Born March 13, 2015 at 6:53 am
“The effective altitude of radiation is NOT set by the lapse rate. It is set by the ability to expel heat energy.”
“Well, I’m not a scientist… But, as I say, I’m no scientist.”
say that again, more loudly? I cannot hear you.

• David A says:

Perhaps the relationship between radiation, conduction, and convection is not properly accounted for, and their co-dependence is not properly modeled. Conduction and convection may lag radiative effects, but may also in time neutralize them.

• Steven Mosher says:

“The optical-density increase raises the effective altitude—and, lapse rate being what it is, reduces the effective temperature—from which the earth radiates into space, so less heat escapes, and the earth warms.”
I look and I look and I have self-doubts, but this just doesn’t make sense to me. The effective altitude of radiation is NOT set by the lapse rate. It is set by the ability to expel heat energy.
The ERL is set by the OPACITY of the atmosphere above it. Once the opacity reaches a low enough level, the system can radiate freely to space. This opacity is fixed.
When you raise the amount of CO2, the height at which this opacity is reached is raised.

• Bob Boder says:

How much has the “height” risen since 1950, Steven?

• Curious George says:

Can we actually measure that height? Or is it just a mathematical construct?

• Crispin in Waterloo says:

Steven – yeah, we know that already, and he says it above, but he was missing the part about there being more radiators. The difference in implication is simple enough: if the surface were heated by some other means, the altitude would rise ‘accordingly’. If the cause of the increase is an increase in the mechanism by which heat is lost, then the rise in altitude is reduced because the atmosphere sheds heat more effectively. It goes up, but not nearly as much.
Any discussion of this model that does not mention transport of large amounts of heat vertically by thunderstorms is a diversion from the main task. Those thunderstorms have a concomitant cooling set of clouds to go with them.
So far it does not appear that the CO2 can get close to overpowering the negative feedback of the storms. Thermals beat thermalisation every time.

• angech2014 says:

Good on you Steve.
Now explain to him how the amount of energy going out into space remains the same as before the CO2 increased.
Sure it is going out at a higher level with more CO2 in the air, and the air is warmer but there is no extra energy in the system because it all has to go back out, doesn’t it?
The air can be warmer but only at the expense of other layers being cooler.

Steven Mosher, March 13, 2015 at 2:00 pm:
Steven, your argument remains as inane as it ever was. You are essentially arguing that adding radiative gases to the atmosphere will reduce its radiative cooling ability.
It’s pointless arguing that if the IR opacity were 100% the surface average temperature would be 255 K. Empirical experiment proves this inane claim utterly false.
There’s no way out, Steven. Every activist, journalist or politician who ever sought to promote or profit by this sorry hoax gets their public face, metaphorically speaking, punched to custard. You’re not up against sceptics anymore. The general public wants names. Sceptics know all the names….

26. When deriving a model whose purported purpose is to show “why the [CMIP] models run hot”, it seems perfectly reasonable to assume that feedback effects whose time constants are large with respect to the time frame under consideration (a few decades) are constants folded into f. Also, I’m not sure why the author focused so much attention on the circuit analogy, but his comparison seems apples-to-oranges. Lord Monckton confined his analogy to the region of stability with real (i.e. non-reactive) feedback, while the author’s critique illustrates operation in an unstable regime with reactive feedback. Both conclusions are correct, but about different things.

• Joe Born says:

“When deriving a model whose purported purpose is to show “why the [CMIP] models run hot”, it seems perfectly reasonable to assume that feedback effects whose time constants are large with respect to the time frame under consideration (a few decades) are constants folded into f.”
The best way to implement the Monckton et al. model is to make f a constant and use r_t to provide the time dependence, including the feedback delay. I didn’t show it here, but the choice of time function for f can be highly counter-intuitive if you try to put that time dependence into the f parameter. You can do it, but, instead of making the feedback coefficient grow gradually, you initially have to make it decrease gradually. Hardly what the pocket-calculator user would expect. So I’m guessing they used a constant f, which makes the subscript t on the f confusing.
“Also, I’m not sure why the author focused so much attention on the circuit analogy but his comparison seems apples-to-oranges.”
The reason for the circuit is that Lord Monckton often refers to electrical circuits, and he seems to think that the negative value for overall gain when the loop gain exceeds unity means that the voltage “transits to the negative rail.” He has often been called on that, but he doesn’t seem to understand the nature of the objection. This was my attempt to make him see it.
“Lord Monckton confined his analogy to the region of stability with real (i.e. non-reactive) feedback while the author’s critic illustrates operation in an unstable regime with reactive feedback.”
Although it’s true that the feedback values Monckton et al. listed in their Table 2 fell within the region of stability, my purpose was to respond to their statements about the lower-right region of their Fig. 5, where they contrasted its unphysical meaning for climate with what they contended is a physical meaning for circuits. Lord Monckton has said this kind of thing a lot, and he may benefit from understanding why he meets resistance when he does. So far he has defied enlightenment.
I’m not sure what you mean by the feedback’s being non-reactive; it operates with a delay.
Look, their product of $\lambda_\infty$ and transience fraction is just what in some circles is called a step response. And, if $\lambda_\infty$ is a constant, then transience fraction is just a normalized version of the step response. And that step response incorporates the feedback, including its reactive aspects.
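To make that reading concrete, here is a minimal Python sketch with purely illustrative numbers (none taken from Monckton et al.'s Table 2): a one-box feedback model whose normalized step response plays the role of the transience fraction r_t.

```python
import math

# Illustrative one-box feedback model. lambda_0, f, and tau are assumed
# values for the sketch, not parameters from Monckton et al.
lambda_0 = 0.3125   # K per W/m^2, no-feedback (Planck) sensitivity
f = 1.5             # net feedback sum, W/m^2 per K
tau = 15.0          # response time constant, years

# equilibrium sensitivity after feedback, per Eq. 1 of the head post
lambda_inf = lambda_0 / (1.0 - lambda_0 * f)

def r_t(t):
    """Normalized step response ("transience fraction"): 0 at t=0, -> 1."""
    return 1.0 - math.exp(-t / tau)

def step_response(t, dF):
    """Warming at time t after a sustained forcing step of dF W/m^2."""
    return lambda_inf * r_t(t) * dF

dF = 3.7  # W/m^2, canonical CO2-doubling forcing
print(round(r_t(70.0), 3))                 # fraction realized after 70 years
print(round(step_response(70.0, dF), 2))   # K
```

In these terms the comment's point is mechanical: lambda_inf times r_t(t) is the (unnormalized) step response, and r_t alone is its normalized version.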

• Brandon Gates says:

Joe,

The best way to implement the Monckton et al. model is to make f a constant and use r_t to provide the time dependence, including the feedback delay. I didn’t show it here, but the choice of time function for f can be highly counter-intuitive if you try to put that time dependence into the f parameter.

Yes, I believe you’ve nailed it. Making f a time-dependent parameter overly complicates things and, to me, simply makes no sense.
If you don’t know of it, Knutti & Hegerl (2008) is a quite accessible general overview for estimating ECS: http://www.iac.ethz.ch/people/knuttir/papers/knutti08natgeo.pdf
I particularly like the Figure 1 cartoon, which steps through the immediate response to a forcing perturbation from near instantaneous all the way through to millennial-scale feedbacks, resulting in the “final” equilibrium response.
There are only two equations in the entire paper:
(1) ∆Q = ∆F – λ∆T, where λ = ∆F/∆T
(2) S = ∆T₀/(1 – f)
Quoting the paper: … where f is the feedback factor amplifying (if 0<f<1) or damping the initial blackbody response of ∆T₀ = 1.2°C for a CO2 doubling.
If I’m understanding the body of the paper correctly, I can write:
(3) ∆Q = ∆F – ∆T∆T₀/(1 – f)
Which is about as succinct a way to express net change in energy retained due to a change in atmospheric CO2 (or most any “well-mixed” GHG) concentration as I can conceive, and I believe the most “proper” way to think about it. Problem is, we don’t experience joules, we feel temperature. Same for large masses of landed ice.
Of course, the practical problems are that ∆Q eludes a complete accounting, f is not very well-constrained, and ∆T is fiendishly difficult to tease apart from ∆T₀ observationally. ∆F is about the only thing in this model which I’d call all but bombproof.
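A quick numeric sketch of eq. (2) above may help; the feedback factors below are illustrative only, and the divergence at f = 1 is the stability boundary argued about elsewhere in the thread.

```python
# Knutti & Hegerl's eq. (2), S = dT0 / (1 - f), as quoted above.
# The f values below are illustrative only.
dT0 = 1.2  # K, no-feedback response to a CO2 doubling

def ecs(f):
    """Equilibrium climate sensitivity for feedback factor f < 1."""
    if f >= 1.0:
        raise ValueError("f >= 1: the linear feedback formula diverges")
    return dT0 / (1.0 - f)

for f in (0.0, 0.3, 0.5, 0.7):
    print(f, "->", round(ecs(f), 2), "K")
# sensitivity grows without bound as f -> 1 from below
```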
I don’t think Monckton, et al. (2015) propose an irreducibly simple model for ECS with their two gain loops … they’re actually attempting to derive TCR, so of course they come up with a “models are too sensitive” conclusion. For ECS, the time-independent expression S = ∆T₀/(1 – f) should be sufficient, and so long as f < 1, process engineering best practices need not enter into the discussion.
There is also much confusion in these parts about f needing to be negative to “ensure” a stable system. MSLB (2015) mentions an oft-overlooked relevancy in their eq. (4): F = εσT^4
Were that also a linear relationship — or even approximately linear, like most of what K&H (2008) are discussing — yes, I’m pretty sure all sorts of hell would break loose if f were anything BUT exactly dead zero.
Final note directed at the general audience here: the IPCC themselves say in AR5 that CMIP5 is likely 10% too hot.

• Tsk Tsk says:

And again, while Monckton was being sloppy, I think you are missing the point. For positive changes in temperature (voltage), the output of the model will go negative. To make the absurdity more apparent, the model will go to the rail opposite the sign of the input signal. In the case of temperature (starting at ‘0’ at equilibrium), that means an increase in temperature will actually make the Earth freeze and a decrease in temperature will make the Earth roast. I would tend to call that pretty unphysical.

• “I’m not sure what you mean by the feedback’s being non-reactive; it operates with a delay.”
Note the y-axis of his figure 6 is “Equilibrium Climate Sensitivity”, so by implication the reference is to the _final value_ of the step response of the system. By the final value theorem (in the limit as t->infinity, vo = vin * H(s) as s->0), reactive elements take on their DC values (caps become opens, inductors and series delays become shorts, etc.).
“Look, their product of \lambda_\infty and transience fraction is just what in some circles is called a step response. And, if \lambda_\infty is a constant, then transience fraction is just a normalized version of the step response. And that step response incorporates the feedback, including its reactive aspects.”
This is incorrect. LM’s statement, “Bode mandates that at a closed-loop gain >1 feedbacks will act to reverse the system’s output. Thus, in a circuit, the output (the voltage) becomes negative at loop gains >1”, is uncontroversial if inartfully phrased (I would state that loop gains >1 result in a phase reversal, to cover both possible input polarities).
Again, by the final value theorem, the ECS is just the transfer function (Laplace transform of the impulse response) of the system evaluated at s=0 (because the s in the numerator introduced by the FVT is canceled when we integrate to get the step response from the impulse response). In your (non-)analogous circuit, the capacitor is inconsequential to the final value (it becomes an open) and thus to the analysis. In your circuit, if f is the feedback fraction (f = R1/(R1+R2)), Vof/X = A/(A*f-1) = A/(OLG-1), where Vof is the final voltage and OLG is the open-loop gain. This system has a singularity at OLG = 1. Below that OLG value the denominator is negative and no phase inversion occurs (the sign of the output is the same as the sign of the feed-forward gain, in your case negative); above that value (a non-physical region for real-world components) it is positive and phase inversion occurs. As LM stated, this is a fundamental result of the Bode feedback formulation and the final value theorem. Are you really arguing that Bode or the FVT is wrong?
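For what it is worth, the singularity at unit loop gain can be checked numerically. This hedged sketch uses the textbook Bode DC gain G = A/(1 - A*f), which is the standard sign convention and differs by an overall sign from the A/(A*f - 1) written above; all values are illustrative.

```python
# DC (final-value) closed-loop gain of an ideal positive-feedback loop,
# standard Bode convention G = A / (1 - A*f). The sign of G flips as the
# loop gain A*f crosses 1, which is the behavior disputed in the exchange
# above.
def closed_loop_dc_gain(A, f):
    loop_gain = A * f
    if abs(1.0 - loop_gain) < 1e-12:
        raise ZeroDivisionError("loop gain exactly 1: the formula is singular")
    return A / (1.0 - loop_gain)

A = 10.0
for f in (0.05, 0.099, 0.101, 0.15):
    print(f, round(closed_loop_dc_gain(A, f), 1))
# the gain grows toward +infinity as A*f -> 1 from below, then reappears
# with the opposite sign above the singularity
```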

• Joe Born says:

Jeff Patterson: “Again, by the final value theorem the ECS is just the transfer function (Laplace transform of the impulse response) of the system evaluated at s=0.”
Believe me, I understand your difficulty; the same thing troubled me when I looked at this initially. But the experience I’ve had when I’ve dealt with experts in this area is that you have to be really careful about how you interpret the math, particularly when you’re dealing with feedback and/or complex numbers and/or taking limits. In those contexts, I’m told, you have to do a reality check.
Yes, if you apply a low-frequency signal, the output will blow up in the negative direction; I get that. But, as I said, you have to do a reality check when you’re taking limits.
So I did one, and the head-post analysis is what I got. There’s no mechanism I can identify that results in a simple step’s making that circuit’s output go negative. If you can identify one, I will be happy to be enlightened; I’ve been wrong before. But blindly following magic formulas doesn’t do it for me.
Jeff Patterson: “This is incorrect.”
Isn’t $\lambda_0 r_t$ as set forth in one of the Table 2 rows the model’s response to a unit step? If not, what is it?

• angech2014 says:

Only Monckton’s conclusion is correct.
The other conclusion is drawn with so much waffle that no one here could understand the conclusions because they are riddled with assumptions and putting words into people’s mouths.
eg
“More troubling, though, was the fact that they chose only a single transience-fraction curve for each value of total feedback, whereas we [read Joe Born] would expect that the curve would additionally depend on other factors.”
They chose a simple single transience-fraction curve, Joe, to input into a simple model.
Which by the way works.
Why do you not congratulate him for this?

“Why do you not congratulate him for this?”
Still not getting it? Because Monckton is wrong. Adding radiative gases to the atmosphere will not reduce the atmosphere’s radiative cooling ability.
You are a warmist and therefore a scientifically illiterate socialist. What does it matter to true sceptics if Monckton the fool plays his “warming but less than we thought” games? Nothing. You are a collectivist. You cannot understand. This is not about left or right. This is not about “sides”. This is about right or wrong.
In defeat, it is no good trying to side with the foolish and wet “sceptics” that are most favourable to your failed cause. This is not politics. This is science. You came to a science fight armed only with propaganda. Your Alinsky techniques are powerless here. Right or wrong. Black or white. There is no “nuance”.
Your political games can never work. You seek to align sceptics behind the ludicrous lukewarmers? Bwahahahaha…
Herding sceptics would be like trying to herd cats. You are not going to win. Not now. Not ever.

You are a warmist and therefore a scientifically illiterate socialist.
C’mon, Konrad, that doesn’t necessarily follow. And then you added the comment about herding skeptics? I agree with that one. But it sort of contradicts what you said above.
Also, LM may or may not be wrong. I don’t know in this case. But he is not a fool.

• Brandon Gates says:

dbstealey,
I disagree with K-rad on many things, but his mockery of WUWT’s closet Slayers for lapping up Monckton, et al. (2015) isn’t one of them. They toe the IPCC party line on the theoretical basis almost 100%; the main difference is in application and conclusion, which hinges on the notion that no engineer in their right mind would design a system with a closed-loop feedback gain > 0.1 if stability were the goal. That’s the kind of thing which has no business being presented as an a priori in an empirical physical science. A working assumption to develop a hypothesis, sure. A basis for conclusion, no. As you ironically say, it does not necessarily follow. The math says the system will be stable so long as the closed-loop feedback gain is < 1. Monckton and crew are rightfully mocked for arguing otherwise.

27. Mike M. says:

I am afraid I don’t see the point of this. Why should climate behave like an electric circuit? Is there any justification for the idea?
The point of simple energy balance models is right there in the name: they are simple, and they are based on a fundamental physical principle, the conservation of energy. Those are very useful characteristics that make such models worthwhile in spite of their not representing anything close to the complexity of the climate system.
Proper climate models start with the same principles but distribute the energy among many “boxes” rather than one big box and, to the extent possible, use basic physical laws, such as those governing radiative energy transfer and fluid flow, to constrain the flows of energy. The result still leaves much to be desired, but at least it is a physics based attempt to make the model more realistic.
But this electric circuit business adds complexity with no reason, at least that I can see, to believe that the result is any more realistic. It may well be less realistic. So what is the point?

• commieBob says:

Why should climate behave like an electric circuit?

Electric circuits are wonderful. They are easy to build and test. They are amenable to mathematical analysis because it’s really easy (relatively) to isolate variables. That means that people doing systems analysis probably learned their math in circuits class.
At its heart, the argument about CAGW is an argument about feedbacks. Without feedbacks, a doubling of carbon dioxide will increase the planet’s temperature by a little more than one degree C. Positive feedback is required to get warming which would be catastrophic.
How could you prove that positive feedback existed (or not) in the climate system? You could try to use the math you learned in circuits class. If you do that, you’re making (whether you realize it or not) a bunch of simplifying assumptions. The problem is that we simply don’t understand the climate system well enough to know which simplifying assumptions are valid.

Au contraire, it’s actually an attempt to simplify the problem … believe it or not.

• Mike M. says:

“Au contraire, it’s actually an attempt to simplify the problem”
For every complex problem there is an answer that is clear, simple, and wrong – H. L. Mencken

• Why should climate behave like an electric circuit?

In my EE classes we started out by turning electronic circuit elements into physical entities, e.g. resistors behave like dashpots (ideally, something whose friction is proportional to velocity), inductors behave like masses, and capacitors behave like springs. So it’s easy to visualize an oscillating system made out of a weight and a spring. Add a dashpot and you can talk about car suspension systems.
You pretty quickly get to a point where it’s easier to just deal in electrical terms. After all, it’s easier to change a resistor in an oscillating circuit than to use an adjustable shock absorber. With active elements, it’s easier to add an amplifier to a circuit than to add a governor to a motor drive.
Ultimately, it becomes easier to design mechanical systems by first emulating them as electrical circuits and then building the mechanical version.
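The analogy can be made literal: with L for m, R for c (dashpot), and 1/C for k (spring), a series RLC circuit and a mass-spring-dashpot satisfy the same differential equation, so one solver serves both. A minimal sketch with arbitrary parameters:

```python
# One solver covers both systems because the analogy maps
#   L*q'' + R*q' + (1/C)*q = 0   onto   m*x'' + c*x' + k*x = 0
# via L<->m (mass), R<->c (dashpot), 1/C<->k (spring). Parameters arbitrary.
def simulate(m, c, k, x0, steps=1000, dt=0.001):
    """Semi-implicit Euler integration of m*x'' + c*x' + k*x = 0."""
    x, v = x0, 0.0
    for _ in range(steps):
        v += -(c * v + k * x) / m * dt
        x += v * dt
    return x

L, R, C = 2.0, 0.5, 0.125                          # a series RLC circuit
charge = simulate(m=L, c=R, k=1.0 / C, x0=1.0)     # capacitor charge
position = simulate(m=2.0, c=0.5, k=8.0, x0=1.0)   # the mechanical twin
print(charge == position)  # same ODE, same solver, same trajectory: True
```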

• Mike M. March 13, 2015 at 9:00 am
I am afraid I don’t see the point of this. Why should climate behave like an electric circuit? Is there any justification for the idea?

Back in the day such simulations were called analog computers, in the 60’s they were better for modeling simultaneous differential equation networks than digital computers. The flight simulator used for the moon landings was an analog computer.
I teach a lab class where I use an integrated-circuit system to demonstrate how negative and positive feedback can interact to cause oscillations; following that, we set up a biological system in which E. coli show oscillations in the same way. It’s a lot easier to control the oscillations in the electrical circuit than in the culture.

• m says:

Phil,
One of my fellow grad students used an analog computer to solve a complex chemical kinetics problem represented by a set of stiff differential equations. But it was mathematically demonstrable that the system of differential equations governing the output of the analog computer was the same as the system of differential equations governing the kinetics model.
Such a demonstration is lacking here.

• Yes m, my thesis was related to kinetic equations, particularly ‘stiff’ sets. We did some work with an analog computer which was much faster than the digital simulations, so you could use it to get a good feel of what the changes in parameters would do before doing the calculations.
But it was mathematically demonstrable that the system of differential equations governing the output of the analog computer was the same as the system of differential equations governing the kinetics model.
Such a demonstration is lacking here.

It should be possible to make such a demonstration for Monckton’s model.

I am way over my head here, because I have forgotten most of what I learned when I was in school a very long time ago. Our Electrical Engineering department obtained a new EAI Pacer hybrid computer, and we got to play with it. The digital computer part controlled the analog computer part, and I was told that Boeing used the same computer, as I recall, to design their wings and test them out, since it was too hard to directly solve the math equations for air flow. Additionally, all the Physics classes I sat through, watching the professors derive equations for the laws of Physics, left me with the impression that you could write equations but you couldn’t solve them, unless you knew which terms were trivial enough in the real world to throw out, simplifying the equations enough that they were solvable. So maybe there is some merit in using the circuit/analog-computer approach for climate modeling? The equations must be at least as difficult (probably a lot more difficult) than the equations for airflow around a wing section. Just a thought.
Dan Sage

Old enough to remember analog computers?
They were (are) just electrical circuits of op amps, plugboarded to implement any sufficiently simple diff eq.

• Steven Mosher says:

No one has successfully modelled the climate as a circuit.

• Michael Spurrier says:

…..but great mental masturbation.

• angech2014 says:

Judith Curry’s “Stadium Waves” don’t count?

• Crispin in Waterloo says:

Lots of things are like electric circuits! Water is a great analogy for electric flow in many cases. Water circuits can have feedback and amplification too. My father worked with water-based programming in the 60’s from an applications point of view. There were cards a little bigger than a credit card with circuits carved into them. They were analog computer circuits, as I recall.

• Tsk Tsk says:

Control Theory doesn’t care if you’re talking about an electric circuit or a flywheel engine or a waterwheel. It happens to be easy to build circuits that are a 1:1 map of any of those systems and so many times people will talk about the problem that way, but exactly the same rules apply to all feedback systems whether they use electrons or not.

• Mike M. asks why the climate should behave like an electronic circuit. The point made in Monckton et al. is that there are several classes of dynamical systems, and the Bode system-gain equation that the climate models borrow from electronic circuitry and use as the pretext for multiplying the small direct warming from CO2 by 3, 5 or even 10 is not applicable to the class of dynamical systems in which the climate falls.

28. Kip Hansen says:

I believe both the author and Monckton are using simplified, linearized equations in place of the real nonlinear equations for the systems under discussion. Radiative heat transfer through a translucent material (the atmosphere) is governed by a system most accurately described by nonlinear equations — not the simplified stuff above. Use of the linearized equations leads to incorrect, non-physical results — and endless arguments when two groups differ in their interpretation of which incorrect equation to use.

• Kip Hansen says:

“… a system most accurately described by a nonlinear equation …”

• The appendix to our paper provides the relevant equation for non-linear feedbacks. Furthermore, our time-series for the transience fraction is non-linear.

• Joe Born says:

Monckton of Brenchley: “[O]ur time-series for the transience fraction is non-linear.”
Perhaps Mr. Hansen was referring to the models defined by the Table 2 transience-fraction functions, not to whether the transience fraction is a linear function of time. Linear systems, of course, routinely produce outputs that are non-linear functions of time.

29. Joe Born says:

“I believe both the author and Monckton are using simplified, linearized equations in place of the real nonlinear equations for the systems under discussion.”
Then perhaps you should work on your reading comprehension. I wasn’t using linearized equations; I was showing that, to the extent that using them has utility, Monckton et al. used them incorrectly.
If you compare your model’s step response to the climate’s response to something more like a ramp, you’re doing something wrong. If your model says that the observed temperature increment is only 60% of the equilibrium value but you conclude there’s no warming in the pipeline, you’re doing something wrong.
No one believes that the climate is linear. Its nonlinearity is not the news flash you perhaps believe. But many do believe that studying how feedback works in linear systems, which are much easier to compute, may give insight into what we might expect in non-linear ones. To the extent that they are correct, they should use those techniques correctly.
Monckton et al. didn’t.
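The step-versus-ramp point can be checked in a few lines for a first-order lag. A hedged numeric sketch, with an illustrative time constant and horizon:

```python
import math

# For a first-order lag with time constant tau, the fraction of the
# equilibrium response realized at time T is larger for a step forcing
# than for a ramp that reaches the same forcing at T. Dividing ramp-driven
# observations by the STEP fraction therefore underestimates equilibrium.
# tau and T are illustrative, not values from the paper.
tau, T = 15.0, 70.0

step_fraction = 1.0 - math.exp(-T / tau)
# closed-form response at t=T to a ramp reaching full forcing at T
ramp_fraction = 1.0 - (tau / T) * (1.0 - math.exp(-T / tau))

inferred = ramp_fraction / step_fraction  # what the mismatched inference yields
print(round(step_fraction, 3), round(ramp_fraction, 3), round(inferred, 3))
```

Here `inferred` comes out below 1: the equilibrium value deduced by dividing ramp observations by a step-response fraction is biased low, which is the complaint above.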

• Kip Hansen says:

Reply to Joe Born ==> Don’t get me wrong, I enjoyed your piece about the Monckton et al. application — I was just pointing out something that is obvious but often needs to be pointed out over and over, lest people begin to believe that linearized climate models (or linearized models of any real-world dynamical system) can inform us of anything approaching reality when the results are important. Two people using the wrong equations to find wrong results don’t make a right. Or, to put it another way, using the wrong equations the right way to correct the incorrect use of the wrong equations doesn’t arrive at the right or accurate answer either.
As you undoubtedly already know, the behavior of a nonlinear system can be entirely different from that of its linearized analog. The fact that Monckton repeats the common error incorrectly, in your opinion, does not make repeating the error ‘correctly’ any more accurate.
Linearized analogs for dynamical systems do have some uses…arguing about the possible uses of them to model, however inaccurately, the climate system seems to be the most popular.
When we’re just fooling around, doing back-of-the-envelope sketches, thinking out an idea — they are adequate but can leave us with our behinds hanging out if we make the mistake of believing the results so obtained. No matter how many times your fuselage design passes through your Computational Fluid Dynamics (CFD) software, don’t fly an airplane design that hasn’t passed actual wind tunnel tests — which is where the unexpected, unpredicted turbulence brought on by the nonlinearities in the real fluid flow dynamics equations shakes your new design apart.

• angech2014 says:

What a great throwaway line.
Perhaps you should work on your understanding of English.
Another great throwaway line.
” I wasn’t using linearized equations; I was showing that, to the extent that using them has utility,”
So you were using them, mate; only he did it first.
This continual attack on Monckton, using semantics instead of real, provable arguments is disgusting.
Furthermore,
“If you compare your model’s step response to the climate’s response to something more like a ramp, you’re doing something wrong.”
Models are allowed to have steps in them; it’s called data entry.
Steps have little lines drawn between them; this makes them into a linear representation.
Virtually all measurement is broken into steps or ramps if you focus in with a microscope.
“If you compare your model’s step response to the climate’s response to something more like a ramp, you’re doing something wrong.”
No, you are the one breaking a simple argument down so far that error can occur on a microscopic scale.
But if a microscopic error is your reason for existence, go for it.

My detailed reply to Mr Born, once it is posted, will demonstrate that he, not I, is in error. For instance, since we consider temperature feedbacks to be net-negative, and since our model calculates, on the assumption that all warming since 1850 was anthropogenic, that precisely the warming that has occurred should have occurred, it follows that there is no warming in the pipeline as a result of our past sins of emission.
Contrary to Mr Born’s assertion in the head posting, all of this is carefully explained in our paper: see, e.g., Table 4.

• Joe Born says:

Actually, I’m assuming that I have indeed misinterpreted Table 4; I’d just like someone to explain where I went wrong.
In particular, I infer from the fact that Table 4’s third-last column was arrived at by multiplying by an r_t of 0.6 that 40% of the response remains to be seen. I also infer from Monckton et al.’s statement that no warming is left in the pipeline that my interpretation of Table 4 is somehow erroneous. But I’ve yet to see how.
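The arithmetic behind that inference is short. A sketch with an illustrative observed-warming figure (the 0.9 K is hypothetical, not a number from the paper):

```python
# If observations are matched to a model output already scaled by
# r_t = 0.6, the implied equilibrium response is the observed value
# divided by 0.6, leaving 40% of the total still "in the pipeline".
# The 0.9 K observed-warming figure is illustrative only.
r_t = 0.6
observed_warming = 0.9                  # K, illustrative
equilibrium = observed_warming / r_t    # total response implied, ~1.5 K
remaining = equilibrium - observed_warming
print(round(equilibrium, 2), round(remaining, 2), 1 - r_t)
```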

30. Bart says:

Joe Born –
“Still, the output does not go negative.”
I have not looked too deeply into the paper, and may be interpreting things inappropriately. However, is it possible that the case of loop gain > 1 is in the context of the system being embedded within a greater system with dominant negative feedback enforcing stability?
As a simple example, I could have a transfer function
1/(s – a)
with a greater than zero, embedded within an outer loop with feedback b, such that the entire transfer function is
1/(s -a + b)
This entire system would be stable if b is greater than a, and if you checked the output of the 1/(s-a) block versus its input, it would indeed invert it.

• Joe Born says:

Forgive me if you already know everything in the following preface, but I don’t recall all my interlocutors.
Actually, the loop gain > 1 discussion merely concerns a gratuitous comment by Monckton et al.–but one Lord Monckton often engages in, at the expense of his credibility in the eyes of those who know this stuff–regarding the difference between electrical circuits and the climate. Certainly there are differences, but the one he asserts isn’t among them. That was the only reason for the circuit discussion: Lord Monckton should abandon that branch of his argument.
With regard to feedback in the actual climate, Lord Monckton confines himself to loop gains less than unity, which is why the electrical-circuit distinction is gratuitous as well as wrong. The “positive” feedback usually discussed in climate circles excludes the $-1/\lambda_0$ radiative feedback, which exceeds the net of the other feedbacks to the extent that the net is positive: the overall feedback is negative, and I doubt that there’s any serious disagreement about that.
Now, just in case that hasn’t made your question moot in your view, the answer is, Yes, feedback can turn a positive pole into a negative one. If the open-loop transfer function is 1 / (s – a), then negative feedback of magnitude f will make the overall transfer function 1 / (s – a + f), which puts the pole in the left half plane if f > a.
But I’m not sure where that fits into the discussion; I probably misunderstood your question.
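That pole arithmetic is easy to sanity-check numerically. A minimal sketch with illustrative values:

```python
# The open-loop transfer function 1/(s - a) has an unstable pole at
# s = a > 0; negative feedback of magnitude f moves it to s = a - f,
# which lies in the left half-plane (stable) whenever f > a.
# Values below are illustrative.
def closed_loop_pole(a, f):
    return a - f  # pole of 1/(s - a + f)

def is_stable(a, f):
    return closed_loop_pole(a, f) < 0

a = 0.5
print(is_stable(a, f=0.2), is_stable(a, f=1.0))  # False True
```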

• angech2014 says:

“The “positive” feedback usually discussed in climate circles is exclusive of the -1/\lambda_0, radiative feedback, which exceeds the net of the other feedbacks to the extent that the net is positive: the overall feedback is negative, and I doubt that there’s any serious disagreement about that.”
English needed.
This is really the joke form of “all models are wrong, some models are useful”
Electron: “Are you sure about that?”
Positron: “I’m positive.”

• Mr Born says he “knows this stuff” and implies that we don’t. As will be seen when my full response is posted, his assumption in this regard is incorrect. The reciprocal of the Planck parameter is not a temperature feedback, but is part of the reference frame within which all the true feedbacks operate. See Roe (2009) for a discussion.
The high-end estimates of feedback values in several IPCC reports, if taken together, would indeed exceed 3.2 Watts per square meter per Kelvin, implying a loop gain exceeding unity. Ross McKitrick made this point in a recent lecture. So this is no mere theoretical matter.
Mr Born says I should abandon this branch of my argument. Fortunately, it has now been taken up by one of the world’s top six experts on the application of feedback mathematics to the climate object, who has concluded that there is indeed a serious problem with the application of the Bode equation to the climate. He has also concluded that in consequence climate sensitivity cannot exceed 1 K and may well prove to be less. His paper saying so is out for review at present. Mr Born will then have the opportunity of submitting an attempted refutation of the Professor’s math to the relevant journal.

• Bart says:

“But I’m not sure where that fits into the discussion…”
If that is the case, then it probably doesn’t. As I said, I have only glanced at the material in question.
I would caution against this, however:
“The “positive” feedback usually discussed in climate circles is exclusive of the $-1/\lambda_0$ radiative feedback, which exceeds the net of the other feedbacks to the extent that the net is positive: the overall feedback is negative, and I doubt that there’s any serious disagreement about that.”
It is not, in general, true that even an effectively unbounded negative feedback in one portion of the system confers stability on the system as a whole. For example, the system
dx/dt = -a*x + b*y
dy/dt = c*x
is unstable for a, b, and c positive, no matter how large a is. Such a system description is of importance in the climate debate. If x is temperature T and y is CO2, and the sensitivity b of temperature change to CO2 is positive, and sensitivity c of CO2 to temperature is positive, then the system would be unstable.
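The instability is visible directly in the eigenvalues of the system matrix [[-a, b], [c, 0]]; a minimal check (values illustrative, function name mine):

```python
import math

def eigenvalues(a, b, c):
    """Eigenvalues of [[-a, b], [c, 0]], i.e. of dx/dt = -a*x + b*y, dy/dt = c*x.
    They solve lam**2 + a*lam - b*c = 0."""
    r = math.sqrt(a * a + 4.0 * b * c)   # discriminant exceeds a**2 whenever b*c > 0
    return (-a + r) / 2.0, (-a - r) / 2.0

# No matter how large the damping a, one eigenvalue stays positive for b, c > 0:
for a in (1.0, 10.0, 1000.0):
    lam1, _ = eigenvalues(a, 1.0, 1.0)
    print(a, lam1 > 0)                   # True every time: the system is unstable
```

The product of the two roots is -b*c < 0, so one of them is always positive; that is the whole argument in one line.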

• Joe Born says:

“It is not, in general, true that even an effectively unbounded negative feedback in one portion of the system confers stability on the system as a whole.”
Indeed. I should have specified that by “feedbacks” in “net of the feedbacks” I meant the different feedback paths’ respective DC gains. In your example, of course, one feedback path has an origin pole so that its DC gain is unbounded. The net would therefore not be negative.
But I would think that the rate at which the oceans emit carbon dioxide is more nearly proportional to the difference between the current atmospheric concentration and the equilibrium atmospheric concentration for the current ocean temperature. If so (and I’m no scientist, so don’t take my word for this), that particular feedback path would indeed be bounded; its pole would be in the left half plane, not on the origin.

• Joe Born says:

Monckton of Brenchley: “Mr Born says he “knows this stuff” and implies that we don’t.”
I apologize if I seemed to imply that my knowledge in the area is superior to Monckton et al.’s; certainly, my modest credentials don’t compare to his colleagues’. Unfortunately, I was forced during my working life to operate extensively in fact situations where others’ expertise far exceeded mine, so I can assure you that I have come honestly by my modesty regarding any possible scientific or mathematical expertise.
Still, I like to think that my trade did give me a passable ability to recognize a good argument, and I have not yet seen one in Lord Monckton’s discussion of Bode equations.
Monckton of Brenchley: “Fortunately, it has now been taken up by one of the world’s top six experts on the application of feedback mathematics to the climate object, who has concluded that there is indeed a serious problem with the application of the Bode equation to the climate.”
I have no doubt that a good argument for that proposition can be made, and I await it with interest. My objection is only that making it with statements like “the voltage transits from the positive to the negative rail” generates heat but no light.

• Bart says:

Joe Born @ March 15, 2015 at 5:48 am
“But I would think that the rate at which the oceans emit carbon dioxide is more like proportional to the difference between the current atmospheric concentration and the equilibrium atmospheric concentration for the current ocean temperature.”
That makes
dCO2/dt = f(T)
for some function f(). A first order Taylor series expansion then results in
dCO2/dt = f(T0) + k*(T – T0)
where k is the partial derivative of f(T) evaluated at T0. If we are starting at some equilibrium point T0, then f(T0) = 0, and
dCO2/dt = k*(T – T0)
and that is in fact what is observed:
http://www.woodfortrees.org/plot/esrl-co2/from:1979/mean:12/derivative/plot/uah/from:1959/scale:0.22/offset:0.14
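The linearization step can be mimicked numerically: given any smooth emission-rate function f(T) with f(T0) = 0, a finite difference recovers the slope k. Everything in this sketch (the quadratic f, the baseline T0 = 288) is hypothetical, chosen only to illustrate the Taylor expansion:

```python
def linearize(f, T0, h=1e-6):
    """First-order Taylor coefficient k = f'(T0), by central difference,
    so near T0: dCO2/dt = f(T) ~ f(T0) + k*(T - T0)."""
    return (f(T0 + h) - f(T0 - h)) / (2.0 * h)

# Hypothetical emission-rate curve with equilibrium at T0 = 288 (so f(T0) = 0):
f = lambda T: 0.1 * (T - 288.0) + 0.002 * (T - 288.0) ** 2
k = linearize(f, 288.0)
print(k)    # ~0.1, the slope in dCO2/dt = k*(T - T0)
```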

• Bart says:

“In your example, of course, one feedback path has an origin pole so that its DC gain is unbounded.”
dx/dt = -a*x + b*y
dy/dt = -d*y + c*x
has apparent negative feedback in both paths, but this system is still unstable unless a*d-b*c is greater than zero. So, not only would the rate of removal of y have to be non-zero, it would have to be substantially so to produce a stable, well-behaved system.
But, if the rate of removal of CO2 were substantial, there would be no AGW debate. Something’s got to give. The aggregate response of temperatures to CO2 must be negligible in the present climate state. There is no way around it that I can see.
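The “substantially so” threshold is exactly the Routh-Hurwitz condition for a 2x2 system: negative trace and positive determinant. A minimal check (parameter values illustrative only):

```python
def is_stable(a, b, c, d):
    """True when both eigenvalues of [[-a, b], [c, -d]] (i.e. the system
    dx/dt = -a*x + b*y, dy/dt = c*x - d*y) have negative real parts,
    which holds iff trace -(a + d) < 0 and determinant a*d - b*c > 0."""
    return (a + d) > 0 and (a * d - b * c) > 0

print(is_stable(1.0, 2.0, 2.0, 1.0))   # a*d - b*c = -3: unstable despite -a and -d
print(is_stable(3.0, 1.0, 1.0, 2.0))   # a*d - b*c = +5: stable
```

The first case shows the point in the comment above: apparent negative feedback on both variables is not enough unless a*d actually exceeds b*c.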

• Joe Born says:

Bart: “dCO2/dt = k*(T – T0)”
That should be dCO2/dt = k_2 * (k*(T – T0) – CO2)
The k * (T – T0) is the equilibrium CO2 concentration, which CO2 tries to reach.
That makes it essentially the top feedback path in my Fig. 8.
As to your last comment, I’m sorry, but I didn’t understand where those equations came from, and they seem to reduce to a single second-order homogeneous equation in either of the variables. Can you explain that a little more?
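The difference between the two forms is easy to see by holding T fixed above the baseline and integrating both; all constants below are invented for illustration:

```python
def co2_paths(k, k2, dT, t_end=50.0, dt=1e-3):
    """With temperature held dT above baseline, compare
    integrator model:  dCO2/dt = k*dT             (grows without bound)
    relaxation model:  dCO2/dt = k2*(k*dT - CO2)  (settles at k*dT)."""
    c_int = c_rel = 0.0
    for _ in range(int(t_end / dt)):
        c_int += dt * k * dT
        c_rel += dt * k2 * (k * dT - c_rel)
    return c_int, c_rel

c_int, c_rel = co2_paths(k=2.0, k2=1.0, dT=1.0)
print(c_int)   # ~100: keeps climbing linearly with time
print(c_rel)   # ~2: bounded at the equilibrium concentration k*dT
```

The relaxation model has its pole at -k2, in the left half plane, so its response to a sustained temperature offset is bounded; the pure integrator has its pole at the origin and just keeps accumulating.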

• Bart says:

Joe Born @ March 15, 2015 at 5:36 pm
“That should be dCO2/dt = k_2 * (k*(T – T0) – CO2)”
That equation is incompatible with significant accumulation of CO2 in the atmosphere for significant k_2. Indeed, if there is a feedback of CO2 in this manner, it is very small. It hasn’t been observable in 57 years.
I started with x and y just to have a starting point of agreement. If you agree that
dx/dt = -a*x + b*y
dy/dt = -d*y + c*x
is unstable for a*c-b*d less than zero, then we can move on to applying it to the climate variables of interest.
The time evolution of absolute T, call it Tabs, should be something like
dTabs/dt = -alpha*Tabs^4 + beta(GHG)*S
where alpha is essentially emissivity times SB constant divided by heat capacity, beta(GHG) is a function of albedo and greenhouse gases, and S is solar forcing.
Define the temperature anomaly as T = Tabs – Teq, for some baseline temperature Teq. Let a = 4*alpha*Teq^3, and b = the partial derivative of beta(GHG)*S with respect to CO2 at some equilibrium level CO2eq such that a*Teq + b*CO2eq = 0. We have
dT/dt := -a*T + b*CO2
It is observed that
dCO2/dt = k*(T – T0)
but T0 is an arbitrary baseline, and we can easily redefine the baseline and CO2eq such that, substituting a constant c = k, we have
dT/dt = -a*T + b*CO2
dCO2/dt = c*T
which is exactly the form of the (x,y) system above.
If you want to tack on an unobserved feedback term such that
dCO2/dt = c*T – d*CO2
you just have to realize that A) the d term is too small to be discernible in the data of the last 57 years and B) that a significant d would not allow even human inputs to accumulate.
There is more I could add at this point, which would model the human inputs and elucidate how and why they could have negligible impact. But, I think this is probably a good stopping point for now, until and if you feel inclined to ask for more.

• Joe Born says:

Bart: “If you agree that
dx/dt = -a*x + b*y
dy/dt = -d*y + c*x
is unstable for a*c-b*d less than zero, then we can move on to applying it to the climate variables of interest.”
Actually, I would have thought the stability criterion to be ad – bc > 0, but let’s move on anyway; I’ll re-check my math later.
My real problem lies in your equation dCO2/dt = k*(T – T0). Let’s say for the moment that T > T0 and we have a way to keep T constant; say we set a and b to zero temporarily. What that equation then says is that the CO2 concentration would increase without bound despite everything else’s staying the same.
Is that really what you intend to say?

• Bart says:

“Actually, I would have thought the stability criterion to be ad – bc > 0…”
Yes, that is why it would be unstable for the opposite condition, as I stated.
“What that equation then says is that the CO2 concentration would increase without bound despite everything else’s staying the same.”
Well, yeah. Any differential equation dx/dt = f(x) will increase without bound if you hold f(x) at a constant positive value but incongruously allow x to change. That’s a bit of a trivial observation, but it is essentially the same as what you have stated. If, however, I take the set of equations
dT/dt = -a*T + b*CO2
dCO2/dt = c*T
and set b negative with a and c positive, then the system will be stable. If I set b to zero, and add a small d such that
dT/dt = -a*T
dCO2/dt = c*T – d*CO2
then the system will be stable. Here is another stable configuration:
dT/dt = -a*T + b*CO2 – e*F
dCO2/dt = c*T
dF/dt = g*T
This set of equations can be simplified to
dT/dt = -a*T + G
dG/dt = (b*c-e*g)*T
where G = b*CO2 – e*F. This system is stable if b*c-e*g is less than or equal to zero. F is other ameliorative feedback from, e.g., cloud cover, or convective exchange of surface heat with atmospheric greenhouse gases which radiate the convected heat away.
The problem with the AGW hypothesis is that it has taken it as axiomatic that b is greater than zero, and there are no countervailing processes. While it is true that, all things being equal, the radiative imbalance induced by increasing CO2 should increase T, it is by no means assured that it will do so when all interactions are taken into account. The aggregate response to increasing CO2 must be stable, or we would not be here to question it.
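The reduced pair dT/dt = -a*T + G, dG/dt = (b*c – e*g)*T can be checked the same way. Writing m = b*c – e*g, the eigenvalues solve lam^2 + a*lam – m = 0; the values below are illustrative only:

```python
import cmath

def reduced_eigs(a, m):
    """Eigenvalues of dT/dt = -a*T + G, dG/dt = m*T, where m = b*c - e*g."""
    r = cmath.sqrt(a * a + 4.0 * m)
    return (-a + r) / 2.0, (-a - r) / 2.0

a = 1.0
for m in (0.5, -0.5):
    l1, l2 = reduced_eigs(a, m)
    print(m, l1.real < 0 and l2.real < 0)
# m > 0 (b*c dominates): one eigenvalue positive, so the system is unstable
# m < 0 (ameliorative feedback e*g dominates): both real parts negative, stable
```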

31. johann wundersamer says:

And thats all –
some 2, 3 variables, ‘my’ variables,
condensed to constants, ‘my’ preferred constants’:
here ‘my’ preferred feedback loops!
a comic /a silent movie/ + ‘cascadeurs’ + sounds:
not even a picture of the world – but able to take its breath.
We’ve come a long way. Hans

32. johann wundersamer says:

Thanks Mr. Monckton, Joe Born, WUWT for clearing sight.
Hans

33. Joe Evans says:

Joe Born, in your comment at 3:09am this morning, you stated that you had not performed a steady state analysis on your op amp model and that it was not germane to the discussion about Lord Monckton. I do not understand why. Could you expand on that part of the analysis for us?

• Joe Born says:

The reason it wasn’t germane is that the step response was adequate to show why the “transit” doesn’t necessarily occur.
Frankly, though, I wanted to avoid the confusion that inevitably results from such discussions. The g > 1 regime does have a meaning when you’re dealing with frequency response, but it requires more interpretation than I cared to spend the time expressing correctly.
The bottom line, though, is that, to the extent that models such as Monckton et al.’s apply to the climate, that part of the graph has precisely the same interpretation for the climate as it has for electric circuits. Of course, we can argue all day about the differences between electric circuits and the climate, but in my opinion Lord Monckton would be well advised to drop this part of his argument, which adds nothing and needlessly draws criticism.

34. I have prepared a detailed reply to Mr Born’s commentary on our paper, which I am hoping Anthony will publish shortly.

• Joe Evans says:

Monckton of Brenchley, I am unable to get to a copy of your paper from the link at the very beginning of this page. Is it accessible anywhere?

• Joe Born says:
• Joe Evans says:

Thanks, Joe

• Joe Born says:

Thanks a lot for the link.
Although I generally agree with many of my (and Lord Monckton’s) critics that these linear models’ limitations are severe, I believe they can afford some insight if one exercises judgment in using them. To that end, having the parameters provided by that paper at my fingertips will be helpful.
Thanks again.

35. Brian H says:

3.7K per doubling? Curry and Lewis found it to be about 1.35K as a central estimate. This is vastly different. There is, btw, not enough accessible fossil fuel in the world to achieve such a doubling. Arrant nonsense.