Guest post by Nick Stokes.

People outside climate science seem drawn to feedback analogies for climate behaviour. Climate scientists sometimes make use of them too, although they are not part of GCMs. But it gets tangled. In fact, all that the feedback talk is usually doing is describing the behaviour of variables that satisfy a few linear equations. Feedback talk adds a way of thinking about this, but does not change the mathematics of linear equations.

A couple of articles I’ll refer to are a survey article by Roe, and a frequently cited 2006 article by Soden and Held.

The basic calculus behind feedback and linear signal analysis goes like this. You have a device or system with a number of state variables, which I’ll bundle into a vector x. And the physics requires that they satisfy a set of equations that I’ll write just as

f(x)=0

There is a particular set of values x_{0} which satisfy those equations that for an amplifier, say, would be called the operating point. Generally it is a state existing prior to perturbation by an amount dx (a vector of state changes). After perturbation it still has to satisfy the equations, so

f(x_{0})=0 *and* f(x_{0}+dx)=0

For linear amplifiers, the perturbed state can be well approximated by the derivative expression

f(x_{0}+dx) ≈ f(x_{0}) + f'(x_{0}) dx = 0

and since f(x_{0}) = 0, that leaves the set of linear equations in the perturbation

f'(x_{0}) dx = 0

We don’t have to worry too much about the form of f'(x_{0}), or indeed f(x_{0}). The point is that it is linear, so all terms are proportional to perturbation. We can just take it that f'(x_{0}) is a matrix operating on the vector of perturbations dx. Roe (p 99) has a section headed “Feedbacks Are Just Taylor Series in Disguise”. Actually “Taylor Series” overstates it, since only first order terms are used. But it is getting close to the correct treatment as linear equations of perturbations.
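As a concrete instance of f(x_{0}+dx) ≈ f(x_{0}) + f'(x_{0}) dx = 0, here is a one-variable toy in Python. The energy-balance form and numbers are standard textbook values of my own choosing, purely for illustration:

```python
# Toy illustration: linearizing a nonlinear balance f(x)=0 about an
# operating point, so that f'(x0) dx balances a small forcing dF.
# Balance: sigma*T^4 = (S/4)*(1-albedo) + F
sigma = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
S, albedo = 1361.0, 0.3  # illustrative solar constant and albedo

absorbed = S * (1 - albedo) / 4          # ~238 W/m^2
T0 = (absorbed / sigma) ** 0.25          # operating point, ~255 K

dF = 3.7                                 # perturbation, W/m^2
# Linearized: 4*sigma*T0^3 * dT = dF
dT_lin = dF / (4 * sigma * T0**3)

# Exact solve of the perturbed balance, for comparison
T1 = ((absorbed + dF) / sigma) ** 0.25
dT_exact = T1 - T0

print(T0, dT_lin, dT_exact)
```

The linear and exact answers agree to better than one percent, which is the whole point of working with the perturbation equations.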

Usually we think of one of the components of dx as the input, or forcing, and another as the output. Then the equations can be shaken down to make output proportional to input, or gain. This is just a property of a linear system of n equations in n+1 variables, and the feedback algebra just expresses it. But you don’t have to think of it that way. I’ll give some examples leading up to climate.

One thing that is important is that you keep the sets of variables separate. The components of x_{0} satisfy a state equation. The perturbation components satisfy equations, but are proportional to the perturbation. You can’t mix them. This is the basic flaw in Lord Monckton’s recent paper.

#### Example 1 – the abstract feedback system

The Wiki description is as good as any. It’s labelled negative feedback, but applies generally. The diagram is:

with the accompanying text

Note that it starts with two equations in three unknown voltages. Two are overall input and output, and the third, V’, is the voltage at the input to the amplifier (triangle). V’ is eliminated, leading to an equation relating input and output (red star). This is then manipulated to a gain ratio. But all these steps are just standard high-school manipulations; they don’t add anything. A computer (or a student?) could have solved them at any stage.
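The claim that a computer could solve the equations at any stage can be made literal. A sketch in NumPy, with my own symbols (A for open-loop gain, b for feedback fraction) standing in for the diagram's labels:

```python
import numpy as np

# The two feedback equations in three unknowns (V_in, V', V_out),
# with V_in prescribed as the input.
A, b = 1e5, 0.01   # open-loop gain and feedback fraction, illustrative
V_in = 1.0

# Unknowns: [Vp, V_out].  Equations: Vp = V_in - b*V_out ; V_out = A*Vp
M = np.array([[1.0, b],
              [A, -1.0]])
rhs = np.array([V_in, 0.0])
Vp, V_out = np.linalg.solve(M, rhs)

gain_solved = V_out / V_in
gain_formula = A / (1 + A * b)     # the textbook closed-loop gain
print(gain_solved, gain_formula)   # both ≈ 99.9
```

The solver and the manipulated gain formula give the same number; the feedback algebra adds nothing the linear solve didn't already contain.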

#### Example 2 – a junction transistor

Here is a very simplified AC circuit, with bias arrangements and capacitors omitted. The voltages are the perturbations (AC). Simplified transistor properties are assumed: zero input impedance, infinite output impedance, and a current amplification β=100. So the AC voltage at the base (V’) is held to zero. There are three unmarked currents, denoted to match the suffixes of the resistors: I_{0}, I_{1}, I_{f}. Directions are I_{0} right, I_{1} down, I_{f} right. Five variables in all.

So we write down linear relations. There are three Ohm’s Law relations

V_{in} = I_{0}*R_{0}
V_{out} = -I_{1}*R_{1}
V_{out} = -I_{f}*R_{f}

and one current gain relation:

β*(I_{0} – I_{f}) = I_{1} + I_{f}

Again, anything further done with these equations is just high school manipulation. But it can be shaken down to a voltage gain by eliminating currents, written in gain/feedback style:

V_{out} = -β (R_{1}/R_{0}) V_{in} / (1 + f), where f = (β+1)R_{1}/R_{f}

Note that it is an inverting amplifier, and the feedback is negative.
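Again, the four relations can simply be handed to a linear solver and checked against the shaken-down gain. The component values below are my own illustrative picks:

```python
import numpy as np

# Solve the transistor example's four linear relations directly,
# then check against the gain/feedback-style formula from the post.
beta = 100.0
R0, R1, Rf = 1e3, 2e3, 1e5   # illustrative resistor values, ohms
V_in = 1.0

# Unknowns: [V_out, I0, I1, If]
M = np.array([
    [0.0, R0,    0.0, 0.0],         # V_in  =  I0*R0
    [1.0, 0.0,   R1,  0.0],         # V_out = -I1*R1
    [1.0, 0.0,   0.0, Rf],          # V_out = -If*Rf
    [0.0, -beta, 1.0, beta + 1.0],  # beta*(I0 - If) = I1 + If
])
rhs = np.array([V_in, 0.0, 0.0, 0.0])
V_out, I0, I1, If = np.linalg.solve(M, rhs)

f = (beta + 1) * R1 / Rf
gain_formula = -beta * (R1 / R0) / (1 + f)
print(V_out / V_in, gain_formula)   # both ≈ -66.2
```

The numerical solve and the closed-form gain agree, and the negative sign shows the inverting behaviour directly.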

#### Example 3 – Climate feedbacks

Again, it’s just a matter of writing down linear equations, resulting here from equilibrium flux balance. I’ll follow this 2006 article of Soden and Held. Unfortunately, they don’t actually quite write the flux equations, but I’ll do it for them. They write:

ΔR is the change in flux at TOA, which is the GHG forcing. ΔT is the surface temperature response. The feedback factors carry suffixes T for temperature, w for water vapor, C for clouds and α (written a here) for albedo. What they are actually doing (on multiplying by λ) is writing a flux balance

ΔR = λ_{T}ΔT + λ_{w}ΔT + λ_{C}ΔT + λ_{a}ΔT

Each term on the right represents a flux due to that factor. They do a bit extra, which I won’t go into, to deal with the fact that flux is at TOA and response is at surface. Their T flux is what people often call the Planck feedback; they roll into it other kinds of temperature dependent cooling, but it is mainly radiation (Stefan-Boltzmann etc).

This hopefully demystifies all the talk about positive feedback, negative feedback and runaway. The first (temperature) term is the big one, and determines what is thought of as the feedback-free (open-loop) gain. It is the 3.2 W/m^2/K figure that is often quoted, and turns into the 1.05 K/doubling which forms the basis for Lord Monckton’s ECS. That comes from this paper. The other terms are mostly negative, so they diminish the coefficient of ΔT and so increase the amount by which ΔT must respond to stay in balance. That is interpreted as positive feedback.
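To put rough numbers on the balance ΔR = (λ_T + λ_w + λ_C + λ_a)ΔT: the λ values below are round illustrative figures of mine, not taken from Soden and Held.

```python
# Numerical sketch of the flux-balance reading of the feedback factors.
# All lambda values are round illustrative numbers, not from the paper.
dR = 3.7            # W/m^2, roughly a CO2 doubling

lam_T = 3.2         # Planck (temperature) term, W/m^2/K
lam_w = -1.3        # water vapour (+ lapse rate), illustrative
lam_C = -0.6        # clouds, illustrative
lam_a = -0.3        # surface albedo, illustrative

# dR = (lam_T + lam_w + lam_C + lam_a) * dT, so:
coeff = lam_T + lam_w + lam_C + lam_a
dT = dR / coeff
dT_no_feedback = dR / lam_T

print(dT_no_feedback, dT)   # ~1.16 K without feedbacks, ~3.7 K with
```

The negative terms shrink the coefficient of ΔT from 3.2 to 1.0 here, roughly tripling the response, which is what gets described as positive feedback.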

It actually gives a perhaps less scary picture of thermal runaway. If these negative fluxes increase, there will come a point where the coefficient of ΔT is zero. That doesn’t mean instant flames. It just means there is nothing to counter heat accumulation from the forcing flux. So the temperature will indeed rise without limit (until some nonlinearity intervenes), but only as forced by the few W/m2 of ΔR. Not good, but not perhaps as dramatic as imagined. If the coefficient became negative, then there could be exponential rise, which might get more dramatic.

#### So did climatology make a startling error in omitting “reference temperature”?

I may have given away the answer, but anyway, it is: no! Soden and Held is a typical exposition. They correctly gather the perturbation terms: that is, the forcing, in terms of GHG heat flux, and the proportional responses. It is wrong to include variables from the original state equation. One reason is that they have been accounted for already in the balance of the state before perturbation; they don’t need to be balanced again. The other is that they aren’t proportional to the perturbation, so the results would make no sense. In the limit of small perturbation, you would still have a big reference temperature term that won’t go away. No balance could be achieved.
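A minimal numeric sketch of why the constant term can't be mixed in. The coefficients are illustrative, not from any paper:

```python
# "No perturbation, no response": the correct perturbation relation
# coeff*dT = dR passes through zero; mixing in a constant
# reference-state term c does not.
coeff = 1.0       # net flux coefficient, W/m^2/K, illustrative
c = 2.9           # a leftover reference-state flux term, W/m^2

for dR in (3.7, 1.0, 0.0):
    dT_correct = dR / coeff        # proportional: zero forcing, zero response
    dT_mixed = (dR + c) / coeff    # wrongly carries the constant term
    print(dR, dT_correct, dT_mixed)

# At dR = 0 the correct response is 0 K, but the mixed-up version still
# "responds" by c/coeff = 2.9 K: zero perturbation no longer returns the
# system to its operating point.
```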

#### So are all sets of linear equations to be regarded as amplification/feedback?

Well, nothing really hangs on it except the way you talk about them; the algebra is the same. But what characterises amplification is that one of the coefficients is large relative to the others. That means that changing that variable induces a large response in others (hence amplified). What is characterised as feedback is where this variable appears in at least one other term, and is also multiplied by the big coefficient. That makes a big proportional change in the output variables. That modifies the apparent performance in ways described as feedback.

So what is the outcome here? Mainly that you can talk about feedback, signals, Bode etc if you find it helps. But the underlying maths is just linear algebra, and the key thing is to write down correct perturbation equations, and manipulate them algebraically if you really want to. Or just solve them as they are.

Mr Stokes, why, pray tell, is the estimated temperature in 1850 the base state one is measuring perturbations from? Why not 1200, still in the MWP, as the “normal” temperature?

Some physical factors produced the temperature in 1850, and the same factors are still presumably operating now. Were there any feedback effects operating in a description of what produced the climate in 1850? If not, why not?

“Mr Stokes, why, pray tell, is the estimated temperature in 1850 the base state one is measuring perturbations from?”

Can you quote what you are talking about, so we know exactly what they say? As said above, the basis for the analysis is that there is an operating point and linear fluctuations about it. You don’t need complete knowledge of what the operating point actually is, or how it got to be that way. The coefficients of the linear perturbation variations are likely obtained from observation or model outcomes. An analogy is the internet. Signals that you receive have passed through all sorts of amplifiers with many different operating points. You don’t need to know about that to process the signal.

They are looking at forcing from GHG and its consequences, including feedback, that are proportional to it at equilibrium. So 1850 is representative of a state when forcing from GHG was stable.

Your post is a reply to Lord Monckton, I presume, so it is on point to ask why the state in 1850 is the chosen base state, rather than some other date not during the LIA. You are also falling into a clichéd CAGW trope that the LIA was normal, rather than the coldest period since the last outright glaciation.

My question is: was the ~300 ppm level of CO2 in 1850 not contributing to any feedback that resulted in that climatic state? Discounting miracles, invoking some special factor that did not exist in 1850 but exists now rather goes contrary to my understanding of the basic rules of science.

So Christopher Brenchley pointed out a really obvious error in the established model? Can you explain why he is wrong, without handwaving?

“state in 1850 is the chosen base state”

Well, chosen by Lord M. It seems a reasonable choice to me, because it represents a state with stable GHG, so the fluctuation from that level is the forcing. The change in T from that time is the response, not the change from 1200 or whatever. The existing 280 ppm and its effects on T, with feedbacks, are part of the chosen operating point.

I’ve tried to explain why he is wrong in the later part of the post. A key point is this. You have perturbations that result from the forcing of GHG since 1850 (or whatever time chosen). You have n relations in n+1 unknowns, which boil down to a proportional relation between forcing perturbation and T response. That passes through zero (no perturbation, no response). If you add in a term that is not proportional to perturbation, like reference temperature, then it can’t pass through zero, with constant coefficients. Zero perturbation does not return to the operating point. Something has to give.

Mr Stokes, you are still not explaining what you presume caused temperatures in your chosen baseline period. Was CO2 having an effect, or not? If it was, Monckton was basically correct.

“you are still not explaining what you presume caused temperatures in your chosen baseline period”

No, and I’m explaining why you don’t have to. It’s just a state that happened, and was in balance. It’s my equation f(x₀)=0. You don’t have to know what caused what. All you need to analyse are variations about that point, and their proportionality.

It’s the same if you buy an amplifier. If you are curious you can poke around and measure the operating point (voltages etc). You don’t need to analyse to find what made it that way. If you want to figure the gain, you only need to analyse the AC part of the circuit. But you’ll probably just measure input and output.

I do believe your approach resembles homeopathy, or other mystical worldviews. Consistency in natural laws should be a reasonable presumption, so anthropogenic CO2 shouldn’t know it has a different effect than “natural” CO2.

Mr Stokes, how do you model the effects that produced temperature in your chosen baseline period?

Thus far, you have been handwaving.

“anthropogenic CO2 shouldn’t know it has a different effect than “natural” CO2”

It doesn’t. The linear equations I refer to just relate total quantities. If the driving perturbation consists of a mix of anthro and natural (say a burst of volcanoes), then it’s up to you to fractionate the result accordingly.

Y’all are still trying to not admit the Established Model has an obvious mathematical error.

To me the basic flaw in your explanation (which I thank you for) is the assumption of zero perturbation and equilibrium at any given point before changing a variable like CO2. I don’t see that in the climate at any point. It appears to be constantly perturbed by changing variables and constantly changing on every scale. Starting a linear equation from an assumed static position that is not static will always give you the wrong answer, which is pretty much what we see with every climate model. They start off reasonably close, then diverge more and more because the starting position was only approximate and was actually in flux rather than in equilibrium.

At least some complex, chaotic, non-linear systems appear never to reach equilibrium because the input variables never stop changing – the economy for example, which is why economic models never forecast accurately over anything other than the short term. The other problem those models exhibit is that they appear to be right(ish) for a period until they are suddenly very wrong.

To look at our actual climate, it is very difficult to show that when we first started to produce significant amounts of CO2, the climate was stable and in equilibrium. The Little Ice Age suggests that it was not (and never is). I would also note that the non-climate events that are often removed from climate, like volcanoes, are still variables in climate and could cause climate trends (if they exist) to reverse or amplify.

If you start from the wrong position and assume things are not changing when they are, even the best model will end up simply wrong quite quickly.

Phoenix44 says:

“To me the basic flaw in your explanation (which I thank you for) is the assumption of zero perturbation and equilibrium at any given point before changing a variable like CO2.”

Agree. Stokes is assuming a “balance” in 1850 because CO2 was constant, but forgets/ignores the most important GHG — water vapor. Plenty of evidence that increasing/decreasing water vapor effects persist for many yrs (perhaps a major cause of the LIA & MWP) and it is never really in balance.

Phoenix44,

“Starting a linear equation from an assumed static position that is not static will always give you the wrong answer, which is pretty much what we see with every climate model.”

Perfectly stated!

‘Mr Stokes, why, pray tell, is the estimated temperature in 1850 the base state one is measuring perturbations from? Why not 1200, still in the MWP, as the “normal” temperature?’

I have seen your answer Nick, but why not use a warmer period as the reference point that also did not have a CO2 influence? The 1730s were the warmest period until the 1990s according to Phil Jones. Why not use this? Or the even warmer 1540 period?

Personally I do not think there is anything that can be termed ‘normal’. The temperature goes up and it goes down and is rarely static for too long. It varies due to a number of factors, some of which are key at any one time and might then be replaced by others.

When looking at the 1730s decade, Phil Jones was actually most interested in what happened in 1740, just about the most severe winter ever. After that he confirmed that the climate was much more variable than he had previously believed.

Beng135

You said, “Plenty of evidence that increasing/decreasing water vapor effects persist for many yrs (perhaps a major cause of the LIA & MWP) and is never really in balance.”

Indeed, both of the links provided by Stokes at the beginning of his article admit that water vapor is the most important forcing agent, and that clouds are the most uncertain. And, the article by Roe states that the effects will be most persistent for those factors the system is most sensitive to!

Tony

“Why not use this? Or the even warmer 1540 period?”

You don’t need a period embodying some target characteristic. It’s just a question of choosing two states that you can measure and see what the difference is that could be interpreted as a response to something operating over that period. So it needs to be fairly recent, so we know more about it. And it really should not include too much time in which the forcing wasn’t operating, since that just confuses the issue. So I think 1850 is a reasonable choice, although again, it was Lord M’s choice, not mine.

In terms of working out a rate from an interval difference, the lack of equilibrium at present is far more of a problem than the lack of equilibrium in 1850. It’s why ECS is such a hard problem.

As there are equations that should produce a certain temperature from a certain insolation, why does a magic state somehow occur? The formula should work equally well for 1200, 1650, or 1850, if it actually works now.

“So 1850 is representative of a state when forcing from GHG was stable.”

I think you need to clarify that this refers to anthropogenic GHG’s from burning fossil fuels. Who knows what natural changes in GHG concentrations (including water vapor) were occurring in pre-industrial times. But, as you noted above, feedback on perturbations is a linear combination of all of them, so there’s nothing analytically wrong with focusing only on the feedback due to anthropogenic emissions to get a theoretical calculation of the net effect of those emissions, though I think there’s a huge issue on the reliability of those theoretical calculations.

“I think you need to clarify that this refers to anthropogenic GHG’s from burning fossil fuels.”

It actually refers to the forcing. Volcanic CO2, say, would be included too. And usually solar variations and volcanic aerosols are included as well. The reason is that you are going to measure response variables, and for this purpose you can’t do attribution (that may come later). Of course, there will be noise – fluctuations that would have happened without forcing.

The linear equations that I wrote express required relations in the variables, regardless of cause. And there is one extra variable, or degree of freedom. It is when you prescribe that extra variable to get a numerical answer that you inject the notion of cause. Whatever you prescribe is the forcing. For an amplifier, it would be the input signal.

Climate is never at equilibrium, and can never be, because of the very different time constants in different parts of the system, particularly between the ocean and the atmosphere. This is why ECS, for example, has no physical meaning.

Nick Stokes – June 6, 2019 at 6:41 pm

Enough is more than enough.

Excerpt: The “title” and the 1st two sentences from the very 1st paragraph of Nick Stokes’ above posted article, to wit:

Now I have always assumed that Nick S was a passionately committed believer and supporter of CO2 causing Anthropogenic Global Warming (CAGW) climate change, which is 100% based in/on LWIR “heat energy” feedbacks between all the per se “greenhouse gases” (except H2O vapor of course) and the earth’s surface.

So, given the fact that CAGW climate change is rooted in the belief that there is a “mystifying feedback” that exists between atmospheric CO2 and surface entities, then why in the world would an avid believer want to be “demystifying” that which must remain “mythical” to be of any value to/for their “claims of fame” and future employment?

And just why would people “outside climate science” give-a-hoot about said “feedback analogies” other than to disprove and criticize the silly, per se, climate scientists whose livelihood is dependent upon said “feedbacks”?

And anyone that believes the following ……. would take things back that they never took in the 1st place, to wit:

Of course the “feedback analogies” are a part of the Global Computer Models (GCMs) ….. a BIG, BIG, BIG part of them, ……. and that is pretty much exactly why those Models have failed miserably at “forecasting the future” climate ……… as well as “hindcasting the past” climate.

Sam C, ….. yea ole Devil’s advocate just doing his thingy. 🙂

Joe Bastardi has seen that even for a 2-week forecast, the US models cannot “see” any cold air, and only see it when the forecast is for a mere couple of days in the future. The “climate” models are essentially the same as the forecast models. I have no doubt both types of models are bovine excrement because they (purposely) vastly overestimate CO2 effects, among other issues. Watch his 2 free public videos here:

http://www.weatherbell.com/premium

Surely the starting point is the very earliest days of the solar system, say the time when the planet first acquired an atmosphere, and when the sun had a luminosity far less than today. Everything is a perturbation from that time onwards.

If one does not want to go that far back, surely it must be the start of the Holocene.

If we cannot explain the temperature profile of the Holocene, we have no chance of reasonably estimating how temperatures will progress and develop throughout the remainder of the Holocene, and eventually back into the deep throes of the ice age that the planet is presently in.

Richard V

You are on the right track with the call for looking farther into the past.

Basically Nick (and others as well) are saying that if the CO2 concentration was stable, GHG forcing was stable (unchanging) at the time. To me, this is a fatal flaw in Nick’s argument. Let’s take his assumption that (because CO2 was stable at 280 ppm in 1850) the temperature was therefore stable, because the GHG forcing was stable. All that is implied in his starting assumptions.

I don’t think I am misrepresenting it in any way. See Nick’s words for confirmation. Stable CO2 means stable total GHG forcing means stable temperature, which prevailed in 1850.

Logically, if applied to previous times, it means that if the CO2 concentration was unchanging going back in time, the temperature was also unchanging, because there was no perturbation from 280 ppm. I am ignoring any cause-effect arguments, just observing.

Essentially the premise about 1850 is that CO2 concentration represents temperature. If there are feedbacks at play, Nick has it that they are in any case wrapped in the linear equations that gave us the equilibrium temperature in 1850. I am not claiming that CO2 was the only cause of the temperature rise from some zero-GHG state, only that Nick is using the CO2 concentration as a proxy for all contributions via a set of equations that we do not need to know about in detail.

I agree we don’t need to know the details, but we should not accept the premise that the temperature in 1850 was tied to the CO2 concentration that happened to prevail at the time without first testing it.

If the premise upon which Nick’s explanation is based is correct, then we can check for some f(x) function of global temperature and how it varies, compared with the CO2 concentration at the time. Looking back from 1850 to 8000 BC we find numerous proxy temperature profiles indicating that the temperature is far from stable and that the CO2 is almost invariant. This bodes poorly for the assumption.

The temperature, presumed to be strongly dependent on the total GHG forcing, itself indicated by the CO2 concentration as proxy, is all over the place. It is obvious that the “GHG forcing” that supposedly dominates the temperature, nay, produces it, is itself dominated by non-GHG factors, because reality contradicts the assumption.

The “assumption” about 1850 is overly simplistic. The tightly constrained CO2 level over millennia and the wandering temperature (several degrees C) indicate that either the temperature response is very strong for tiny variations in CO2 (disproved by recent observations), or there are other, larger factors controlling the global temperature. Stable CO2, as a proxy for all GHGs, was unable to stabilize the global temperature.

How then will the IPCC’s goal of a stable CO2 concentration bring about a stable temperature? It never had that effect before. Why should it work now?

On the flip side, how do we know that a stable temperature or a varying one was not produced by non-GHG factors? Obviously they dominate, in terms of temperature result. We have millennia of confirming observations and proxies.

On the basis of evidence available, there is no reason to accept that in 1850 there was an equilibrium temperature state for all climate forcing factors. The CO2 concentration increased very little from anthropogenic sources during the subsequent 100 years, yet the temperature rose rapidly until 1940 – rapidly compared with the net rise since then when GHG concentrations increased markedly.

I conclude by observing that strong positive feedbacks are nowhere in evidence in the historical record over the past 10,000 years, while there is ample evidence that CO2 is not a convincing proxy candidate for total forcing nor for temperature. There is no historical evidence that, going forward, stabilizing the CO2 concentration will stabilize the global temperature.

Richard V

“Everything is a perturbation from that time onwards.”

There is no one designated reference state. You can compare any two states not too far apart. For convenience one is designated reference, and relative to that you calculate the perturbation terms. In the tyre example I gave before, the reference state is what you presently have, not a tyre in a vacuum. If you had to pump it up again later, that would be the new reference state.

Crispin.

“If there are feedbacks at play, Nick has it that they are in any case wrapped in the linear equations that gave us the equilibrium temperature in 1850.”

Again I must protest that 1850 was Lord M’s choice, not mine, although I don’t criticise it. It’s true that you can’t get perfect stability at the start of the range. You certainly don’t have it at the end either, and that is a much greater deviation. That doesn’t seem to bother folks who like to forget about the E in ECS.

“How then will the IPCC’s goal of a stable CO2 concentration bring about a stable temperature?”

It won’t. The goal is to remove a cause that is forcing a continuing temperature rise. We’ll still have to live with variations.

Nick wrote

“The goal is to remove a cause that is forcing a continuing temperature rise.”

There is some correlation of AG emissions of CO2 with temperature some of the time in the past 70 years, but not much, and the correlation coefficient is dreadful.

Humanity is utterly wasting its time and money attempting to control the global temperature. The ECS is low, though still unknown. There is no strong, positive water vapor feedback in evidence, and plenty of evidence that its existence is dubious.

There is also ample evidence that having a stable CO2 concentration does not stabilize the temperature, and further that increasing it 50% has nearly no effect, much against expectation. All the climate modelling and hype it produces looks like crap. It’s ridiculous.

Yes, but “The Narrative” must be propagated! Oh, the HUMANITY!!

So Nick, are you going to let the embarrassingly uninformed speak for you?

Honest question: why do we know (or assume) that all components of x are linear?

It is small-perturbation (linear) theory. It’s the general calculus proposition that locally the tangent line, plane or whatever is a good approximation to the function. The designers of electronic amplifiers go to a lot of trouble to make sure this works even for quite large perturbations. For something like climate, one relies mainly on the fact that the underlying laws have smooth variation. Heat fluxes are generally proportional to temperature difference, etc.
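The tangent-approximation point can be checked in a couple of lines; sin is just a stand-in smooth function:

```python
import math

# Near x0, the tangent f(x0) + f'(x0)*dx tracks f closely,
# and the error shrinks like dx^2 as the perturbation shrinks.
f, fp = math.sin, math.cos   # function and its derivative
x0 = 1.0
for dx in (0.1, 0.01, 0.001):
    err = abs(f(x0 + dx) - (f(x0) + fp(x0) * dx))
    print(dx, err)   # error shrinks ~100x each time dx shrinks 10x
```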

Except all that is only partially true, and only for convection. It isn’t correct near any phase change, in many solids, or for radiative transfer. Wonder how many of those other situations are involved in climate science …. oh, all of them. So using that theory is less than useless.

You make a good point here LdB.

Water is a good example as at phase change the temperature remains constant, thus the Planck equation coefficient (sensitivity) is zero.

And there is an awful lot of this phase change going on in the atmosphere.

And the phase changes from liquid to gas to solid and back to liquid happen at different altitudes. The release of latent heat near the tropopause, at the tops of cumulonimbus clouds in thunderstorms, and in tropical storms, completely bypasses the “greenhouse” effects of CO2 and H2O (vapor) that the models say “trap” heat near the surface. Even before this happens, water vapor condenses into droplets to form clouds, which block insolation at higher altitudes. Both of these negative feedbacks from water vapor are absent from the naive “greenhouse gas forcing” linear arithmetic.

Monster,

The phase changes at different altitudes and the associated energy/heat transformations IS an elephant in the room. Further consideration needs to be that these rapidly developing storms are not static or stationary but actually moving and developing ahead of the highest altitude shown on radar. The highest altitude is likely where the storm is beginning to collapse and dump rain/H2O. It’s a very dynamic process and a tremendous amount of energy involved.

I am very relieved that others are seeing what I see.

eyesonu – June 7, 2019 at 2:00 pm

Eyesonu, fear not, because lots of others are seeing the same.

It is the CAGW shouters and believers who intentionally avert their eyes and their minds to what some of us freely admit to seeing. And some people won’t say anything due to their misguided respect for “authority”.

Heat fluxes in the atmosphere are greatly affected by water phase change, which is as non-linear (actually, you could call it catastrophic in the mathematical sense) as you can imagine.

Obfuscation.

I got into the conversation a bit late, but, come on, guys. Any of you who have attempted to model complicated systems know that the first thing you do is linearize the heck out of the defining equations, solve, and see how the results compare to observations. If they compare well, then you can use the results to interpolate with confidence between data points, but extrapolate (predict) with extreme care. Mr. Stokes’ comments about linearity and perturbation theory are imho right on; the argument then becomes is the baseline analysis (equation set) correct…

Climate sensitivity is given as K per doubling of CO2. In other words, temperature is proportional to the log of CO2. By definition, that is not linear.

You can treat the problem as linear over a certain range but to get away with that, you should really have a solid understanding of the system’s response. That’s not the case for the climate.

For sure the climate sensitivity approximation to reality doesn’t work for very low levels of CO2. Going from zero molecules of CO2 to one molecule would produce an infinite temperature rise because going from zero to one molecule is an infinite number of doublings. Similarly, for very high levels such as on Venus, the temperature is more determined by the density of the atmosphere than by the radiation properties of CO2.

So, where is the climate sensitivity concept valid? We don’t actually know that for sure.

“Climate sensitivity is given as K per doubling of CO2.”

It’s often given as K per unit forcing in W/m2. Then you can make a conversion from W/m2 to log(CO2). But anyway, K per doubling is linear. Doubling means a unit increase in log_2(CO2).

It’s true that the linear relation between K/(W/m2) and log(CO2) has a limited range. It was originally worked out empirically by Arrhenius in 1896 and has held good since then. It’s possible we’ll get to a stage where it has to be refined. Or better, measure a new operating point.
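The log_2 relation can be sketched in a few lines; the 3.0 K per doubling here is purely an illustrative placeholder, not a claimed sensitivity:

```python
import math

def warming_k(co2_ppm, co2_ref_ppm=280.0, k_per_doubling=3.0):
    """Warming under the assumption that dT is linear in log2(CO2).
    k_per_doubling is an illustrative placeholder, not an estimate."""
    return k_per_doubling * math.log2(co2_ppm / co2_ref_ppm)

print(warming_k(560.0))                   # one doubling -> 3.0
print(warming_k(280.0 * math.sqrt(2.0)))  # half a doubling -> 1.5
```

Linear in log_2(CO2) just means each doubling adds the same increment, which is all “K per doubling” asserts.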

I don’t want to jump the gun here, but I can already see what I suspect the issue is. This is an empirical claim:

“One thing that is important is that you keep the sets of variables separate. The components of x0 satisfy a state equation. The perturbation components satisfy equations, but are proportional to the perturbation. You can’t mix them.”

I don’t think it is possible to assert axiomatically. Just off the top of my head I can think of a number of systems that falsify the claim, notably in this case temperature at the melting point of water. The perturbation by itself won’t satisfy the same equations independently of the initial conditions at all.

These are globally averaged quantities, and at equilibrium, so there is time averaging as well. On the fine scale, there is always ice melting somewhere, and it’s all added up. The main point is that there might be discontinuity at melting in specific heat, and so rate of temperature change, but what mainly affects the averages are conserved quantities, ie total heat.

Again, this seems to be a flawed assumption. Why are you assuming equilibrium? Weather is essentially evidence of disequilibrium and there has always been weather as far as we know.

Heat is also not a conserved quantity in thermodynamics, energy is. Heat can raise the temperature by some amount or vaporize some water at the ocean surface. That work is then done and cannot be done again without violating conservation of energy.

In other words: If you are considering only radiant energy in and energy out you are completely ignoring the fact that heat is *any* transfer of energy, including chemical, mechanical and so forth. If you see an energy imbalance in the thermal spectrum you have no way of knowing just from that whether it is a result of increased temperature or increased mechanical work (say extra hurricanes or whatever).

Sure, it may be possible that the system behaves that way, but again, that is a very strong empirical claim that must be subjected to extremely rigorous empirical tests. It cannot just be baldly asserted, as you are doing.

I hate to say it, but the very fact that the wind blows falsifies the notion. Disequilibrium exists at every level you care to look at, and “average temperature” of a non-equilibrium system was a coherent concept to begin with. Even if it was the case though, equilibrium in atmospheric temperature is less than a rounding error in the total climate system.

Pardon, but did a typeristing error preclude you from saying, “…’average temperature’ of a non-equilibrium system was *not* a coherent concept to begin with”? If so, then you’ll get no raised eyebrows from me.

“Pardon, but did a typeristing error preclude you from saying, “…’average temperature’ of a non-equilibrium system was not a coherent concept to begin with”?”

Yes, sorry, I completely messed that sentence up. The average temperature of a system not at equilibrium is not a sensible concept.

>>

Heat is also not a conserved quantity in thermodynamics, energy is.

<<

The first law of thermodynamics in differential form (using the Clausius standard) is:

dU = δQ − δW

This is also called the law of conservation of energy. U is the internal energy of the system, Q is the heat transferred to or from the system (positive heat is heat transferred to the system), and W is the work performed on or by the system (positive work is work done by the system). Although the units of heat used to be BTUs or calories (still is in some disciplines), the SI unit for heat is the joule–a unit of energy. Both BTUs and calories can be converted to joules. Heat is therefore energy and is conserved.

Jim
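As a minimal numeric sketch of that sign convention (heat into the system positive, work done by the system positive), using nothing beyond the equation itself:

```python
def delta_u(heat_in_j, work_by_system_j):
    """First law, Clausius convention: dU = dQ - dW.
    Positive heat_in_j is heat added to the system;
    positive work_by_system_j is work done by the system."""
    return heat_in_j - work_by_system_j

# 100 J of heat in, 40 J of work done by the system:
print(delta_u(100.0, 40.0))  # internal energy rises by 60.0 J
```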

Just think about it for a second. Heat is the change in energy: “Let the amount of heat which must be imparted during the transition of the gas in a definite manner from any given state to another, in which its volume is v and its temperature t, be called Q”…

If heat, the flow of energy between systems, was conserved, equilibrium would be impossible. Temperature is defined as a local thermodynamic equilibrium. You see how this could be a problem.

Heat is not the same as energy.

The form of the equation you gave is for a closed system, which obviously makes sense. Still it is the energy that is conserved, not the heat, since the system will move from disequilibrium to equilibrium over time (second law). So as the system does work to reach equilibrium the heat flow will necessarily decrease to zero.

>>

Heat is not the same as energy.

<<

I’m sorry, but heat is energy. The definition of heat is a transfer of energy across a system boundary due to a temperature difference. Heat is measured in joules, BTUs, and calories. It can also be converted to foot-pounds, newton-meters, pascal-cubic meters, watt-hours, or any other unit of energy.

Jim

Heat is energy that flows spontaneously from warmer to cooler. So all heat is energy.

However, all energy is not heat … potential energy, chemical energy, radiational energy, etc.

… and therefore, Beeze’s claim that “Heat is not the same as energy” is 100% true.

w.

>I’m sorry. but heat is energy. The definition of heat is a transfer of energy across a system boundary

A thing and the transfer of a thing are not the same. Besides, heat only refers to thermal energy, not all the other forms. The formula you provided *only* applies in a closed system when you ignore all other forms of energy. But obviously that doesn’t describe the climate.

The total energy is all the different types of energy added together, so heat in the sense of temperature, kinetic, chemical potential etc. When you add all of these together you get the total energy and that is what is conserved. Thermal energy is not a conserved quantity.

It’s interesting how we are living in a bizarre world–where people make contradicting statements and think both are true.

>>

Willis Eschenbach

June 7, 2019 at 5:17 pm

Heat is energy that flows spontaneously from warmer to cooler.

<<

That’s not exactly the definition of heat. This is from my text on Classical Thermodynamics:

“Heat is defined as the form of energy that is transferred across the boundary of a system at a given temperature to another system (or the surroundings) at a lower temperature by virtue of the temperature difference between the two systems. That is, heat is transferred from the system at the higher temperature to the system at the lower temperature, and the heat transfer occurs solely because of the temperature difference between the two systems. Another aspect of this definition of heat is that a body never contains heat. Rather heat can be identified only as it crosses the boundary. Thus, heat is a transient phenomenon.”

Notice that energy transferred from a colder region to a warmer region is not heat by this definition. Also, by this definition the atmosphere can’t trap heat, and heat can’t hide in the ocean.

>>

So all heat is energy.

<<

True. That’s what I’ve been saying–I thought.

>>

However, all energy is not heat … potential energy, chemical energy, radiational energy, etc.

<<

No, but you can change heat into other forms of energy. Heat, like work, is a boundary phenomenon. Heat is energy; work is energy; both only exist at the boundary of a system, but the energy they represent must be accounted for. Either the internal energy changes, or work, heat, or other forms of energy appear to balance the equation. Energy is conserved.

>>

… and therefore, Beeze’s claim that “Heat is not the same as energy” is 100% true.

<<

As I said, this is very bizarre. If ALL heat is energy, then how can you say that heat is not the same as energy? One of the definitions of energy is the ability to do work. In thermodynamics, work, like heat, is a transient phenomenon. Are you now saying that work isn’t energy either? I’ve taken a lot of physics courses, and work is always considered to be the same as energy–always.

Jim

>>

Beeze

June 7, 2019 at 5:32 pm

A thing and the transfer of a thing are not the same.

<<

You’re arguing definitions? Look up the definition of heat. Heat is the transfer of energy, so your statement is wrong.

>>

Besides, heat only refers to thermal energy, not all the other forms.

<<

I’m not sure why you think this is important. I agree–heat is heat.

>>

The formula you provided *only* applies in a closed system when you ignore all other forms of energy.

<<

I can add additional terms to that formula. I didn’t want to confuse the issue with lots of terms. Chemists add a term for chemical potential. You can add a term for surface tension to explain the workings of those little boats that run on surface tension. There’s the work term of course, and the term for internal energy. If a term is zero, then it doesn’t need to be included. For example, heat is zero for an adiabatic system.

>>

But obviously that doesn’t describe the climate.

<<

It depends on what you want to describe. The first law of thermodynamics won’t entirely explain the climate, but it does play a role.

>>

The total energy is all the different types of energy added together, so heat in the sense of temperature, kinetic, chemical potential etc. When you add all of these together you get the total energy and that is what is conserved. Thermal energy is not a conserved quantity.

<<

Well, temperature isn’t a form of energy, so it won’t add into your total. You need to add heat into that total to make the numbers come out right.

It’s not often you see someone (two someones, actually) argue “x is y” and then “x is not y” in the same statement.

Jim

Your textbook definition causes the confusion. It uses the word “heat” in two senses: first as a flow of thermal energy, and then as thermal energy itself, in the sense of temperature as a measurement of local equilibrium of thermal energy. So you have two bodies with different equilibria, and the “heat” in the equation is the additional thermal energy of the “hotter” supplied to the equilibrium of the “cooler”.

It is clear that the distinction is made from the final sentences: “Another aspect of this definition of heat is that a body never contains heat. Rather heat can be identified only as it crosses the boundary. Thus, heat is a transient phenomenon.”

A transient phenomenon cannot, by that fact alone, be a conserved quantity. Again, your equation only applies to a closed system where thermal energy is the only form under consideration. The FLOW of energy is not conserved and is distinct from the TOTAL energy, which is.

If the flow of thermal energy was conserved the second law would be violated.

“It is wrong to include variables from the original state equation. One reason is that they have been accounted for already in the balance of the state before perturbation.”

I don’t see the logic here. Can a control expert chip in? In the meantime, I’ll check my college textbooks from a couple of courses, “Process Systems Analysis and Control” and “Process Modeling, Simulations and Control for Chemical Engineers”. It’s been a while.

Too much hand waving and smoke and mathematical mirrors in the replies, so it seems.

Mental mathturbation.

Math Heads think if the math is right, then they do not need to understand the mechanisms. For them, False Premise + Good Math = Settled Science.

That is an inconvenient truth.

Beans beans the musical fruit

The more you eat the more you toot

The more you toot the better you feel

So let’s have beans for every meal.

Beans gives positive feedback to gas emissions. 😉

Over my head.

But today I drove through a region that was 15 degrees C for 100 kms, then it rose to 18 degrees C for the last 100 kms. And there it’s remained.

So far I haven’t experienced any deleterious effects.

But just in case, I’m going to spend tonight in the waiting room at the emergency ward at our local hospital.

One can’t be too careful about being exposed to extreme weather.

The President has something to say about extreme weather.

Having waded through all that, I am way more mystified than I was before. Nothing at all has been clarified, so either I am denser than a sack of hammers, or your first few words, “People outside of climate science”, (of whom I am one) are a classic case of misdirection.

Hi Nick,

the maths aside, I’m sure you have it correct, but how did you come to the conclusion that the posted long-winded spiel ‘demystified’ anything? I would assume the electronics part of the math is quite well known, as the variables are controlled for in the design; how are you applying/controlling for unknown variables? From what I understand this is the crux of the matter.

You state above, as a reference to a baseline period, “They are looking at forcing from GHG and its consequences, including feedback, that are proportional to it at equilibrium. So 1850 is representative of a state when forcing from GHG was stable”. Excuse my haste; I’ve just stripped this from Wikipedia (nothing wrong there … ): the primary greenhouse gases in the Earth’s atmosphere are water vapor, carbon dioxide, methane, nitrous oxide and ozone.

My dodgy reference describes carbon dioxide as a trace gas whereas water vapor (and I got caught in an awful lot of it in Adelaide the other day) could be described as significant. Your post refers to GHGs (total) but alludes to only one of the variables being affected by the multiplication of a large coefficient. Why in the literature is this assumed to be carbon dioxide? The reason I ask is that in your post you mention thermal runaway, and that is when you lost me. IF this happens and IF that happens, you turned a reasonable attempt at demystifying feedback into mush. Somewhere in your explanation you should have mentioned how the feedback formula accounts for the feedback that controls the imagined ‘thermal runaway’.

Cheers,

Andy

“how did you come to the conclusion that the posted long-winded spiel ‘demystified’ anything”

Well, I think Lord M is way ahead of me in verbosity, or even mass of math. I think it demystifies because of what is not there. It describes simple linear algebra that you can do without the blessing of the venerable Bode. You don’t even need a tenured professor of control theory. You don’t need to argue about what the EE books say about what is a signal.

Feedback is an analogy used for thinking about climate. That’s fine, but you need to make sure the analogy doesn’t take over. I’ve shown that it is just a way of talking about the underlying linear equations. And that is the place to return to if you want to resolve anything about climate.

“but your post alludes to only one of the variables being affected by the multiplication of a large coefficient.”

Yes, and I was inexact there. It’s true for say a voltage amplifier, but in the climate example, the coefficients have different units, so it doesn’t make sense to describe one as large. As I said above, in the equations for perturbations, there is one variable too many to get numbers out. You have to put in more information. You have to describe something that would induce the perturbation, and so quantify it. If you know that a certain amount of GHG was added, that will fix the perturbed state.

Re Nick Jun 6 9.30

“Feedback is an analogy used for thinking about climate. That’s fine, but you need to make sure the analogy doesn’t take over. I’ve shown that it is just a way of talking about the underlying linear equations. And that is the place to return to if you want to resolve anything about climate.”

Without direct evidence to prove or disprove climate hypotheses, which will take decades to emerge, the climate people resort to computer models, or more properly mathematical analogies, of which linear approximations are one of many techniques. So your statement contains the very truth that the analogies HAVE taken over. Then you say that if you want to resolve anything in climate science you have to use analogies!

Clearly, you are uttering a paradox, and for your statement to make any sense you must resolve it.

Anyway, with regard to these mathematical analogies, aka computer models, here is a question for you; how do you verify and validate these models? If you can’t, then they remain hypothetical.

“how do you verify and validate these models?”

There are lots of internal things done. How well do they conserve energy, mass etc. They actually solve differential equations; the main test of any equation solver is to substitute the answer back in the equation and see if it satisfies.

But on a practical level, the main validation of the fluids aspect is that they double as weather forecasting programs. And they get that right, in huge detail, for some days into the future.

Of course, when used as climate programs, after a week or two they get to a stage where the phasing of events is lost. They keep forecasting weather, but it is no longer the weather that happens. But it is weather that is consistent with the forcings, which can then change. There is every reason to expect that in the long term that response to forcings will continue to be shown by the earth also, even if the weather doesn’t follow the same sequence.

With weather forecasting, you can check the results. It’s true that with GCMs you have to wait a long time. We are starting to get suitably long-term validation of the very early efforts like Hansen’s. The warming they forecast has been showing up.

>>The warming they forecast has been showing up.<<

Say what? The models are all running way hotter than reality.

“The warming they forecast has been showing up.”

What, all of it?

I thought temperatures peaked 2-3 years ago?

I still maintain late 80s were warmer and we still have lots of record highs from the 30s. It must be extremely warm elsewhere 😉

“It’s true that with GCM’s you have to wait a long time.”

Nick, thanks for admitting that GCMs are just hypotheses … and must remain so for decades at least.

In other words, they are literally not true.

TonyN

If you compare the average of the many model forecasts, the results speak for themselves: they are not only untrue but laughable.

Anyone saying otherwise is not of sound mind and still living in the 1970s LSD era.

Climate models = Imagineering.

Nick,

The warming they forecast? Come on!

Most of what they ‘forecast’ was known history…. not really a forecast, and easy to match with ‘creative’ forcing history (including very creative aerosol history). I stipulate that rising GHG levels have to cause some warming. But that isn’t even interesting, much less important. How much warming, where, how much rain, where, etc. are the things that matter, and the models are not very good at those predictions. Even more important are the down-stream consequences (especially sea level rise), and those consequences are even less certain than the models. As we have discussed on other threads (at other blogs), empirical estimates consistently disagree with the models. What’s more, the models disagree with *each other* by substantial amounts, which ought to make rational people question their validity, their underlying assumptions, and of course, their accuracy… if they were all somewhere near correct, then they would all agree with each other. They don’t; and even such agreement, if it existed, would not prove accuracy… but the substantial disagreement definitely proves *inaccuracy*.

Your post is at best tangential to what actually matters. Monckton’s posts are orthogonal to anything that matters, and I am puzzled you would bother to reply.

Stokes,

You said, “And they get that right, in huge detail, for some days into the future.”

Not where I live in the Mid-West USA. The rate of false-positives for precipitation is very high. They seem to do fairly well for temperatures, but I suspect that they could do almost as well with historical records.

Clyde Spencer says:

“They seem to do fairly well for temperatures”

They definitely do NOT do well — they are pathetic. Watch Joe Bastardi’s public videos for a while — they ALWAYS overestimate future temps, especially the further they go into the future (like 2 weeks):

http://www.weatherbell.com/premium

I think, Nick, you are missing a point. The simple model you show is an elementary analysis of an amplifier. The feedback is not even necessary, since a basic amplifier can be controlled by the input voltage. More voltage, more gain, more noise. Feedback is an attempt to control a runaway condition.

The climate is not simple, not elementary, and doesn’t function via linear algebra. Virtually all the processes go from laminar to turbulent: wind, water, heat, radiation (very complicated, with absorption, emission, conversion to atomic vibrations, conversion to thermal motion), etc.

That is the problem with using averages to model the system. Nothing in the system responds linearly through average inputs. There may be a range where the response appears linear, but outside that range it is not.

I think the best example most people are at least aware of is an airplane stalling: it can happen at any airspeed, altitude, temperature, humidity etc. At a particular point, attempting to make a plane lift more, even though the lift response has been a nice smooth curve, the plane will stall and possibly crash. The air flowing over the wing responds to the exact temperature, pressure, and velocity at particular points. Go one tiny step too far and the flow breaks down into turbulence.

The designers go to great lengths to understand where and under what conditions the air at any point on the airplane can go from laminar to turbulent flow because it is so important.

Lord M. is a quack, kook and crank.

” … how are you applying/controlling for unknown variables? ”


fudge factors come in handy

What I can’t work out is why you would have a feedback on temperature at all; it’s a minor byproduct of the process. It is a bit like putting feedback on the wind resistance of a heavily loaded train and pretending the speed of the train is controlled by the wind resistance.

The train example works pretty much the same; you can convince yourself the wind speed has an effect all you like, until the driver really opens the throttle or brakes and shatters your illusion.

Temperature is nothing more than the speed of molecules; it isn’t a major player in the electromagnetic radiative balance at all, and it isn’t valid to treat it like that.

“Temperature is nothing more than the speed of molecules; it isn’t a major player in the electromagnetic radiative balance at all, and it isn’t valid to treat it like that.”

Well, there is Stefan-Boltzmann: εσT⁴. But the issue here is flux at the surface. Down IR is a big part of that, and water vapour (wv) is a big source. And we know wv concentration varies with temperature.

The thing is to work out the governing equations, and then see what matters. Not before.
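As a rough sketch of how εσT⁴ enters, and of what its first-order linearisation looks like; ε = 1 and T = 288 K are illustrative choices, not claims:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_flux(t, eps=1.0):
    """Radiated flux F = eps * sigma * T^4."""
    return eps * SIGMA * t ** 4

def sb_slope(t, eps=1.0):
    """Linearised sensitivity dF/dT = 4 * eps * sigma * T^3."""
    return 4.0 * eps * SIGMA * t ** 3

t0 = 288.0
print(round(sb_flux(t0), 1))                      # flux at the operating point
print(round(sb_flux(t0 + 1.0) - sb_flux(t0), 3))  # exact change for +1 K
print(round(sb_slope(t0), 3))                     # linear estimate of the same
```

For a 1 K perturbation around 288 K the linear estimate and the exact change agree to better than one percent, which is why the perturbation treatment works.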

Wind resistance is pretty important for trains.

Only you would believe any of that, because you are pretty much illiterate to real science. This is all just part of your trolling and misdirection where you try to pretend you understand it, sorry you aren’t fooling anyone.

Nice ad hom.

Actually that demonstrates perfectly the poisonous attitude of the “hard-line” denizens here.

That makes any engagement so dispiriting.

Even with the likes of the ever patient/polite and patently knowledgeable Nick.

It’s as if you wear your bias and hatred of the science (as some metaphor for the opposite stance to your ideological bias) as a badge, unable to move past it.

Yes, yes. The world’s Earth scientists are all incompetent and/or corrupt and a silver tongued classics scholar knows more than them sufficient to reveal the “startling” error of climate science.

Takes a massive bias to buy that bizarre illogic.

How about you become a true sceptic and be sceptical of the snake-oil salesmen in your own camp.

The idea that anyone presenting the science is a “troll” if it fails to comply with the “it’s not happening, it’s natural, it’s a scam, lefty take-over” etc. meme.

No, someone who explains/links to the science does not become a troll for that.

And the biggest “fools” are those that are so wedded to their bias that they come on here and say worthless things like the above.

PS: This is why I turned down the invitation of Charles to contribute here.

And why I expect ad homs to come my way below.

To which I will not respond.

So feel free.

“Nice ad hom.

Actually that demonstrates perfectly the poisonous attitude of the “hard-line” denizens here.

That makes any engagement so dispiriting.

Even with the likes of the ever patient/polite and patently knowledgeable Nick.”

Do you consider everyone here a hard-line denizen?

Yes, Nick has the patience of Job.

So 300ppm of CO2 is the baseline in 1850. That’s logical. I saw the other day where the CO2 in the atmosphere has increased to almost 415ppm. Yet the temperatures have been cooling since Feb. 2016. Disconnect?

Here’s some science, since we are referencing circuit analysis:

atmospheric CO2 acts on LWIR radiation to space, as a variable time delay circuit, on the time scale from ns to ms. Some might point out that the delay can be in the ps range.

PS: This post is neither trolling, nor an ad hom.

PPS: Might some here consider your post to be a trolling ad hom screed?

I have no issue discussing things but when you can’t even get a sensible discussion going because one party decides to rewrite definitions and basic norms it is clear they are trolling. Nick is a troll, a very polite one but a troll none the less. How you would have been received I can not say, can you construct something using normal science?

LdB

Years ago I taught a class on System Dynamics computer modeling. For the benefit of the students, I was demonstrating how to build a model to simulate the performance of a car, which included the acceleration, gas mileage, and top speed, as a function of the depression of the accelerator pedal.

Wind resistance was something I had to take into account because the power needed to overcome it varies with the 3rd power of the speed. To ignore it would give unrealistic performance.

Nick

Re: “Stefan-Boltzmann. εσT⁴”

To clarify, please expand the full absolute temperature to the first two Taylor terms.

When simplifying please keep the full absolute temperature terms as well as the Del T.

That would help show how important is the absolute temperature vs assuming just Del T.

David
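David’s request can be sketched numerically: expand εσ(T0 + ΔT)⁴ and compare the zero-order (absolute temperature) term with the first-order ΔT term. T0 = 288 K and ΔT = 1 K are illustrative values:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

t0, dt = 288.0, 1.0
zeroth = SIGMA * t0 ** 4            # absolute-temperature (operating point) term
first = 4.0 * SIGMA * t0 ** 3 * dt  # Del-T (perturbation) term

# The operating-point term dominates by a factor of exactly T0 / (4 * dt):
print(round(zeroth / first, 1))  # 72.0
```

So for a 1 K change the absolute-temperature term is some seventy times the perturbation term, which is the point David is asking to have shown.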

Nick, dear lad, the ‘equations’ govern nothing. They are, supposedly, a mathematical description of phenomena observed in the real world. Have a nice day!

For LdB re: “wind resistance”

https://www.quora.com/Why-is-air-resistance-roughly-proportional-to-the-cube-of-speed

As speed rises the dominant controlling force on an object becomes wind resistance.

Look up terminal velocity too.

OK, the change in the output of a system is the perturbation times the gain of the system. That’s fine.

The gain of the system is a function of the forward gain of the system and the feedback. There should be no argument about that.

There is an input signal for which the output is zero plus some offset. That is our input reference.

Suppose we have a black box for which we don’t know the gain. We find the gain by perturbing the input, measuring the change in the output, and taking the ratio.

What’s the value of the offset? If we know the value of the reference, we set the input signal to that and the output signal will be the offset.

So, we’ve got the climate. We’ve got an average global temperature and we’ve got the atmospheric CO2 level. Based on physics, we postulate a forward gain without feedback as something like 1C per doubling of CO2. With no other information, could we calculate the gain of the system with feedback? If we know the reference level and the output offset, the answer is yes. The gain will be:

G = (T – offset) / (ip – ref)

where G is the system gain, T is the global temperature (the output), ip is the input signal, log_2(CO2), ref is the input reference level, and offset is the output offset.

So, given a CO2 level and a global temperature and a reasonable figure for forward gain, we can calculate the system gain and thus the feedback as long as we know the system offset and the input reference. The bigger the difference between the input signal and the reference, for a given output (as compared with the offset), the smaller the gain and therefore the less positive the feedback.

So, Nick is right, it’s just linear algebra. Yes, all we need is the change in the input and the gain and we can calculate the change in the output. The trouble is that we don’t actually know the system gain. To figure out the system gain, we need to know the input reference. That is basically Monckton’s point.

So, you ask, could we get the system gain if we have a time series of data? Yes indeed. Lewis and Curry get something like 1.66 K per doubling for ECS (Equilibrium Climate Sensitivity). That puts Monckton’s result in the ball park.

“What’s the value of the offset? If we know the value of the reference, we set the input signal to that and the output signal will be the offset.”

No, you don’t. Valve audio amplifiers often had a transformer at the output stage. What is the offset there?

If it’s linear, you can work out gain by just graphing input vs output for a few signal levels, and the gain is the slope of the graph. The offset will be the intercept, but you don’t need to know that a priori. But you could measure it; it is just the no-signal state. You could do the same with climate.
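Nick’s slope-and-intercept procedure can be sketched with an ordinary least-squares fit; the gain of 3 and offset of 7 below are made up purely to check that the fit recovers them:

```python
def fit_line(xs, ys):
    """Least-squares slope (gain) and intercept (offset) of y against x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [0.0, 1.0, 2.0, 3.0]
ys = [3.0 * x + 7.0 for x in xs]  # a linear "device": gain 3, offset 7
gain, offset = fit_line(xs, ys)
print(gain, offset)  # 3.0 7.0
```

The slope is the gain and the intercept is the offset, neither of which has to be known a priori, which is the point being made.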

“That puts Monckton’s result in the ball park.”

Well, it puts L&C out of Monckton’s ball park. See the width of his histogram. That’s one thing about these low ECS claims. They are very inconsistent, but their fans don’t seem to mind. They just cheer when it gets lower.

But can they be turned up to 11, Nick?

That is the hard question.

“Valve audio amplifiers often had a transformer at the output stage. What is the offset there?”

That transformer is usually there to (passively) shift the (AC) impedance — the ratio of voltage to current — from what the amplifier would like to deliver to what the speakers would like to see. Conceptually, I suspect that your feedback analysis probably should be based on power (voltage times current) rather than voltage, but it’d take some work, and I doubt it’d change anything. And this whole feedback business wasn’t your idea in the first place.

I’m not sure that you’ve demystified anything except possibly for yourself. But thanks for trying.

I’m also not sure that you’ve really answered Lord Monckton’s paper. To the extent I can follow the paper, he seems to be arguing that one or more of the feedback terms used in climate science are in the wrong reference system (“local” relative to current state) rather than absolute. I’m pretty sure that one can do feedback analysis in either reference system, but one presumably does have to be consistent.

Don,

“I’m pretty sure that one can do feedback analysis in either reference system”

What I was trying to say with the first bit of calculus is that the analysis does compare a reference system with a local (perturbed) system. The feedback talk is applied to the difference (local minus reference) system, where everything is linearised to first order in the perturbations. He’s trying to transfer a reference variable (emission temp) to that first order system, but it is zero order. You just can’t do it.

Yes. That’s called AC coupling. For an electronic amplifier, a change in the DC level at the input produces a transient at the output and then the output returns to zero. Presumably for the climate such a characteristic would produce an ECS of zero.

Hi Nick,

Good to see your post. I have a question about treating the temperature record as a signal to be processed. I hope I’m expressing that properly.

It seems that in ordinary signal processing, you start with a clear signal that then gets degraded as it travels by outside forces: interference, jamming, what have you. In the case of the temperature record, what is the noise, and from where does it come?

James,

It’s a bit O/T. Noise is a problem locally, due to various measurement lapses. But that gets damped hugely by taking an average over time and space. For global averages, the main issue is sampling error: what other answer might you have got if you had sampled in different places? Bigger samples reduce this, so GHCN V4 is an improvement.

Commiebob – “the less positive the feedback”? In my electronics experience it is impossible to have a stable system which has net positive feedback. Negative feedback makes systems progressively more stable. I know this is true for simple, linear systems like amplifiers. However, the earth’s climate is composed of multiple nonlinear interconnected systems, so IMO it is probably naive to assume that what works for a linear amplifier will work with our vast and complex global climate system. And even more naive to use the simple amplifier analogy to ‘prove’ anything about climate science. I usually know when I am out of my depth, but apparently some here don’t.

The poster child for positive feedback is the regenerative receiver.

Usually, positive feedback is poison, unless you want to create an oscillator. Then, the Barkhausen stability criterion is the rule: the product of the forward and feedback gains (the loop gain) has to be unity.
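That boundary between amplifier and oscillator can be sketched in a few lines; a minimal closed-loop gain calculation, assuming an ideal frequency-independent forward gain A and feedback fraction beta (both values invented for illustration):

```python
# Closed-loop gain of a positive-feedback loop: G = A / (1 - A*beta).
# As the loop gain A*beta approaches unity (the Barkhausen condition),
# the closed-loop gain diverges -- the boundary between a stable
# amplifier and an oscillator.
def closed_loop_gain(A, beta):
    loop_gain = A * beta
    if abs(1.0 - loop_gain) < 1e-12:
        raise ValueError("loop gain = 1: Barkhausen condition, oscillation")
    return A / (1.0 - loop_gain)

A = 100.0
for beta in (0.0, 0.005, 0.009, 0.0099):
    print(f"beta={beta}: closed-loop gain = {closed_loop_gain(A, beta):.1f}")
```

With beta = 0 the gain is just A; as A·beta climbs toward 1 the closed-loop gain grows without bound.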

Joke:

There must be a bajillion kinds of stability analysis, depending on the design practices of the kind of system under consideration. The bottom line in any of them is to avoid having enough positive feedback to cause problems.

Good 1.

“As far as I can tell, Nick studiously ignored that part”

Not deliberately. But let me make amends. You wrote

G = (T – offset) / (ip – ref)

G being the gain. ip is the input signal, suggested as log(CO2). I am not sure why the denominator is written as a difference from a standard-state ref, while the corresponding term for T is described as an offset. But the comment says that the offset is determined from the reference state, so it seems to be the same thing.

Now I agree with that; it defines G as a rate, in a standard calculus way. Gain and feedback factors belong in the world of rates. Now what happens if you try to add something, a reference temperature or whatever, into the numerator? It isn’t a difference, but it will be treated as one. IOW, the gain will be calculated as if part of the change were a shift to the reference temperature from zero. And of course that is not true; worse, it happens regardless of the smallness of the actual changes. The rate G is not a stable limit that you can put a number on, but goes to infinity.
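The divergence is easy to see numerically; a minimal sketch, assuming a hypothetical first-order gain of 2 and a hypothetical zero-order reference temperature of 255 (both numbers invented for illustration):

```python
# Gain is a ratio of *perturbations*. The proper first-order gain
# dT/d(ip) stays fixed as the perturbation shrinks, but smuggling a
# zero-order reference temperature into the numerator makes the
# apparent "gain" diverge as the perturbation goes to zero.
G_TRUE = 2.0     # assumed first-order gain (hypothetical)
T_REF = 255.0    # zero-order reference temperature (hypothetical)

def gains(d_ip):
    dT = G_TRUE * d_ip                  # first-order response
    proper = dT / d_ip                  # stable rate
    improper = (dT + T_REF) / d_ip      # zero-order term included
    return proper, improper

for d_ip in (1.0, 0.1, 0.01):
    print(d_ip, gains(d_ip))
```

The proper ratio is 2.0 at every perturbation size; the improper one grows without limit as d_ip shrinks.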

Since pre-industrial times, we have had half of a doubling of atmospheric carbon dioxide concentration. In this period we have had about 0.8C global average temperature increase, but we have no way of knowing what part of that was caused by the increase of atmospheric greenhouse gases due to humans. UN IPCC suggests that at least half is anthropogenic. Considering that all of the various feedback mechanisms have been active during this whole time, why is there any reason to expect a larger temperature increase while the next half of the doubling is completed?

All of the feedback mechanisms that are responsible for driving the climate to some quasi-equilibrium state, such as our present interglacial period are active in producing that quasi-stable state. When it is disturbed, the climate responds. We have seen that response. It is not particularly frightening.

“The UN IPCC suggests that at least half is anthropogenic” So does Roy Spencer: “I’m willing to admit over half of it could well be due to increasing CO2.” http://www.drroyspencer.com/2019/06/uah-global-temperature-update-for-may-2019-0-32-deg-c/

In other words there is the undeniable possibility it is mostly or even entirely due to GHG emissions.

“the climate responds. We have seen that response.” We have seen the start of that response. 0.8C is the average, some places see less, some a lot more, 3-4C around the Arctic for example. There is more baked in https://iopscience.iop.org/article/10.1088/1748-9326/10/3/031001

another 1 °C masked by aerosols, and GHG concentrations are still rising exponentially.

Maybe a dark blue Arctic ocean won’t make too much difference, or maybe the ice has helped stabilise the climate. One more August Arctic cyclone like 2012’s and we’re going to find out.

Well, we will find out if you are right, because we are definitely going to go there: world emissions are still increasing and they won’t be stopping any time soon. That is because emission control is the dumbest idea you could use to tackle the problem, even if you assume it is correct.

That is what you get for letting a group of social science graduates loose on an engineering and physics problem … it is a little more challenging than asking “would you like fries with that”.

“but we have no way of knowing what part of that was caused by the increase of atmospheric greenhouse gases due to humans” – here is a serious bid:

“Simulations including an increased solar activity over the last century give a CO2 initiated warming of 0.2 °C and a solar influence of 0.54 °C over this period, corresponding to a CO2 climate sensitivity of 0.6 °C (doubling of CO2) and a solar sensitivity of 0.5 °C (0.1 % increase of the solar constant)”

https://www.researchgate.net/publication/268981652_Advanced_Two-Layer_Climate_Model_for_the_Assessment_of_Global_Warming_by_CO2

What amazes me here is not Nick’s clear explanation of feedback in artificial (electronics) systems, but his apparent belief that nature never figured that out too.

We burn hydrophobic long-chain fatty hydrocarbons (paraffins) in our jets and trucks and act like we humans invented something amazing. Yet nature figured out how to store and very carefully “burn” long-chain hydrocarbons in beta-oxidation reactions billions of years ago to fuel life.

Those greasy bug splatters smashed onto your windscreen are exactly that — greasy oil that the now-dead insect was to use to fuel its flying around for days or weeks, doing whatever it was going to do before it met your car’s windshield at highway speed. Fatty fuels = dense energy.

The point here is, we humans pretend we invented something that nature figured out billions of years ago. Earth’s climate system is clearly ruled by feedbacks. We didn’t invent that. It just happened.

A many-on-many setup. Positive and negative feedbacks. On all time scales — from hours to millennia. Water of course, in its 3 phase changes, is the biggest of these internal feedbacks. Sea ice modulation of high-latitude heat venting from the oceans. Latent heat transport to the tropopause via bulk transport of convection. Evaporation increasing salinity and driving dense water sinking to push the thermohaline overturning circulation to cool the tropics and warm the high latitudes. Feedbacks cooling and regulating the Earth’s climate heat energy budget by transporting energy along thousands of different paths and forms, of which models tackle only a few.

And climate models, while decent on radiative energy transports, are hopeless at emulating most (if not all) of water’s phase changes from first principles. So the modellers fudge them. They fudge H2O physics to make it look like they are doing science. They call it parameterization. But it is feedbacks, feedbacks whose numbers they are mostly guessing at. An educated guess is still just a guess. And the guessing allows them to make the models do what they want, like a Hollywood CGI animation scene.

Climatology has no hard quantitative clues on clouds adjusting albedo or limiting upwelling IR radiation as a feedback to an energy input… the modellers just fudge it.

Convection transport physics transporting heat above the effective radiation level… they just fudge it.

Precipitation as a phase change releasing heat and microphysics of water droplet formation… they fudge that too.

Then they call it all “science.” Hold big meetings. Publish lofty sounding papers. And push their fake science on an unwitting public.

They produce models with predictions of positive feedbacks of water vapor in the tropics with mid-troposphere hot spots to get 2X to 4X amplification by positive feedback … amplification never to be found in observational data. Yet they trundle onward, claiming it is “largely resolved.” Far from it. The only feedback response the modellers are hoping for with a “correctly tuned” output is the grant paycheck from the climate gravy train, like a trained seal getting its fish reward for a stupid trick.

Clearly, Earth’s climate is highly regulated with strong negative feedback from the immense oceans. Oceans of salt water. Convection physics processes that are substantial players along with radiative physics.

Mainstream climatology is just playing the Useful Idiot role for the Global Socialists and the Green Blob that wants our freedoms and our money, respectively.

You’ve provided a surefire setup for a new cottage industry in Florida; that being a bunch o’ YouTube preppers, rendering Love bug scrapings into tallow candles.

Waste not, want not. Or something.

100% Joel O’Bryan,

Very well stated .

The theory of CAGW relies on positive water vapour feedbacks to multiply CO2’s effect on temperature, as the doubling of CO2 in the atmosphere can only raise the temperature of the planet by 0.6 degrees Celsius.

Only by adding in positive feedbacks and deducting negative feedbacks can the planet warm more than 0.6 Celsius with a doubling of CO2.

The theory of CAGW depends on the tropical hot spot, which has not been found.

It is plain to see that the GCMs run far too hot, and with the computing power that is thrown at them they should be accurate.

Therefore it is a reasonable conclusion that the parameters the climate models are based on are faulty.

The climate models have used the input of positive feedbacks to push up temperature and are not correctly entering the negative feedback role; in the case of water vapour (the main GHG), it works as both a negative and a positive feedback.

This deception is what the whole global warming scam is based on.

Nick, thanks for an interesting post.

The big feedbacks that I see in the climate system are temperature threshold based.

For example, take the late-morning typical development of the tropical cumulus field. Below a certain threshold temperature, there are no cumulus clouds at all. However, after the temperature passes a certain threshold, we very quickly see a fully-developed cumulus field covering the entire sky. And this, of course, cuts way back on the incoming solar energy.

The same thing is true of the next step in the tropical daily cycle. If the temperatures continue to rise after the establishment of the cumulus field, when it passes a second threshold we see cumulonimbus, the great thunderstorms that are so efficient at removing energy from the surface.

These temperature threshold-based systems are quite similar to the way a house thermostat operating a furnace works. When you are above a certain temperature, the furnace is off. When the temperature goes below the set-point, the furnace kicks in.
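The thermostat rule can be written in a couple of lines; a sketch with invented set-points, including the small hysteresis band a real thermostat adds so it doesn’t chatter around the set-point:

```python
# Bang-bang thermostat: furnace on below the set-point, off above it,
# with a hysteresis band so it doesn't switch rapidly at the boundary.
# Set-point and band width are invented numbers.
def furnace_state(temp, currently_on, setpoint=20.0, band=0.5):
    if temp < setpoint - band:
        return True       # too cold: furnace on
    if temp > setpoint + band:
        return False      # warm enough: furnace off
    return currently_on   # inside the band: keep the current state

print(furnace_state(18.0, False))  # cold room: furnace switches on
print(furnace_state(22.0, True))   # warm room: furnace switches off
print(furnace_state(20.0, True))   # at set-point: state is held
```

The response is a step function of temperature, not a line — which is the point of the analogy.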

Note that in none of these cases is the response linear …

Would you be willing to talk a bit about the mathematics of this type of setup?

Best regards,

w.

Thanks, Willis.

The first thing to say is that Soden and Held do have an important term for feedback from clouds. Quantification is with models, so probably doesn’t get as fine-grained as this. The modelling is based as far as possible on observation.

Another point is that the feedback here is on equilibrium global average temperature and averaged forcing. So what counts isn’t the non-linearity of the individual events, but whether statistically, after the climate has shifted, the overall flux when averaged responds linearly with the climate variables. If it were non-linear, what do you think it might look like? It could be, of course. But it’s hard to see why adding say 2° would depart radically from double the effect of adding 1°. If it did, it would be interesting to understand why. One of the factors favoring a smoothed average response is the geographic inhomogeneity. Even if there is something special happening when you go from 28°C to 29°, say, at any point of warming only a fraction of places are in that zone. And they pass through it.

That was actually the point about the transition from 1D models to GCMs. 1D had the criticism that it was averaging a lot of things that were very non-linear if you drilled down far enough. In fact, just about everything in turbulent fluid flow, and the same is true in engineering CFD. But the GCM idea is that you can emulate the non-linear events (still only down to a rather coarse scale) and then add them all up to see what the average in space and time amounts to. And of course the average does behave a lot more smoothly, which means a linear approximation is much more reasonable.

I’ve been saying that you don’t have to know in detail how the operating point (reference etc) state works. It might include processes like you describe that are hard to quantify. But the perturbation analysis only needs to know that the state exists and is sustainable in the absence of forcing. It doesn’t need to know why. The problem would come if there are kinks in the response curve with change. But the kink won’t affect all regions at the same time.
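The averaging argument can be checked with a toy calculation; a sketch assuming a hypothetical spread of regional baseline temperatures and a sharp 26 °C switch (all numbers invented for illustration):

```python
import random

# Each "region" has a baseline temperature drawn from a spread; a
# hypothetical cloud effect switches on abruptly above 26 C. Because
# only a fraction of regions sit near the threshold at any one time,
# the global-average response to a uniform warming is smooth.
random.seed(0)
baseline = [random.gauss(24.0, 4.0) for _ in range(100_000)]

def fraction_over_threshold(warming, threshold=26.0):
    return sum(t + warming > threshold for t in baseline) / len(baseline)

for dT in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(dT, round(fraction_over_threshold(dT), 3))
```

The fraction over the threshold traces the cumulative distribution of the baseline spread, so it rises smoothly and near-linearly with warming even though each region’s switch is discontinuous.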

Hi Nick,

actually enjoying the technical discussion and I appreciate the effort (and your response earlier). However, generalities about this and that are one thing (although I do like Willis’s input) but now you have made another statement that I can take out of context.

The statement “but it’s hard to see why adding say 2 degrees would depart radically from double the effect of adding 1 degree”. I almost had kittens when I read that.

In an alternate universe 2 degrees is the tipping point (re: thermal runaway) ergo if I have understood you correctly and nothing happens at 1 degree then mathematically twice as much will happen at 2 degrees (linearly speaking of course).

I do not understand why you spend so much time defending CAGW when, in this post, you have virtually explained why it shouldn’t be?

Cheers,

Andy

Andy,

“In an alternate universe 2 degrees is the tipping point”

Well, there are many possible universes. But in the scientific one 2 degrees isn’t a tipping point for thermal runaway. It’s a reasonable target, saying that we don’t have very good control of the process, and if we can’t agree to aim for that, the chances of getting control are bleak. I don’t think you’ll find scientists, or even IPCC etc, saying thermal runaway starts at +2C.

I was, though, talking about how a specific phenomenon (Willis’ thermostat) might respond.

Nick;

When does thermal runaway happen? You know as well as anyone here that the answer is never; stop playing and just say it.

Thanks, Nick. Let me see if I can explain my issue. This shows the correlation between absorbed energy at the surface and the surface temperature.

As you can see, not only is the correlation different, even the SIGN of the correlation is different. Note that on the land the correlation is almost always strongly positive, so that is what we assume the world to be like … but it ain’t, because over large areas of the ocean, the correlation is negative.

And that means that unless we are to believe that more energy cools things down, we have to assume that in those areas the incoming energy is NOT regulating the surface temperature, it’s the other way around—the surface temperature is regulating the incoming energy.

Nor is that a simple linear negative feedback, which could only reduce the amount but could not change the sign of the correlation.

Let me add a couple of graphs to further illustrate the temperature threshold-based nature of the game. First, here is cloud top height versus surface temperature:

Linear? Not even remotely. Here is the same thing, but for cloud area fraction vs SST:

Note the breaks at about 26°C in both of those. Finally, here is Pacific equatorial rainfall amount and SSTs from several sources:

Again, note the break at around 26°C.

I bring all of this up to show the threshold-based nature of the systems. For example, the cloud area decreases from an SST of zero up to about 26°C … and above that, it increases.

Now, you can average that all you want, but “linear”?

I don’t think so.

Your comments on this kind of threshold-based feedback greatly appreciated, thanks again for the post.

w.

Willis,

Thanks again

“the incoming energy is NOT regulating the surface temperature, it’s the other way around—the surface temperature is regulating the incoming energy”

One reason I recommend the linear equations formulation is that it doesn’t care about what caused what. It just says there is an association. And you can solve for that.

“because over large areas of the ocean, the correlation is negative”

The ECS issues concern a global average. That effect may be quite different for land and sea, but the amount of land and sea doesn’t change. There is no reason why the summed global response shouldn’t vary linearly with some forcing even if locally it is going different ways.

“Linear? Not even remotely.”

But again, we’re looking for a linear response of a global average to a global forcing. If this effect has a sharp cut-off at 26°C, that doesn’t negate a linear response globally. The area below 26°C would diminish smoothly with warming, and so the spatial integral of the cloud top effect would still vary linearly.

It’s the same with sea-ice albedo. There is a discontinuous ice front. But if the front recedes proportional to warming, then the averaged albedo diminishes smoothly.
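The sea-ice case makes a one-line toy model; a sketch assuming flat geometry and invented numbers for the front position, recession rate, and albedos:

```python
# A 1-D toy ice front: the local albedo is discontinuous at the front,
# but if the front recedes in proportion to warming, the area-averaged
# albedo varies smoothly (here, exactly linearly). All numbers invented.
def mean_albedo(warming, front0=70.0, recede_per_K=2.0,
                albedo_ice=0.6, albedo_sea=0.1):
    front = front0 + recede_per_K * warming   # ice-front latitude, degrees
    ice_frac = (90.0 - front) / 90.0          # crude flat-geometry fraction
    return ice_frac * albedo_ice + (1.0 - ice_frac) * albedo_sea

for dT in (0, 1, 2, 3):
    print(dT, round(mean_albedo(dT), 4))
```

The albedo jumps from 0.6 to 0.1 at the front, yet the area average falls by the same amount for each degree of warming — a linear aggregate response from a discontinuous local one.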

Willis,

I have long been impressed by your insight. And I think you are on to something.

I agree with you that the earth’s temperature is stable at least partly because it includes a governor system.

By observation, the set point of the governor appears to be about 26 C. What could change the governor setting would be some change in the atmosphere such that there is a shift in the 26 C threshold. I’m guessing here, but would a change in atmospheric density do it? A shift in gas ratios? A shift in ionospheric electrical potential? A change in cloud nucleation rate?…

-BillR

Mr. Stokes’ analysis has one over-simplification: it ignores the reference state. In the amplifier example, at zero input, the amplification stage would have zero output. In an audio amplifier, that would lead to noise when the amplifier is in “idle” or zero input. Consequently, audio amplifier circuits have a bias added to the design to prevent it from returning to zero output.

Mathematically, there are a few ways to address this need: one could add a base signal to the input such that it never returns to zero. One could also add something to the feedback transfer function Beta to prevent it from returning to the zero state. (Please don’t get hung up on the signs of the math; one can move negatives around at whim, minding positive vs. negative feedback.) A third choice would be to have the transfer equations additive (V1+V2), on which I’ll comment later (vide infra).

Thus, I would argue that Lord Monckton’s approach is the former and Mr. Stokes’ is the latter. From the point of view of the model, either can be made to work. In the case of thermal forcing feedback models, it must be able to predict the 1800s temperatures at the base CO2 conditions. If one calibrates the model bias (reference condition) from that point in time, then the model will predict changes from that reference condition based on changes to input.

The rub arises in determining the thermal forcing transfer function. Lord Monckton is arguing that the forcing function calibration was performed incorrectly, creating a reference point error. This might be true unless the bias signal to the reference state is buried in a feedback transfer function Beta such that the net transfer gain is additive to the reference condition (temperature in this case). Not having examined the models myself, I do not have an informed opinion.

Building on this ignorance further, I would be very dubious of the model if the forcing function calibration is used to calculate a temperature gain which is then simply added to the reference temperature. Such a construct is basically the V1+V2 approach, and either calibration assumes a decoupling of the transfer functions such that the mechanisms responsible for the reference condition are unaffected by the additional heating/CO2. That error could be addressed by considering additional mechanisms in the incremental temperature feedback that are present in the reference temperature function. Mr. Eschenbach has been actively exploring such possibilities via water evaporation and cloud formation.

As the devil is in the details, this reader will await the contestants to clarify how they implement the reference condition and account for all the mechanistic effects relative to that reference condition.

One of many fundamental flaws in the climate feedback model is the assumption that W/m^2 of feedback are linear to temperature, when the only relation in all of physics demands that W/m^2 are proportional to T^4 and that feedback is linear to W/m^2 of emissions. The climate system assumption of approximate linearity is the problem, since for feedback analysis to be relevant, the system must be linear across the entire range of forcing, which is from 0 W/m^2 at night to over 1000 W/m^2 at high noon at the equator. Approximately linear over a small range around the average just doesn’t cut it. Of course, another flaw is the implicit and infinite power supply powering the gain, which the climate system lacks.

It all boils down to the same old flaw of ignoring COE, which must be honored between the input and output (forcing and temperature), but is not. This is ignored because the simplifying assumption of an external power supply precludes the need to conserve energy between the input and output. This is also why runaway seems plausible, when it’s absolutely impossible without the implicit power supply. Venus is not runaway GHG, but runaway clouds where the ‘surface’ in direct equilibrium with the Sun has become the cloud tops and the temperature of the solid surface below is dictated by the PVT profile of its atmosphere.

Without ignoring COE, there’s no way that the next W/m^2 can be differentiated from the average W/m^2 so that it can increase surface emissions by 4.4 W/m^2 (a temperature increase of 0.8C) while the average W/m^2 only contributes 1.62 W/m^2 to the surface emissions. There’s no way that feedback can apply only to the last W/m^2 when it must apply to all W/m^2 equally, which based on the claimed amount of feedback would result in a surface temperature close to the boiling point of water. The bottom line is that there’s nothing to demystify since the climate feedback model has absolutely no correspondence to either the laws of physics or the ground truth.

BTW, the analytical error in Schlesinger’s paper and Roe’s rehash of Schlesinger’s work is assuming that the feedback factor and the feedback fraction are the same thing, but since the feedback factor is an archaic attribute calculated as the feedback fraction times the open loop gain, the two are the same only when the open loop gain is unity. Meanwhile, both Schlesinger and Roe assume a non unit open loop gain that ‘amplifies’ W/m^2 of forcing into a temperature.

“the assumption that W/m^2 of feedback are linear to temperature”

Nothing special about feedback; as shown, it’s just a linear relation between flux and temperature. It isn’t a clisci invention – one version is called Fourier’s Law. More generally it is widespread in engineering as the heat transfer coefficient.

“to be relevant, the system must be linear across the entire range of forcing which is from 0 W/m^2 at night to over 1000 W/m^2 at high noon at the equator”

No, the relations are between global averages of equilibrium temperature. They vary within a narrow range.

“It all boils down to the same old flaw of ignoring COE”

No, as I showed with S&H, their expression is flux balance, which is COE.

And as I said, the linearity required is not being honored by the climate model. You agree that it’s a linear model, but stubbornly refuse to acknowledge that T and W/m^2 are not linearly related.

You can’t just declare that linearity over a small range is all that’s required when the model requires otherwise; moreover, the equilibrium temperature varies over a whole range across the planet. The actual average temperature only represents the average W/m^2 of emission; the system must operate over a wide range of emissions (and forcing), not just one value.

Heat transfer has nothing to do with the sensitivity and only affects the redistribution of existing energy. Relative to Fourier’s Law, dQ/dt has the units of Watts, not W/m^2. One is a flux and the other is a flux density. You need to understand the difference between these two.

You still haven’t explained how the climate can tell the difference of the next W/m^2 from all the others so that it can be amplified by so much more. Unless you can explain this, nothing else you say will matter.

Once again you demonstrate an ability to pierce through to the heart of the issue. My understanding of Nick’s argument also leads me to your conclusion. I would note also, the fact that one can construct a tidy mathematical argument does not indicate actual relevance to the physical reality one is attempting to describe.

rip

co2isnotevil

Yes, Stokes is claiming that linear approximations are adequate, but I’m sure that there is a restricted range for that to be approximately true. Nowhere does he state over what range the linear assumption is valid.

Yes, that restricted range is the entire range of forcing, which for Earth is from 0 W/m^2 at night to more than 1 kW per m^2 at noon on the equator. This entire range of forcing defines the ‘small signal’, and the requirement is for linearity spanning the operating range, which must span the dynamic range of the small-signal inputs and outputs. If the behavior is non-linear outside of the operating range (which is always the case for real amplifiers when they start to distort and can no longer be quantified using linear feedback analysis), approximate linearity is OK as long as linearity is strictly maintained for all possible inputs and outputs, both as averages across time and as instantaneous functions of time.

George,

“refuse to acknowledge that T and W/m^2 are not linearly related”

Makes no sense.

“Relative to Fourier’s Law, dQ/dt has the units of Watts, not W/m^2.”

No, it isn’t. Here’s Wiki:

q = -k∇T

where (including the SI units)

q is the local heat flux density, W·m^-2,

k is the material’s conductivity, W·m^-1·K^-1,

∇T is the temperature gradient, K·m^-1.

Nick,

Thermal conduction has to do with heat flow through matter: it relates a delta T to absolute W/m^2, not a delta T to a delta W/m^2 or T to W/m^2, and has nothing to do with feedback or the climate sensitivity. The point you’re missing is that W/m^2 of solar forcing are linear to W/m^2 of surface emissions, and W/m^2 of emissions are proportional to T^4. W/m^2 of forcing are not linear to temperature, as is being assumed, not even incrementally.

The only feedback model that makes sense is to have all W/m^2 of solar input as the forcing input and W/m^2 of emissions equivalent to a temperature as the output, where the gain is 1.62: each W/m^2 of forcing results in 1.62 W/m^2 of surface emissions, which is 620 mW per m^2 more than an ideal BB, which emits 1 W/m^2 per W/m^2 of forcing.

You still haven’t answered the question about how can the climate system tell the difference between the next W/m^2 and the W/m^2 of solar power, so that the next W/m^2 of forcing can increase surface emissions by 4.4 W/m^2 (0.8C) while 1 W/m^2 more solar forcing will only increase surface emissions by 1.62 W/m^2.

Another serious flaw is considering CO2 to be a forcing influence. The only actual forcing is from the Sun; changes to the system, for example CO2 concentrations, are more properly considered as equivalent W/m^2 of solar forcing with the system (i.e. CO2 concentrations) held constant.

Nick,

BTW, there is one place where heat transfer matters, but again, it has nothing to do with the radiant balance. This establishes the thickness of the thermocline as it acts to insulate deep cold waters from warm surface waters.

The only linear relationship related to energy and temperature is that T is linearly proportional to stored energy (i.e. 1 calorie, 1 gram of water, 1C). However, energy is measured in Joules while Watts are a rate of Joules (Joules per second). A higher heat capacity just means that equilibrium is approached more slowly, but has no effect on what that equilibrium will be. The same is true of conductivity, which also affects the rate of heating or cooling, but not what the equilibrium temperature will be.

The ONLY thing that affects the equilibrium temperature is the availability of W/m^2 to offset the emissions consequent to that temperature, and the relationship between W/m^2 and the equilibrium T is the SB Law dictating the T^4 dependence of W/m^2; thus the sensitivity has an unavoidable 1/T^3 dependence that cannot be ‘linearized’.
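The 1/T^3 dependence the comment describes follows directly from differentiating the Stefan–Boltzmann law; a short sketch (the temperatures chosen are illustrative):

```python
# With emissions F = eps * sigma * T^4, the slope dT/dF is
# 1 / (4 * eps * sigma * T^3): the sensitivity in K per W/m^2
# falls off as 1/T^3 rather than staying constant.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def sensitivity(T, eps=1.0):
    """Kelvin of warming per extra W/m^2 of emission at temperature T."""
    return 1.0 / (4.0 * eps * SIGMA * T**3)

for T in (255.0, 288.0, 320.0):
    print(T, round(sensitivity(T), 3))
```

At 288 K the blackbody slope works out to roughly 0.18 K per W/m^2, and it is noticeably larger at 255 K than at 320 K — which is the commenter’s point, whatever one makes of its implications.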

To be absolutely clear, the Earth is not an ideal BB, but is a non-ideal radiating body whose macroscopic behavior can be fully characterized by the SB Law with a non-unit emissivity. Try again if you think that there’s another law of physics that can quantify the macroscopic behavior of the planet relative to incremental forcing, but it’s not Fourier’s Law, whose only possible effect would be on the time constants, which have nothing to do with what the equilibrium state will be.

The data confirming what I’m saying is unambiguous and the analysis is readily repeatable. If you don’t believe me, do the analysis yourself. Here’s a scatter plot of 3 decades of monthly averages of the surface temperature vs. the emissions of the planet for each 2.5 degree slice of latitude from pole to pole.

http://www.palisad.com/co2/tp/fig1.png

The green line is the prediction of a gray body whose emissivity is 1/1.62 (0.62), and conformance to the T^4 dependence is unambiguous. The blue line represents the IPCC nominal sensitivity drawn to the same scale as the data. Notice how, when plotted to intersect with the current surface state, this passes through zero rather than being tangent to the actual response? This is a direct result of the inappropriate assumption of approximate linearity. In other words, the 1/T^3 dependence of the derivative of the planet’s response to solar energy (the sensitivity) is ignored.

Even more interesting is that the magenta line represents 1 W/m^2 of surface emissions per W/m^2 of forcing at the current surface temperature and that when you plot the average solar power vs. the surface temperature for the same slices of latitude, the slope of this is also 1 W/m^2 of emissions per W/m^2 of incremental forcing.

http://www.palisad.com/co2/tp/fig2.png

In this plot, the yellow dots represent the relationship between the surface temperature and planet emissions, while the red dots show the relationship between average solar input and the surface temperature, where the magenta line is what my analysis predicts. Where these two curves intersect defines the steady-state average response. Any model of the climate system must be able to reproduce this measured response.

Feedback loops in electrical/electronic systems are very well understood, and with my electronics/electrical background I have never understood why they are used in relation to the climate system. To me, it’s an excuse to justify the BS (the Bad Science of climate science).

Complex systems are by definition not linear. They can be described by linear equations over a short range of observable inputs, but so can ANY non-linear curve.

We can observe a population of coyotes and rabbits and see that an increase in rabbits results in an increase in coyotes. We could even describe that trend linearly. However, once you reach a certain population of rabbits (e.g. the point where there is not enough grass to support the rabbits) the curve changes rapidly and a linear equation quickly loses predictive accuracy.
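The rabbit example can be sketched numerically; the logistic curve and all numbers here are my own illustration, not data:

```python
import math

def rabbits(t):
    """Hypothetical logistic rabbit population, carrying capacity 1000."""
    return 1000.0 / (1.0 + math.exp(-0.5 * (t - 10.0)))

# Fit a local linear trend early in the growth phase (t = 5)...
t0 = 5.0
slope = (rabbits(t0 + 0.01) - rabbits(t0 - 0.01)) / 0.02

def linear_prediction(t):
    return rabbits(t0) + slope * (t - t0)

# ...then extrapolate: fine nearby, badly wrong once the curve bends.
print(abs(linear_prediction(5.5) - rabbits(5.5)))    # small error near t0
print(abs(linear_prediction(15.0) - rabbits(15.0)))  # large error far from t0
```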

Trying to analogize a complex system such as the earth's climate to a non-complex system such as an amplifier is not appropriate. Amplifiers are closed systems specifically constructed (through insulators and conductors) so that electromagnetic forces are consistently dominant through the expected operating range.[1] Complex systems have far more variables at play, and they are notable precisely because those inputs are dominated by other inputs until they reach a leverage point, when they suddenly become dominant.

[1] One of the reasons people often analogize complex systems to more static systems is that there is an element of complexity inherent in any material science. It all comes down to the range of inputs you wish to define. Define inputs for an amplifier or rocket engine narrowly, and you do not have to deal with complexity. But increase energy input past the point where the materials can handle the entropy, and suddenly you have a VERY complex system where it is almost impossible to predict outputs. (cf. the flight profile of a rocket whose output energy exceeds the material strength of the engine)

I don't see how your comparison works; too many variables. The thread topic is very specific, with clearly known variables. In a natural environment, with unknown variables, where does the extra energy come from to "amplify" the "feedback" effect?

The simple and obvious answer is that there is none. If CO2 were the "driver" of that amplification and feedback, Venus would show a runaway warming effect, if we believe the theory. Simple observation since the '50s shows it simply isn't the case. CAGW via emissions of CO2 from human activities causing a "run-away" warming effect, which then leads to catastrophic weather events, is completely bogus. Only people on that funding gravy train "believe" it.

“Complex systems are by definition not linear.”

Then the climate system is not a complex system, since W/m^2 of solar energy are quite linear with W/m^2 of surface emissions corresponding to its temperature. The constant ratio of about 1.62 W/m^2 of surface emissions per W/m^2 of forcing is independent of the temperature or the total fluxes. This constant ratio is the reciprocal of the equivalent emissivity of a gray-body model of the Earth's emissions relative to the surface temperature, and it connects the dots between the behavior of an ideal BB and the Earth, which is not an ideal BB. Nonetheless, non-ideal BBs can be accurately modeled with a non-unit emissivity, and there's no other law of physics that can quantify the emissions of matter as a consequence of its temperature.

The climate system is only made to look more complex than it is by 'linearizing' the non-linear relationship between emissions and temperature in order to provide the wiggle room to fudge support for what the physics cannot.

Why? Because that’s all they can come up with.

Op amp inverters do sums nicely, but when faced with lag times, it’s best to go digital. By the time the system approaches a new equilibrium, the inputs have changed.

Same applies to digital! If you are modeling a system digitally and time is involved, then when you apply the inputs to each part of your model and calculate the outputs, you then have to reapply the outputs to any changed inputs until all change has settled. That is the first step of your simulation.

This is digital simulation 101.
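As a sketch of that settling step (the loop shape and numbers are mine, for illustration): feed a fixed fraction f of the output back to the input and reapply until nothing changes:

```python
def settle(forcing, f, tol=1e-9, max_iter=100000):
    """Reapply the fed-back output to the input until the change settles."""
    out = 0.0
    for _ in range(max_iter):
        new_out = forcing + f * out  # output = input plus fed-back output
        if abs(new_out - out) < tol:
            return new_out
        out = new_out
    raise RuntimeError("loop did not settle (is |f| >= 1?)")

# With f = 0.5 the loop settles at forcing / (1 - f), i.e. twice the input.
print(settle(1.0, 0.5))  # ~2.0
```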

Yes, there's a better solution in the Z domain, considering that the 620 mW per W/m^2 of feedback power returned to the surface from the atmosphere is energy emitted by the surface in the past, temporarily stored in the atmosphere, which delays it before it is returned to the surface.

BTW, the IPCC incorrectly considers the 1.62 W/m^2 of surface emissions per W/m^2 of forcing as the before-feedback sensitivity. The only valid pre-feedback emissions sensitivity is 1 W/m^2 of surface emissions per W/m^2 of forcing, which corresponds to about 0.2C per W/m^2. Not only is the climate feedback model incompatible with Bode's linear feedback amplifier analysis, feedback is effectively applied twice: once to multiply 1 W/m^2 of emissions per W/m^2 of forcing up to 1.62 W/m^2 (about 0.3C), and then once more to arrive at 4.4 W/m^2 (about 0.8C).

So the whole feedback effect is wholly dependent on input temperature change and this temperature change can be the result of many factors. So how do you determine what the factors are and their magnitudes?

The article I cited by Soden and Held is the most frequently cited source for that.

Please see my comment here concerning what the 2006 Soden and Held paper uses as its primary source of observational data supporting their postulated water vapor feedback mechanism:

https://wattsupwiththat.com/2019/06/06/demystifying-feedback/#comment-2718240

As I read their 2006 paper, Soden and Held’s primary source of ‘observational data’ comes from the climate models, not directly from the water vapor feedback mechanism itself as it theoretically operates within the earth’s climate system as a whole.

If this is indeed where most of the observational evidence comes from, the obvious question arises as to the validity of that ‘observational evidence’ and its applicability to verifying the actual presence of the water vapor feedback mechanism in nature.

Nick, very many thanks for this.

And many thanks too to Anthony. It's the acceptance of posts from people who take different views which marks this site as a worthwhile read.

Keep it up. There are some of your readers who'll detest it. But there are many more who will appreciate it enormously. It makes WUWT significantly different from Ars, Real Climate etc, where the entire aim is to shout down dissent and never allow an opposing point of view to be heard.

As JS Mill remarked, the quickest way to refute a fallacy is to allow it to be published and exposed to argument. Free debate, not censorship and abuse, is the answer. Thank you for realizing this and for implementing it in practice.

Stokes should post more on how to make the complex become simple, and then consequentially transform the simple into the impossible.

all to prove the point that Stokes's own original version of impossible was a much superior product because it is based on the consensus of what is apparently … much simpler?

I don’t think that you even tried to explain anything and tried to hide it behind condescending claptrap.

All you did was say that there are many feedbacks and they negated a stronger sensitivity to CO2 in 1850, but not now or in the future. It might be simple high school algebra, but many assertions are written as if they were like 1+1 = 2, even while stretching plausibility a bit. I'll give you that Monckton doesn't destroy the argument, but he does highlight that a far-from-likely scenario is needed for high sensitivities to be considered robust calculations.

To me Earth's climate resembles a Schmitt trigger more than a system with linear feedback. The system has switched between glaciation and interglacial periods for quite a while now. As it's a natural system, each steady state has quite a bit of noise. Looking at the noise only is interesting but basically a waste of time and money; looking for the next switch of state would make more sense.

Great comment Ben. The two steady states are very interesting – each some sort of equilibrium that keeps repeating at generally the same point over time.

In the glacial steady state, global sea levels are approx 100m lower, global temperatures are about 10degC lower, global ice cover is about 5 x current, and global atmospheric water content (=cloud cover) is not well known.

But it seems reasonably stable for long periods of time – before it switches back.

Firstly, thanks to Nick Stokes for a very clear and concise explanation. (In stark contrast to Monckton's mumbo jumbo.)

The non-linearity is already there: the Planck (negative) feedback is T^4. Now anyone who wants to "linearise" that for small perturbations and then 'forget' they did so, to pretend that the climate could reach a tipping point dominated by the small positive feedbacks, either does not understand feedbacks or is misleading you.

The climate has not reached a 'tipping point' and turned us into Venus in the last 3.5 billion years, and it has gone through much larger changes in "forcing" than our pathetic reintegration of 80 ppmv of natural CO2 that was sequestered in the ground. It is alarmist political BS to suggest otherwise.

Thanks again to Nick for laying this out so clearly. It is quite possible Monckton is trying to blind everyone with science in his cryptic, arcane presentation, expecting everyone to say: wow, I can't follow that, but he sure seems to know what he's talking about. That would be typical of his character. He is cynical and manipulative.

Maybe opposition to CAGW needs some “cynical and manipulative” players . It has been noted before that arguing facts and science in the climate debate is like bringing a knife to a gun fight.

Thanks, Greg

For tipping points, I don’t think T^4 would be the saviour, but I agree with Ben Vorlich above that we have in the past seen an apparent alternation between two quasi-stable states (my thought was an unsymmetric multivibrator). I think a tipping point would take us to a warmer one stabilised by some other nonlinearity. Still, it might turn out too warm for comfort.

Greg,

Using a linear approximation of S-B introduces negligible error in flux calculation for small perturbations around a given (brightness) temperature, provided of course that the gradient at that brightness temperature is used for the approximation. You can easily test this for yourself. The same is not true of using a secant gradient drawn from an origin at absolute zero to the S-B emission evaluated at 255 K, say. The local gradient is exactly 4 times this secant gradient – a relationship which is always true for whatever brightness temperature you wish to start at. Using this secant gradient for extrapolation yields massive error in estimation of projected DeltaT. Nor is it appropriate to apply the linearised form to some temperature when the gradient has been calculated from a very different temperature. Lord Monckton actually uses a mix of the non-linear form and the linearised form in his calculations, unless he has changed them since the last time I looked at his basis. The oft-quoted Planck response of 3.3 W/m^2/K, yielding 1.1 K per doubling of CO2 under the assumption of no other feedbacks, is already based on the linearised form for perturbations around a brightness temperature of ca 255 K (corresponding to a surface temperature of ca 288 K).
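The factor-of-four relationship, and the size of the extrapolation error, are easy to check numerically. A sketch assuming pure S-B emission M = sigma*T^4 (the 3.7 W/m^2 forcing figure is the usual CO2-doubling value, used here only for illustration):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def emission(T):
    """S-B emission M = sigma * T^4, in W/m^2."""
    return SIGMA * T**4

T = 255.0                 # brightness temperature, K
secant = emission(T) / T  # chord from absolute zero: M/T
local = 4 * SIGMA * T**3  # true local gradient dM/dT

print(local / secant)     # exactly 4, at any temperature

dF = 3.7                  # illustrative forcing, W/m^2
print(dF / local)         # ~1.0 K: first-order DeltaT from the local slope
print(dF / secant)        # ~3.9 K: the secant slope overstates DeltaT 4x
```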

“It is quite possible Monckton is trying blind everyone with science in his cryptic arcane presentation … He is cynical and manipulative.”

Gee, that's quite an "ad hom"-sounding comment against Christopher M., especially coming from someone who seems to be presenting himself as a climate skeptic? If I didn't know any better, I'd almost guess that the writer has had his toes stepped on by 'M.' somewhere before! Note that I'm not trying to cast any sort of aspersion onto Christopher M. here myself ... I just know that he has been a more or less outspoken right-winger at times, and maybe this is really the source of the 'ad hom' I quoted above?

As far as the basic problem with Nick Stokes's version of how control theory might work for these purposes goes, I think Stokes himself pretty much nailed it (in the negative sense of *defeating his own idea*) when he said:

“So 1850 is representative of a state when forcing from GHG was stable.”

(this was in response to Tom Halla’s questions).

So, it would appear that Monckton's approach is at least more consistent in an ideal sense, as compared to magical thinking about a special "pre-industrial" age! That would be that special time in history, 1850, when all forcings were stable and there was no pressure toward climate change at all?

To put this a different way, if you don't want to concede that traditional climate theorists have "made a mistake" on feedback, that's more or less defensible, I guess, despite Monckton's tendency to put it that way. Surely by now, though, the Stokeses of the world should at least be willing to concede that Lord M.'s model is a *possibility*, i.e., it *might* be better, as an ideal model, than a lot of what's come before, better than what you might call the "conventional" idealized theories?

Hey Nick,

"So what is the outcome here? Mainly that you can talk about feedback, signals, Bode etc if you find it helps. But the underlying maths is just linear algebra, and the key thing is to write down correct perturbation equations, and manipulate them algebraically if you really want to. Or just solve them as they are."

Nice to hear that! But what is actually the main message of your text? I thought that previously you were saying that Lord Monckton's objections are wrong because feedback responds to perturbations only, not to the whole reference signal. Now it looks like you're happily saying: 'of course feedback acts on the entire reference signal! Climate science knows that and that always was the case!'. Well, to me it looks like you have happily aligned your position with His Lordness. There still may be discussion around exact numbers, but at least conceptually we're all in the same boat.

Paramenter,

"Now looks like you're happily saying…"

No, I'm certainly not saying that. The main message is that the language of feedback is just a way of talking about linear relations, and if you get tangled, go back to those relations. The relations give you n equations in n+1 unknowns, which you can reduce to a convenient proportionality. That describes the system. You need one more variable to define your instance. The gain formulation makes this easy. You provide the input; the system says, multiply by gain to get the output. But if you do provide that information, you could just as well have got the answer from the original equations.

What Lord M has wrong comes from the original equations. They follow from linearising. You have a set of equations which may be nonlinear, and which describe a reference state, which is sustainable without signal. The perturbation equations describe what happens if some forcing is applied. Some variables change in a way proportional to the forcing. So you can work out a set of coefficients that are true for any perturbation. The reduced form after algebra has just one coefficient, the gain.

You can’t put a state variable like reference temperature into this first order system, for two reasons:

1. It was part of the reference system, which was in balance. You don’t need to change it, and it would disturb that balance to do so.

2. It isn’t proportional to the perturbation, so would completely mess up the first order system. If the equations worked for one level of perturbation (or signal), then with constant coefficients, they would fail for any other.

I have little patience with Lord Monckton's obfuscations, but Mr. Stokes has contributed by persisting in what is a mere semantic disagreement about how to describe a system in which a stimulus causes a response. There's nothing wrong with saying, as Lord Monckton does, that feedback acts throughout the stimulus domain, although he's obviously wrong in thereby implying near linearity.

Let's call the stimulus R, analogously to Lord Monckton's before-feedback equilibrium temperature, with the response E analogous to his after-feedback equilibrium temperature. We'll avert our eyes from the fact that by making both variables temperatures he problematically finesses forcings away.

In physical systems it is often instructive to describe the relationship implicitly: the dependent variable depends on, among other things, itself:

When it does, there's no reason not to call that a feedback system and say, as Lord Monckton does, that feedback applies to the whole stimulus.

This can be expressed more restrictively in the case of feedback amplifiers, which are so designed that feedback is additive:

And, despite what a lot of engineers have been saying, we see by inspection that feedback may in fact operate throughout the domain.

But in climate the feedback may not be inherently additive. So the additive relationship does not appear until the equation is differentiated:

where the derivatives are evaluated at some reference state

Isolating yields:

In the with-/without-feedback relationship, and , so we get Lord Monckton’s relationship

So the perturbation relationship on which Mr. Stokes insists is entirely consistent with what Lord Monckton calls feedback over the whole domain. It’s just that Lord Monckton argues as though nearly uniformly equals , which, of course, he hasn’t proved.

There’s plenty to argue about in Lord Monckton’s theory without getting bogged down in semantics.
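The inline equations in the comment above were lost in transcription. A hedged reconstruction of the shape they presumably had, using the R and E defined earlier (this is my reading of the argument, not the author's original rendering):

```latex
\begin{align}
E &= f(R, E)
  && \text{implicit relationship: the response depends partly on itself} \\
E &= g(R + \beta E)
  && \text{additive special case, as in a feedback amplifier} \\
dE &= \frac{\partial f}{\partial R}\,dR + \frac{\partial f}{\partial E}\,dE
  && \text{differentiate about a reference state} \\
dE &= \frac{\partial f/\partial R}{1 - \partial f/\partial E}\,dR
  && \text{isolate } dE
\end{align}
% Treating the partial derivatives as uniform constants would collapse
% this to Lord Monckton's E = R/(1 - f) form, which is the step at issue.
```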

And even if there wasn't, you would still argue because of your personal feelings. Joe, please just let it go; you are better than this.

"When it does there's no reason not to call that a feedback system and say, as Lord does, that feedback applies to the whole stimulus."

Mr Stokes explicitly says otherwise.

Well, that’s just an error on his part.

Seriously, it doesn’t matter whether he’s right or not about that nomenclature. Imagine that at some point in the past there were no greenhouse gases and the sun was dimmer than now so that the earth’s surface temperature was lower than its current emission temperature. A brightening of the sun would add forcing and increase temperature, which in turn might decrease albedo and therefore cause additional forcing and a further temperature increase: the temperature increase would be reinforced by what I’d call temperature feedback, in the form of albedo-reduction-caused forcing increase.

Lord Monckton also characterizes reinforcement like that as feedback: feedback “to the emission temperature” since it’s part of what got the earth’s surface to the current emission temperature and beyond. He argues that the reason for the IPCC’s high ECS estimates is that “climatology” made the “grave error” of failing to take such below-emission-temperature reinforcement into account.

The issue is whether it did fail to—or whether that even matters to “climatology’s” ECS estimate. In the head post Mr. Stokes properly addresses this issue’s substance by explaining why indeed it does not matter. And that’s good.

But over the years Mr. Stokes and others have muddied the waters by arguing about whether mechanisms such as that reinforcement are properly called feedback. And scores of readers consequently seem to think it matters.

It doesn’t.

"Mr. Stokes has contributed by persisting in what is a mere semantic disagreement"

It's not semantic. It's a matter of what you put into the equation. Change that, and you get a different answer. In Lord M's case, a very wrong one.

Here is a homespun example. You have a tyre at 26psi. That is the reference state. It will stay there unless you do something. You don’t know much else about it; could even be full of CO2. But it works. You have a hand pump and want to get it to 28 psi. How hard will that be?

So you send a signal – one pump-load. The accurate gauge says it’s now 26.1 psi. So, you imagine, that is the response. Gain is .1 psi/pumping, and 20 pushes in total should suffice.

But no, they say, you haven’t taken into account the whole signal. What about the original volume of gas? What about the tyre thickness? Surely you have to add those in.

Fortunately, you’re a climate scientist, and don’t listen. It takes 20 pumps.
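The pump arithmetic above reduces to a one-line perturbation calculation; a sketch (numbers from the example):

```python
reference = 26.0  # psi: the reference state of the tyre
target = 28.0     # psi: the desired state
gain = 0.1        # psi per pump stroke, measured from one test stroke

# Only the difference between states enters; the absolute 26 psi
# never multiplies anything.
pumps_needed = (target - reference) / gain
print(pumps_needed)  # ~20 strokes
```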

A closed, non-chaotic system: meaningless.

An excellent and irrefutable example of the error in Lord Monckton’s analysis … thanks.

w.

OMG, you guys can't be serious! A homespun analogy; I wonder how good those are, usually? The pressure in the tire is a static condition to start with, nothing like the presumed steady-state flow situation you were talking about as applying in year 1850. Maybe a better analogy would be flipping a running garden hose into a flowing river, or something?

Anyway, if you really wanted the ‘tire’ analogy to apply to the atmosphere, first you would have to put a roof on the atmosphere, then you would have to pump some air into it. Didn’t I see something like that on the movie ‘Spaceballs’?

Good analogy for how Lord Monckton goes wrong. For the extrapolation coefficient he doesn’t use local slope (your .1 psi/pumping) of response as a function of stimulus. His error is to use average slope (in your example current pressure ÷ number of pump strokes to achieve current pressure) instead.

That he does so can clearly be seen in his “end of the global warming scam in a single slide” at https://wattsupwiththat.com/2018/08/15/climatologys-startling-error-of-physics-answers-to-comments/, where he uses average slope rather than local slope as the extrapolation coefficient , i.e., as what is multiplied by to calculate the ECS value .

But which coefficient he uses has nothing to do with whether he does or does not employ the term "feedback" or something else to describe the mechanism by which the current pressure was achieved. So whether that nomenclature is correct or not is a red herring; it's just semantics and an unnecessary distraction.

But where is the feedback in your example? With feedback, in addition to the perturbation, the one pump would give you some pressure more than 26.1 psi. So, when you figure the gain from the feedback, you have to consider that the feedback component represents a response to the original signal as well as to the perturbation because, as you have acknowledged in your previous comments, the feedback can't distinguish between the original signal and the perturbation. It appears to me your comments support Lord Monckton's argument. Maybe you could propose a more complete analogy, i.e. one that involves a feedback?

"But where is the feedback in your example?"

Lord M's main error is in the partition between state variables and perturbation. With the tyre the states are easy to visualise – tyre at 26 psi, and tyre at 28 psi. The first is reference, and the one used for reasoning is the perturbation – how much response per pumping. It is the linearised version of the difference between the states. The same partitioning applies to feedback, since this is in fact just a way of describing terms in the linear equation for perturbations.

Hey Nick,

"So you send a signal – one pump-load. The accurate gauge says it's now 26.1 psi. So, you imagine, that is the response. Gain is .1 psi/pumping, and 20 pushes in total should suffice. But no, they say, you haven't taken into account the whole signal."

And, I reckon, they're right. What pressure does our hand pump have to apply during a pump-load to overcome the existing tyre pressure and top up the tyre from 26 to 26.1 psi? 0.1 psi, or more than 26 psi? Methinks the latter is true. This is hardly a feedback system but, if anything, it suggests our Lord is right: each iteration of pump-load has to include the existing tyre pressure plus the additional value.

"Fortunately, you're a climate scientist, and don't listen. It takes 20 pumps."

Accordingly, climate science builds a hand pump with an output pressure of 0.1 psi and successfully inflates a tyre with an initial pressure of 26 psi. The feedback mechanism is clever enough to figure out that we want only the difference and takes care of the rest. Easy life, except that it does not work like that.

That is precisely the reason I said earlier that we need something more than such 'thought experiments'. Our Lord and his co-authors build test rigs to prove their point, which is a good start. When their article is published it should contain the design and all details, so the process can be replicated and validated.

Paramenter:

Actually, it’s only according to Lord Monckton that their “test rig” proves their point; we haven’t seen that test rig. Just as we haven’t seen his “eminent” co-authors entering into the rough and tumble of defending what Lord Monckton says is their belief.

And I’m pretty sure the “test rig” merely proves that using average rather than local slope works if the system is linear—but also that local slope, whose use Lord Monckton tells us is the “grave error” that “climatology” makes, works, too. Moreover, it’s straightforward to design a feedback circuit that shows local slope to be superior to average slope if the system is nonlinear.

Unfortunately, Mr. Watts stopped running my posts when I exhibited insufficient deference to Lord Monckton’s (exceedingly questionable) expertise, so you won’t see my test-circuit design.

Hey Nick,

"No, I'm certainly not saying that."

It sounds like it. Look at this:

"In the limit of small perturbation, you still have a big reference temperature term that won't go away. No balance could be achieved."

To me you're saying here that the reference input is still there, as it should be – exactly what Lord Monckton is saying.

"You can't put a state variable like reference temperature into this first order system, for two reasons: 1. It was part of the reference system, which was in balance. You don't need to change it, and it would disturb that balance to do so."

I cannot see that in the justification you have provided. In the Wiki formulas the input always contains the output, which in turn contains the input plus deltas. Under the previous post I've linked this textbook diagram, a bit expanded compared to Wiki. The full output of the iteration (original input plus disturbances) comes back as the new input – here denoted as Ym: the measured value of the output (Y). That makes sense to me. Lord Monckton and his co-authors built a test rig to prove this behaviour. Methinks that's the way forward. I've suggested an alternative and cheaper approach: build a virtual feedback control loop using decent-quality control simulation software such as the MATLAB Control System Toolbox or the Python Control Systems Toolbox. Otherwise we're risking running only gedankenexperiments, and everyone will simply imagine different outcomes. My imagination vs yours!

"2. It isn't proportional to the perturbation, so would completely mess up the first order system."

That's unclear to me. Of course the reference temperature is not proportional to perturbations, and why should it be? Still it is included in the output and hence in the input for the next iteration of the feedback loop.
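Short of the MATLAB or Python control toolboxes, even a bare discrete-time loop can be used to check what a Bode-style feedback loop does with its input. A minimal sketch (the gain and feedback-fraction values are mine, purely illustrative):

```python
def closed_loop(signal, A=1.0, f=0.38, steps=2000):
    """Iterate out = A*(signal + f*out); the closed form is A*signal/(1-A*f)."""
    out = 0.0
    for _ in range(steps):
        out = A * (signal + f * out)
    return out

print(closed_loop(1.0))  # ~1.613 = 1/(1 - 0.38): the closed-form gain
print(closed_loop(2.0))  # ~3.226: output stays strictly proportional to input
```

Whether `signal` should be the perturbation alone or the whole reference input is exactly the point in dispute; the loop itself is agnostic and simply multiplies whatever it is given by the same gain.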

"For me you're referring here that reference input is still here"

No, I'm saying that if you wrongly put the zero order term among the first order terms, it won't go to zero as they do, and you can't get a solution which acts as a proportional perturbation.

You should keep the term with the other zero order terms, which were in balance before perturbation.

BTW, no-one seems to have asked, why pick out just this one “emission temperature” component of the reference state? Why not put them all in? Which would actually work, because they would cancel. But why do it?

“1. It was part of the reference system, which was in balance.”

But it wasn’t and never is. It’s perpetually unbalanced, i.e. perpetually dynamically oscillating around an approximate average.

Stokes,

You said, “You have a set of equations which may be nonlinear, and which describe a reference state, which is sustainable without signal.” Well now, if the forcing were to decrease, then I would expect the temperature to drop, demonstrating that the “reference state” is being maintained by the sum of an even earlier temperature and the temperature increase caused by an increase in the forcing. That is, one has to take into consideration the total signal and not just a delta, particularly if the response is highly-nonlinear over the total possible domain.

“Well now, if the forcing were to decrease, then I would expect the temperature to drop”Yes. That is just the Planck feedback term in operation. It doesn’t say anything about accumulated history.

If you have a heater in a room maintaining a steady temperature, and you then turn down the current, the room will cool. This isn't a result of any "total signal". It just reflects the fact that the temperature difference from ambient was being maintained by an outgoing flux (Fourier's law, maybe with an overlay of S-B). Lower the flux and the temperature difference drops.

“People outside climate science seem drawn to feedback analogies for climate behaviour. Climate scientists sometimes make use of them too, although they are not part of GCMs.”

Something just does not pass the smell test.

https://wattsupwiththat.com/2018/08/15/climatologys-startling-error-of-physics-answers-to-comments/

IPCC (2013) mentions "feedback" more than 1000 times. …

Will feedback now be erased from the IPCC reports? Easy after erasing previous warm spells, I guess.

Let’s see –

The Hole in the Ozone Scare,

Acid Rain Forests,

Global Cooling,

Blowball Warming,

and finally Climate Change.

But oops – feedback must go.

How about Climate Linearity As A Goal by 2025!

$Trillions for Linearity!

Save the Algebra!

Fridays for Algebra!

Green New Algebra Curriculum!

Sounds more sciency than just good ol’ Climate Chaos.

Just sayin’.

Sorry, if the feedback was positive, we would have fried millions of years ago.

Exactly Robert, there _must_ be negative feedback in the climate system or it would have hit an extreme and stayed there.

If the climate 'models' don't take this into account they're worse than useless (because some people believe them).

Climate models are written to reproduce the recent past history of two (tenuously) connected variables and, when they reproduce the recent past, are assumed to be correct and are then used to create hobgoblins for political, ideological and commercial reasons. The “scientific community” goes along with this fraud as long as the funds for “further research” continue to flow. Nostradamus would be proud.

It was called "massaging the numbers" in slide-rule days.

Hi Robert of Ottawa (I must be only a few km away at most from where you are). Indeed, the preoccupation with positive feedbacks in climate science is surprising if they are truly scientists in this field. Your comment should be considered axiomatic.

We've had 20 times the present level of CO2 in the earth's atmosphere and, in fact, at present we aren't far above the CO2 starvation point for all life on the planet. If we ended up calling the Holocene the Anthropocene, it would be for the "Great Greening" that's taking place, and for possibly extending life on the planet for perhaps hundreds of millions of years more than was destined the way things were going before fossil fuel burning.

Nick: Except for mentioning runaway warming as something we might worry about, I salute your contribution here, and I admit I had similar misgivings about the "state" temperature of the 1850s without getting my teeth into it. It seemed to me the 1850 temperature would have arrived at itself as a product of existing conditions of GHG and whatever other effects (not necessarily in equilibrium, though). You make a fair case for Lord M's "error" claim being wrong.

Having said this, I still have grave problems with the "Principal Component" applicability to climate. It probably operates in climate, but with so much else going on, it historically has regularly been overpowered by natural drivers. You brushed Tom Halla off in early comments on the thread related to the MWP. How did we get from the MWP, when apparently CO2 was about the same as in 1850 and it was as warm as the 2000s, by this 'control knob' mechanism? Clearly, something much bigger than GHG forcings can swing average temperatures by several degrees, say from the Holocene Optimum to the coldest depths of the LIA. This should not be subject to much argument.

The fact that molten salt nuclear reactors will usher in low carbon energy worldwide and will do so cheaper than fossil fuels or anything else makes discussions about atmospheric CO2 rather academic or even irrelevant. We KNOW very well the effects of molten salt nuclear power – and it doesn’t require knowledge of climate feedbacks, etc.

The evil of global warming hysteria is that its adherents are so ignorant about “solutions.” But THAT is the most important issue. Therefore I look upon articles like this as mostly a waste of time and energy, and avoiding what’s important. Nuclear engineers have the knowledge we need. Neither Nick Stokes nor anyone else is in possession of anything that can trump technology.

In this article posted on Neutron Bytes on June 4th, 2019, Dan Yurman interviews Dr. Jose Reyes, a co-founder of NuScale and chief designer of their small modular reactor:

https://neutronbytes.com/2019/06/04/interview-with-nuscale-ceo-jose-reyes/

Molten salt SMRs are some years away from being commercialized. On the other hand, NuScale’s SMR design uses half-height conventional fuel rods. The targeted capital cost for their first 12-module facility, about 700 MW total after the 12th NuScale SMR module is installed, is $4,200 per kWe.

NuScale is a decade ahead of the pack in getting an SMR into commercial production. Their current schedule calls for the first US-manufactured SMR to be in operation in eastern Idaho by the end of 2026.

I still have to disagree Nick.

Take an op-amp wired for unity gain, inverting output.

Feed in 1.0V and you will measure -1.0V on the output, on which we can all agree.

Now we move to the area of disagreement.

Increase the input voltage by 0.0V to 1.1V

The output will change by 0.1V becoming -1.1V.

I say you always need to refer to the input as 1.1V; you appear to say (as do your fellow climate friends) that you only use the 0.1V input change, delta V if you wish.

We both understand that the two output voltages would not exist at their current values of either -1.0V or -1.1V WITHOUT the WHOLE of the input voltage being present, i.e. 1.0V or 1.1V.

The output voltages only exist at the levels they do because of the whole of the input, not just the delta.

I am all for reducing equations to their simplest form, however, equations only work in the real world, if they map to physical reality.

As Lord Monckton has discovered, the formal equation in use is wrong.
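For what it's worth, the raw numbers neither side disputes can be tabulated with a trivial sketch (an idealized unity-gain inverting op-amp, illustrative only):

```python
# Ideal unity-gain inverting stage: Vout = -Vin, perfectly linear.
def vout(vin):
    return -vin

v1, v2 = 1.0, 1.1
out1, out2 = vout(v1), vout(v2)
print(out1, out2)           # -1.0 -1.1
# Because the stage is linear, the *change* in output is fixed entirely
# by the *change* in input, even though the full input sets the levels:
delta_out = out2 - out1
print(round(delta_out, 6))  # -0.1
```

Both bookkeeping styles agree on the voltages; the question is only whether the analysis is carried in whole values or in deltas.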

Sir,

“Increase the input voltage by 0.0V to 1.1V”

Did you mean increase input by 0.0V or 0.1V? That:

“The output will change by 0.1V becoming -1.1V.”

suggests that it should be 0.1V?

Nick,

Well good luck with this, although looking at the comments so far, you have an uphill battle.

I would have preferred in a way that you just presented the derivation of the climate energy balance equation without any mention whatsoever of control theory – to show that it can be done. One of the most frustrating things about Lord Monckton’s idiosyncratic view is that he insists that climate science has used control theory to estimate climate sensitivity, that it did so in a flawed manner, and hence that such estimation can be improved on by using an improved model supported by his “world experts in control theory”. Although he seems to have backed off somewhat from the first erroneous claim, he is resolutely sticking to the last claim. To judge from the comments he has received, this misrepresentation has seriously damaged the understanding of many people and left them in a snark-hunt for the perfect control box analogue.

The catastrophists require much more warming than can be honestly ascribed to CO2; this is where the talk of +ve feedback comes from and they do prescribe control theory; the controlling variable being CO2.

You lost me at f(x)=0. No matter. The feedback analysis, while interesting, doesn’t provide much help in explaining the earth’s climate that we can piece together from the data. The problem I have with the GCMs is that they cannot explain the cyclic pattern we have observed through time since the end of the last glaciation. It is clear that CO2 changes cannot explain the Little Ice Age, the Medieval Warm Period, the Dark Ages, the Roman Warm Period, etc. How do you explain the 3-5 degree higher temperatures during the Holocene Optimum? Current GCMs are one-trick ponies. Increase CO2, increase temperatures. There is a reason that the temperature decline from the late 1940s through the 1970s has been adjusted away. The GCMs cannot explain the cooling.

I cannot reconcile the predictions of significant warming from a doubling of CO2 with the Ideal Gas Laws. The pressure changes required for a 3 degree increase in temperature from adding 0.04% of CO2 to the atmosphere make no sense to me.

To say that Hansen’s prediction from the late 1980s proved correct is not accurate. Nothing that he predicted has come to pass. When you compare his temperature predictions for “business as usual” CO2 emissions growth to actual temperatures since his prediction, he wasn’t even in the ballpark.

I find the entire discussion silly. The assumed linearity makes the discussion moot. There is no evidence of linearity in the climate. There are also too many unknown variables, and certainly some unknown unknowns, including the future insolation. So we don’t know x_0, much less f(x) or f'(x). This is what makes climate models just so much nonsense.

Adding a different equation that purportedly takes into account something “they” forgot, is just as nonsensical as the original and for the same reasons. The point is we don’t know enough to predict, and don’t have near enough data to model the black box which is the climate system. Picking a starting point and drawing a linear equation to today and making a prediction based on that is exactly the complaint.

Even if you have an accurate picture of past feedback, in this case future, non linear feedback doesn’t depend on past feedback and it could be exponential or it could be exponentially negative. We just don’t know. So the discussion is speculative and political about “what if” with both sides invoking “science” disingenuously.

The problem with IPCC is not that they’re wrong about feedback — it’s that they significantly overstate their certainty for political purposes which include ongoing funding for their research.

A “linear” feedback is simply one which is proportional to the output that drives it.

Real feedback systems are rarely perfectly linear. But an “approximately linear” feedback is one which, though perhaps not linear over the full range of theoretical values, is nearly linear over the range of values of interest, and most of the interesting climate feedbacks are approximately linear, for practical purposes, over the ranges of interest, simply because the ranges of interest are small. Consequently, most feedbacks do not introduce substantial nonlinearity into a system, and a simple linear analysis is not far from the mark.

For example, although Planck heat loss (Planck negative feedback, a/k/a Stefan-Boltzmann response) is proportional to the 4th power of the temperature, the increase in radiative heat loss going from 300K to 301K is just 1% greater than the increase in radiative heat loss going from 299K to 300K. For practical purposes, that’s linear, even though the 4th power of T is obviously not a linear function. It “works” to approximate it as linear because the anthropogenic perturbation of temperature is on the order of only about 0.3% (on the Kelvin scale).
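That 1% figure is easy to check with a two-line sketch (the Stefan-Boltzmann constant cancels in the ratio, so plain fourth powers suffice):

```python
# Compare successive 1 K increments of T^4 (Stefan-Boltzmann scaling).
def p4(T):
    return float(T) ** 4

step_hi = p4(301) - p4(300)  # extra radiative loss going from 300 K to 301 K
step_lo = p4(300) - p4(299)  # extra radiative loss going from 299 K to 300 K
print(step_hi / step_lo)     # ~1.010: the two steps differ by only about 1%
```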

Most climate feedbacks are like that: although not fundamentally linear, they are approximately linear, over the (small) ranges of interest.

An exception is CO2 forcing, because the range of interest for CO2 level is not small. Mankind has raised the atmospheric CO2 level by about 46%. That large change means the logarithmically diminishing effect of additional CO2 becomes significant.

Here’s a graph showing what I mean by an “approximately linear” function:

https://sealevel.info/vp_diag_200pct_looks_linear_when_you_zoom_in01.png
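The logarithmically diminishing CO2 effect mentioned above can be put in numbers with the widely used Myhre et al. (1998) approximation, ΔF = 5.35 ln(C/C0). A sketch; the 280 ppm pre-industrial baseline is my assumption here:

```python
import math

# Standard logarithmic approximation for CO2 radiative forcing
# (Myhre et al. 1998): dF = 5.35 * ln(C/C0), in W/m^2.
def co2_forcing(C, C0=280.0):
    return 5.35 * math.log(C / C0)

print(round(co2_forcing(280.0 * 1.46), 2))  # ~2.02 W/m^2 for a 46% rise
print(round(co2_forcing(280.0 * 2.00), 2))  # ~3.71 W/m^2 for a full doubling
```

Note the 46% rise already delivers over half the forcing of a full doubling, which is the log effect at work.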

I suppose the real question is this: Which set of assumptions produces results that reflect observed behavior? Those used by the bulk of the modeling community, at least those associated with the IPCC, do not seem to operate reliably.

To avoid positive feedback in climate models, you have to deny that atmospheric CO2 concentration is sensitive to surface temperature.

To be more explicit, you have to deny that decomposition of damp organic residue does not increase with temperature or, perhaps, that the surface of the earth is not coated with damp organic residue.

What is the CO2-sensitivity-to-temperature incorporated in climate models? If it is less than that indicated clearly by the Antarctic ice-cores, the models are anti-scientific.

Let me correct:

… you have to deny that decomposition of damp organic residue does not increase with temperature or, perhaps, assert that the surface of the earth is not coated with damp organic residue.

For those that want to stay within the radiation domain, what validity has an analysis that does not consider a boundary that interacts profoundly with the radiation? You know better.

To avoid positive feedback in climate models, you have to deny that atmospheric CO2 concentration is sensitive to surface temperature in a positive way.

Obviously the warmer it is and the more CO2 is in the air, the more plants turn it into carbohydrates and other organic material.

Giving overall negative feedback.

Overall photosynthesis might increase with more CO2, but it doesn’t keep up with geometrically increasing bacterial activity on accumulated residue. Compare the carbon contained in temperate and tropical soils.

… and seas.

Here’s a simpler description of how feedback works, and a simpler analysis of a linear feedback system (followed by a list of all the significant climate feedback mechanisms of which I am aware):

https://sealevel.info/feedbacks.html

Nick,

I’d be curious to hear your response to my post here regarding all of this:

https://wattsupwiththat.com/2019/06/05/the-moral-case-for-honest-and-competent-climate-science/#comment-2717580

RW,

I didn’t much agree about the 3 decades delay. The main slow process is diffusion into the ocean. But worse is this:

“Do you agree that in order to ‘amplify’ +3.7 W/m^2 of ‘forcing’ from 2xCO2 into +3.3C at the surface it requires +18 W/m^2 of net gain at the surface/atmosphere boundary (287K = 385 W/m^2;”

That makes no sense at all. No-one suggests that whatever relates these small increments can be said to be a rate that applies throughout a notional process of warming from 0K.

Lord M does something like that too when he postulates an E(R) function that must pass through 0K.

Why not consider the base state in the feedback system to be the earth with no greenhouse gases? The addition of such gases makes for a relatively small perturbation (is it 15 degrees?) from the hypothetical steady state. In that case, I think Lord M’s approach seems correct. One could then calculate the effect from different perturbations with differing compositions of greenhouse gases. Is that right?

When engineers want to predict the physical behavior of an amplification circuit design operating inside a larger electronic system, they have the option of building a prototype of the design in the laboratory to see if its actual behavior matches theoretical calculations.

The benefit of this approach is that if they have a physical prototype in front of them, the engineers have easy access to the amplification circuitry itself; and just as important, they have easy access to the measurement and test equipment needed to observe and precisely quantify how a physical implementation of the proposed circuitry design actually behaves relative to theoretical predictions.

First question: Is it not true that the Soden & Held paper from 2006 obtains most of its ‘observational data’ from general circulation models (GCMs) of the earth’s atmosphere, not from the earth’s climate system itself as it physically exists in nature?

Second question: Is it possible to verify the existence of the Soden & Held water vapor feedback process through observations made directly within the atmosphere itself — in other words, inside the earth’s real climate system as it physically exists in nature — using instrumentation systems and data collection systems designed specifically for that purpose?

Mmmm, … in this equation:

ΔR = λ_T ΔT + λ_w ΔT + λ_C ΔT + λ_a ΔT

… I think I see fluxes-represented-as-temperatures being added together. Is that even allowable in physics?

I thought that we could not add temperatures. So, how can we add fluxes derived from temperatures?

Maybe I’m an idiot.

“So, how can we add fluxes derived from temperatures?”

Fluxes are very often derived from temperature differences. This goes way back to Fourier, and even, less clearly, to Newton. The coefficients are heat transfer coefficients. And you can add fluxes because heat is conserved.
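To make that concrete: each λ·ΔT term in the Soden & Held decomposition is a flux (W/m² per K times K gives W/m²), so the terms sum directly. A toy sketch with illustrative feedback parameters (the numbers below are made up for the example, not taken from the paper):

```python
# Each lambda_i * dT term is a flux in W/m^2, so they add linearly.
# These parameter values are illustrative only.
lambdas = {"Planck": -3.2, "water_vapour": 1.8, "cloud": 0.7, "albedo": 0.3}
dT = 1.0  # K of surface warming
dR = sum(lam * dT for lam in lambdas.values())
print(round(dR, 2))  # net flux change (W/m^2) for 1 K of warming
```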

CAGW depends on implied and assumed positive feedback.

To say that climate models do not use them is to lie.

“To say that climate models do not use them is to lie.”

They don’t.

And it’s therefore not.

A GCM is run from a set of initial conditions using the physics of the atmosphere (such as can be modelled within current computational constraints) in play, and integrated forward in time.

Feed-backs then emerge within the model atmosphere, and are in turn integrated forward.

They are an emergent feature and are NOT used per se as in being part of the modelling.

Well let’s ask the IPCC… http://www.ipcc-data.org/guidelines/pages/gcm_guide.html

“… many physical processes, such as those related to clouds, also occur at smaller scales and cannot be properly modelled. Instead, their known properties must be averaged over the larger scale in a technique known as parameterization. This is one source of uncertainty in GCM-based simulations of future climate. Others relate to the simulation of various feedback mechanisms in models concerning, for example, water vapour and warming, clouds and radiation, ocean circulation and ice and snow albedo. For this reason, GCMs may simulate quite different responses to the same forcing, simply because of the way certain processes and feedbacks are modelled…”

Hmmm…seems to explicitly-state that certain feedbacks are modeled.

GCM’s basically only give positive feedback signals. They can only come from a set of initial conditions using the physics of the atmosphere that has such conditions built into them.

If they did not have positive feedbacks built in, via misuse of the science assumptions, they could not and would not give positive feedbacks out.

“To say that climate models do not use them is to lie.” Is spot on.

During the past 800,000 years there have been eight alternating glacial and interglacial periods and atmospheric CO2 concentrations varied in a range of 200 to 300 ppm. When CO2 was at its peak Earth entered into glacial periods; when CO2 was at its lowest, brief interglacials began that reached higher temperatures and sea levels than we experience now. All this discussion of CO2 forcing and feedback mechanism are like discussions of the number of angels that can dance on the head of a pin, without first establishing the existence of angels. CAGW, human-caused climate change and/or extreme weather has been falsified by observations. Mother Nature has told us what has happened but we can’t stop counting the angels.

CO2 is no longer at its peak when interglacials end, but it is still high. But you are right that CO2 is just about at a minimum when the rather abrupt “terminations” start, and temperatures rise to interglacial levels in a few thousand years.

“… Climate scientists sometimes make use of them too, although they are not part of GCMs…”

Lol

Show me the GHG effect or the equilibrium in this system.

https://www.climate4you.com/images/AMO%20GlobalAnnualIndexSince1856%20With11yearRunningAverage.gif

No GHE there (at least visible) …. it’s lost in the overall signal.

There is no equilibrium because it is a small part of the climate system and is bounded by it.

The climate system of Earth has energy in = energy out.

Or at least it should.

IN EQUILIBRIUM.

(less solar cycles and orbital eccentricity)

“…The climate system of Earth has energy in = energy out.

Or at least it should.

IN EQUILIBRIUM…”

Should? Try harder.

“Should? Try harder.”

As in – it now ISN’T.

Try harder.

You said it “it should. IN EQUILIBRIUM.”

It MUST in equilibrium.

Nick ==> Unfortunately, neither the climate nor the mathematical equations need to describe it are linear.

Thus “Again, it’s just a matter of writing down linear equations, resulting here from equilibrium flux balance.”

Quoting the new paper from Zhang and Kirtman:

“The Earth’s climate can be generally regarded as a chaotic system that is highly sensitive to initial conditions (Lorenz, 1963; Shukla, 1998). In chaotic systems, the error, defined as the distance between two initially close trajectories, evolves exponentially with time and become saturated (Dalcher & Kalnay, 1987). Hence, if there exist initial errors in the climate system (as is always the case), then beyond a period the system becomes random and unpredictable.”

Feedbacks (as defined in this essay) change the “initial” conditions of each computational iteration . . . . and since the equations ruling the various component systems are in fact known to be nonlinear, we cannot know what effect, or even the sign of the effect, these initial condition changes will have.

There is no reason to believe that the climate system behaves like an electronic feedback system or a junction transistor feedback system. Logically it seems that it must — but in actual fact, no such feedbacks have been proven to operate except in trivial, short-term examples.

Nothing wrong with the concept of feedbacks — but a great deal wrong with the idea that they operate in a linear manner in the Earth’s actual climate system or that we can predict or project changes caused by such “feedbacks”.
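The sensitive-dependence point is easy to demonstrate with a toy chaotic map (the logistic map, a stand-in here for any chaotic system, not a climate model): two trajectories started a billionth apart end up completely decorrelated.

```python
# Two trajectories of the chaotic logistic map, started 1e-9 apart.
def step(x):
    return 3.99 * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-9
max_sep = 0.0
for _ in range(60):
    a, b = step(a), step(b)
    max_sep = max(max_sep, abs(a - b))
print(max_sep)  # far larger than the 1e-9 initial error
```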

….. the mathematical equations needed to describe it …..

Kip,

“There is no reason to believe that the climate system behaves like an electronic feedback system”

Well, my contention here is that you don’t need to believe that. The key thing is that forcing changes produce proportional responses in long-time global averages. There are reasons for believing that to be true, some based on the general conservation principles that apply, and the effects of diffusion. If you put a kettle full of hot water in the bath, the temperature will go up proportionally to the heat added. That’s true even if you have chaos like a couple of wriggling infants. It’s because heat is conserved and diffuses, so evens out.

All I’m really saying is that it comes back to the linear analysis of small perturbations. You can follow the math of control or electronic amplifiers if you find that helpful. Or do your own. It is elementary maths.
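The kettle-in-the-bath argument can be sketched with a toy one-dimensional diffusion loop (illustrative only, not a climate model): the total heat is conserved exactly while the initial unevenness decays away.

```python
# Toy explicit 1-D heat diffusion with insulated ends: total heat is
# conserved while the initial hot spot spreads out and evens up.
n, steps, k = 10, 2000, 0.1
T = [0.0] * n
T[0] = 100.0  # the "kettle of hot water" dumped at one end of the bath
for _ in range(steps):
    T = [T[i]
         + k * ((T[i - 1] if i > 0 else T[i]) - 2 * T[i]
                + (T[i + 1] if i < n - 1 else T[i]))
         for i in range(n)]
print(round(sum(T), 6))           # total heat unchanged: 100.0
print(round(max(T) - min(T), 6))  # spread has collapsed toward uniform
```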

Nick ==> The major point is that small perturbations do not necessarily remain small, nor even of predictable sign, in the real climate system or in climate models, for that matter. See mine @ Judith Curry’s https://judithcurry.com/2016/10/05/lorenz-validated/

These linearized equations all look so nice but they do not reflect the real climate nor the true physics of the climate.

Nick writes “It’s because heat is conserved and diffuses, so evens out.”

This is bathtub thinking. Literally. The earth is much larger than our intuitions let us understand. Using that argument one might believe there is no way a 1m difference in sea levels across the Pacific ocean could persist. But consider that the 1m rise across the 15,500 km is about a thousandth of the width of a human hair per meter when averaged.

So sure, that can’t persist forever, but over what timescale can it? And yes, time matters a great deal to any thoughts of “evening out” and considerations of what it means that the climate is chaotic.

I don’t have the answers either… but I can sure see that bathtub thinking is fundamentally flawed.

The argument is that diffusion smooths out non-linearity, so that a linear approximation is more useful. I can’t see that you are contradicting that. The difference you describe may persist, but the variation is very smooth.

Non-linear diffusion smooths-out non-linearity?

“ And the physics requires that they satisfy a set of equations that I’ll write just as f(x)= 0”.

Translated into simple language, this means that if the value under consideration depends on several factors, then one can ignore the change in all factors except one. It was on the basis of this assumption that the notorious formula for the logarithmic relationship between the temperature of the atmosphere and the concentration of CO2 was obtained. This relationship is accepted by IPCC and majority of climatologists.

http://donaitkin.com/the-relationship-between-co2-and-temperature/

The problem is that under real conditions the influence of various factors on the temperature of the atmosphere cannot be excluded, and, therefore, the relationship between CO2 concentration and temperature is not substantively proven. In fact, Arrhenius, who was mentioned in this discussion, also knew this. Arrhenius believed that it was in a laboratory experiment that one could determine how the concentration of CO2 influences temperature. In 1896 he wrote: “In order to get an idea of how strongly the radiation of the earth (or any other body at temperature 15°C) is absorbed by quantities of water vapour or carbonic acid in the proportions in which these gases are present in our atmosphere, one should, strictly speaking, arrange experiments on the absorption of heat from a body at 15°C by means of appropriate quantities of both gases. But such experiments have not been made as yet, and, as they would require very expensive apparatus beyond that at my disposal, I have not been in a position to execute them”.

https://www.rsc.org/images/Arrhenius1896_tcm18-173546.pdf

I apologize for the long quotation, but it is clear from it that Arrhenius, unlike those who now call him the founder of the theory of the greenhouse effect, understood the need for experimental proof of this effect. Moreover, we know that such an experiment was not carried out till now, more than 120 years later, although modern scientific laboratories have highly sensitive and expensive equipment.

In essence, there is no physical justification for the mentioned logarithmic dependence of temperature on the concentration of CO2, or for the theory of the greenhouse effect as a whole. The question is: what is the point of discussing mathematical equations if we do not know whether they describe the physical processes correctly and whether they take into account all factors?

The modellers always try to say that the climate computer code simply calculates the physics for any increase of CO2 and then spews out the new temperature. This is wrong for the main reason that all along, for the last 30 years ever since the 1st generation (we are now up to the 6th generation) Al Gore’s Church of Climatology has supplied special code to the modellers for each new successive generation.

The 6th generation was supposed to have many solar forcing variables in it but when the simulations showed that these new solar variables were causing all of the warming for past years, there was no warming left to do for the 413ppm CO2. So the 6th generation code was held up and delayed until they could work out some way to include some of the solar variables (there are many) and still have Mr. CO2 do his thing of warming. The 6th generation code was then released again and now it shows even more warming than the 5th generation. I suspect that the reason is, that some of the solar variables were incorporated and that Mr.CO2 was allowed to increase the temperature in the same manner as the 5th generation.

Some of the climate modellers themselves have expressed surprise to certain reporters that the 6th generation models are running so hot. If the climate modellers were in complete control of every last line of their code, then there wouldn’t be this surprise from more than 1 modeller in more than 1 supercomputer GCM that the 6th generation is running so hot. Indeed there wouldn’t even be a need to have generations of models at all. The generation numbers haven’t changed from 1 to 6 because of computer hardware. It is the software changes that define the generations. So in the end there is some computer code that is common to all the models. This code I suspect is the actual forcing code for CO2 increase. Since Tapio Schneider admitted that they will never have enough computing power to solve the Navier Stokes equations for cloud turbulence on a global basis, the actual physics in the models is just a glorified video game. I contend that the code that is passed to each climate computer representing the next generation is the actual forcing code represented by a simple forcing formula. If this wasn’t the case, then the modellers would not have expressed surprise to the reporters that their simulations are running so hot. The other proof of this is that simple 1-dimensional programs duplicate the GCMs’ projections of future temperatures for doubling of CO2.

“Since Tapio Schneider admitted that they will never have enough computing power to solve the Navier Stokes equations for cloud turbulence on a global basis, the actual physics in the models is just a glorified video game. ”

It’s much worse than that. We do not know how to solve the Navier-Stokes equations even with unlimited computer power except in a few simple cases. We don’t even know if there is a globally defined solution, and if there is, whether it is smooth or has singularities.

IIRC, there was an interview with Dr. Will Happer here on WUWT earlier this year, where he basically praised the Arrhenius GHG ideas, so I would suppose that might indicate he is mostly a conventional ‘Lukewarmer’, where basic theory is concerned?

Point is, if my recollection is correct, Happer also mentioned CO2 as the one gas where the log scale limited response is supposed to apply?

So, in these modern times, is this point corroborated, or isn’t it?

Things were not constant in 1850, because, to the extent that one can rely upon the thermometer record, temperatures significantly increased through to 1880 even though the change in CO2 was only a few ppm.

Indeed, from 1850 to about 1853 there was about 1 degF of warming and a change of just about 1 ppm of CO2.

Clearly the system was not in equilibrium at that date, or these significant changes would not have occurred. Something drove those changes, but it was not CO2.

Climate is never at equilibrium. It can’t be because different parts of the climate system such as the atmosphere and the ocean change at vastly different rates.

Reading this thread took me back to 2009 when I read ALL of the comments in the Harry_read_me file from Climategate. Lots of ‘correction factors’ and guesstimates.

I am still trying to get my head around this. This is where I get lost.

We have an initial temperature, say 10c. Something happens, say CO2 rises, and this, absent any other changes, would raise the temp by 1C to 11C.

But, says the IPCC, something else does happen. When the temp goes up 1C, water vapor rises, and the warming which that water vapor rise produces is 3C, so we end up with a total warming of 4C, to 14C total.

Whether this is right or wrong, it makes sense to me. I can see that CO2 can have a warming effect, and I can see that a warming could indeed increase water vapor and that would indeed add to the CO2 warming effect.

So whether we call this feedback or whatever, I can see how it could work: how a modest warming could, if the system works that way, produce a much larger warming in consequence of the effects of the smaller warming that triggered the whole thing.

Can someone explain to me what the IPCC is saying as a description of this, and what CM is saying? I have some incoherent impression that CM is saying that the additional warming must in some way multiply the 10C we started with. That makes no sense to me, so I am probably failing to understand his point.

I know this is a really simple thing, and probably I am missing the obvious, but there it is, I am, and would be very grateful for an explanation in these simple terms. Who is saying what about this situation and what is the disagreement exactly?

Michel ==> You are being confused because the basic presumptions are simply not true. That is a totally linear version of a system that has been known for over 50 years to be NONLINEAR.

There are several good essay series on “Chaos” and climate — one by myself here at WUWT: Chaos and Climate – Part 1: Linearity ; Chaos & Climate – Part 2: Chaos = Stability ; Chaos & Climate – Part 3: Chaos & Models and Chaos & Climate – Part 4: An Attractive Idea.

“Chaos theory is a branch of mathematics focusing on the behavior of dynamical systems that are highly sensitive to initial conditions. “Chaos” is an interdisciplinary theory stating that within the apparent randomness of chaotic complex systems, there are underlying patterns, constant feedback loops, repetition, self-similarity, fractals, self-organization, and reliance on programming at the initial point known as sensitive dependence on initial conditions” — wiki

Simplistic linear approaches to climate systems are doomed to failure — chaos is the major factor preventing climate models from being able to accurately project further than just a few weeks or months — and at that they are only projecting major climate features, not any real useful details. They are improving but not much.

I cite a recent paper from Zhang and Kirtman in a prior comment on this thread that directly discusses this problem.

“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”

– IPCC TAR WG1, Working Group I: The Scientific Basis

Anytime you see SIMPLE explanations of anything climate related, know that is has been dumbed-down and is probably totally useless.

Yes, that is a good summary. To put it into the linear algebra

F + 0.75 ΔT = ΔT

where F is the forcing, your 1°C, and the second term is the effect of water vapor. That goes back to Arrhenius.

And yes, in effect Lord M is saying that 10°C is part of the signal (change) and has to be included somehow. In fact he would include it as 283 K, which really swamps the arithmetic.
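Solving that little equation shows where the 4 comes from; a sketch of the arithmetic, with 0.75 as the assumed feedback fraction from the example above:

```python
# Solve F + f*dT = dT for dT, giving dT = F / (1 - f).
# F is the direct (no-feedback) warming, f the feedback fraction.
def total_warming(F, f):
    assert f < 1.0, "a feedback fraction of 1 or more gives no finite answer"
    return F / (1.0 - f)

print(total_warming(1.0, 0.75))  # 4.0: 1 degree direct, 3 more from feedback
```

Note that nothing in the calculation involves the 10°C (or 283 K) base state; only the increments appear.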

If that is what he is saying, it seems obviously wrong. It’s like a category mistake of a very basic sort. I can’t even get my head around how to phrase it coherently.

We have, according to the theory, a steady state of some level, say 10C. We then apply heat to it, or stop it losing heat, and the temperature starts to rise. As it does so, it increases water vapor. When this whole process gets through we get a rise of 4C, which is 1C due to the initial warming of our heat application, and an additional 3C caused by the rise in water vapor’s heating effects.

This makes sense, whether its right or wrong, and can be stated coherently as a description of what is supposed to happen to the climate when a heating impulse is applied.

I don’t understand how to phrase what CM is saying that is different from this.

Curry and Lewis made an argument which also makes sense, whether it’s right or wrong, namely that if you look observationally at the effects of applied warming, the actual rise in temperature is lower than some estimates of the consequential changes imply.

This too makes sense. Whether the total effects of the initial warming trigger events which lead to a further one, two or three degrees – or even none – is an empirical matter.

In the example of the 10C, 1C and 3C above, what is CM saying the relationship is? He is saying something different from my crude summary. But what exactly? In my crude example the fact that we start with 10C is immaterial, the account would be identical if it were 100C. Why does he think the initial temperature plays any role in the calculation, and if so, what role is it supposed to play?

Or is he just saying in a roundabout way that the 3C number is too high and should in fact be something lower, like 0.5C?

What he’s saying boils down to his slide at https://wattsupwiththat.com/2018/08/15/climatologys-startling-error-of-physics-answers-to-comments/, which he calls “the end of the global warming scam in a single slide.”

There is the (with-feedback) ECS value he calculates from the no-feedback value, which is the temperature increase that doubling CO2 would cause without feedback. He calculates it, as Mr. Stokes does in his analogy at https://wattsupwiththat.com/2019/06/06/demystifying-feedback/#comment-2718602, by in essence extrapolating the with-feedback equilibrium temperature E as a function of the without-feedback equilibrium temperature R.

The difference lies in the extrapolation coefficient. For that quantity Mr. Stokes uses, as any high-school algebra student would know to do, the function’s local slope, which in Lord Monckton’s slide would be (E_2 − E_1)/(R_2 − R_1). As the slide shows, however, Lord Monckton instead uses the average slope E_1/R_1 or E_2/R_2. He has somehow convinced himself that feedback theory dictates this. It doesn’t (and the near equality between his alternative values merely indicates that the function is nearly linear).
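The two extrapolation rules can be contrasted with a small sketch. The operating points below are hypothetical numbers, not Lord Monckton’s, chosen only so that the two slopes differ visibly:

```python
# Two hypothetical operating points of a curve E(R)
R1, E1 = 255.0, 287.5
R2, E2 = 256.0, 290.6

local_slope = (E2 - E1) / (R2 - R1)  # slope between the points (Mr. Stokes)
average_slope = E2 / R2              # ratio through the origin (the slide)

dR = 1.05  # a further no-feedback increment, K
print(local_slope * dR)    # extrapolation using the local slope
print(average_slope * dR)  # extrapolation using the average slope
```

Far from the origin the two coefficients can differ by a large factor, which is the whole dispute.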

Quite some time after he started shopping this theory around (he started at least by the time of this talk: https://www.youtube.com/watch?v=Ebokc6z82cg) he hit upon a second rationale, which is that local-slope “measurements” are too noisy to rely upon, whereas the constituents of average slope are large values that tend to swamp the noise out. If we were sure that E as a function of R is nearly linear, that would make some sense. But, of course, that begs the question; the IPCC’s ECS estimates imply that it isn’t linear at all.

Occasionally, therefore, Lord Monckton backs up to telling us why that function can’t be much different from linear. He goes into Clausius-Clapeyron, logarithmic forcing functions, and a lot of hand-waving that not everyone would find compelling. (I’m not saying that he and his “eminent” co-authors haven’t got a compelling near-linearity demonstration squirreled away somewhere. But there’s no reason to believe that the paper he keeps mentioning is any more coherent than he’s shown himself to be so far.)

In other words, what we get is a motte-and-bailey argument in which the bailey is the impression he gave in the YouTube video: that he’s mathematically demonstrated a fundamental feedback-theory error on “climatology’s” part. The motte is the much-less-attractive physical argument that the function can’t be very nonlinear.

(All this, of course, ignores Mr. Hansen’s point that all these equilibrium quantities are unicorns anyway.)

Unfortunately, Mr Born knows less math than a high-school student. For the secant slope that he asserts (without evidence) to be the system-gain factor (which he calls the “extrapolation coefficient”) is nothing of the kind. It is merely a secant slope. To the extent that the function E(R) is a growth function, the system-gain factor E/R will always be less than the secant slope.

As to the alleged nonlinearity of E(R), I have already explained to Mr Born that official climatology considers the climate-sensitivity parameter, which encompasses the influence on temperature of the sensitivity-altering temperature feedbacks, to be “typically near-invariant”.

Thus, official climatology would take the ratio of the equilibrium sensitivity 32.5 K in 1850 to the 10 K reference sensitivity to the naturally-occurring, noncondensing greenhouse gases and derive therefrom a system-gain factor of about 3.2. It would then extrapolate to the warming from a CO2 doubling by taking Charney sensitivity as the product of 3.2 and the 1.05 K reference sensitivity, giving 3.4 K, which is indeed approximately the midrange estimate of Charney sensitivity in the models. In official climatology’s world, this is indeed – within the margin of uncertainty – a linear extrapolation, consistent with the statement in IPCC (2001, ch.6.1) that the climate-sensitivity parameter is “typically near-invariant”.
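The arithmetic in the paragraph above can be checked directly; this is a sketch using only the figures quoted there:

```python
E_1850 = 32.5  # quoted equilibrium sensitivity in 1850, K
R_1850 = 10.0  # quoted reference sensitivity to noncondensing GHGs, K

gain = E_1850 / R_1850      # system-gain factor
ref_2xCO2 = 1.05            # quoted reference sensitivity for a CO2 doubling, K
charney = gain * ref_2xCO2  # extrapolated Charney sensitivity, K

print(gain)     # 3.25
print(charney)  # about 3.4 K
```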

It is only when one remembers that the feedbacks present in 1850 respond not only to the naturally-occurring perturbation of the input signal (emission temperature) but also to the entire reference signal, which is the sum of that perturbation and of emission temperature, that one can see the problem. Contrary to the assertion that the climate-sensitivity parameter is typically near-invariant, official climatology’s Charney-sensitivity estimates imply a strong nonlinearity in E(R) that has no physical justification and, therefore, always leads to a contradiction.

Mr Born himself discovered this when he tried to set up a power-law function E(R) and found that the ratio of the feedback fraction in response to greenhouse-gas warming would be 11 times greater than the feedback fraction in response to emission temperature, which is self-evidently impossible, and is certainly in conflict with the statement that the climate-sensitivity parameter is “typically near-invariant”.

Somehow the LaTeX didn’t work. The average-slope values I was referring to were E_1/R_1 and E_2/R_2.

I’ll try the LaTeX again:

The difference lies in the extrapolation coefficient. For that quantity Mr. Stokes uses, as any high-school algebra student would know to do, the function’s local slope, which in Lord Monckton’s slide would be (E_2 − E_1)/(R_2 − R_1). As the slide shows, however, Lord Monckton instead uses the average slope E_1/R_1 or E_2/R_2. He has somehow convinced himself that feedback theory dictates this. It doesn’t.

Note how Lord Monckton buries his bald assertions in nomenclature to frighten the natives. His fanboys love it.

But if you work through his slide–that is, if you plot the two points he extrapolated from and the third point, which he extrapolated to–you’ll see his theory boils down to bad extrapolation.

As to whether the relationship in question actually is very nonlinear, I have no real opinion, although my guess is that it’s not. But his bald assertions and tortured readings of the literature don’t prove it’s not.

As to the 11 times he mentions, that’s from the example at https://wattsupwiththat.com/2019/06/05/the-moral-case-for-honest-and-competent-climate-science/#comment-2717128 by which I showed that, contrary to what Lord Monckton’s reasoning at about 17:50 in his talk at https://www.youtube.com/watch?v=kcxcZ8LEm2A implies, even a highly nonlinear function could result in very little average-slope change over a small interval.

By the way, if you think of E as a DC amplifier’s response to a stimulus R, the average slope I’ve been referring to would be the large-signal gain, while the local slope would be the small-signal gain. An engineer who relied on the former rather than the latter to assess stability could encounter a nasty surprise.
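That engineering point is easy to illustrate numerically. Here is a sketch with an arbitrary made-up nonlinear response curve, not a model of any real amplifier:

```python
def E(R):
    """Hypothetical nonlinear DC response of a device."""
    return R + 0.01 * R ** 2

R0 = 10.0  # operating point
h = 1e-6   # small step for a numerical derivative

large_signal_gain = E(R0) / R0               # output/input ratio ("average slope")
small_signal_gain = (E(R0 + h) - E(R0)) / h  # local slope at the operating point

print(large_signal_gain)  # 1.1
print(small_signal_gain)  # close to 1.2
```

Stability margins depend on the local slope, so the two must not be conflated.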

Your understanding is qualitatively correct, michel, but not quantitatively. It’s not nearly that large.

It is generally expected that warmer temperatures should increase the amount of water vapor in the atmosphere, because warmer air holds more moisture (roughly 7% more for each 1°C of warming). That is called water vapor feedback. This effect is usually approximated in climate calculations by assuming stable relative humidity as temperatures change. Under that assumption, warmer temperatures cause greater amounts of water vapor in the atmosphere, and since water vapor is a greenhouse gas, increased water vapor in the atmosphere should increase greenhouse warming: a positive feedback.

This is generally believed to be the most important positive climate feedback mechanism, by far.

I ran MODTRAN (Tropical Atmosphere), using the U. Chicago web interface, and found that with an older version (circa 2012), an increase in CO2 level produced a 65% greater increase in temperature with constant relative humidity (i.e., with water vapor feedback) than with constant absolute humidity (i.e., without water vapor feedback). However, they must have updated their MODTRAN version, because when I did the same exercise in 2015 it showed just over 8% amplification, rather than 65%.

The earlier value (+65%) is not far from AR5’s estimate of the combined effects of positive water vapor feedback and negative lapse rate feedback. AR5 considers Water Vapor and Lapse Rate feedbacks together (section 7.2.5, p. 587) and gives an estimated range of +0.96 to +1.22 W/m² per 1°C of warming for the net effect of the two feedbacks, combined. If we also assume that it takes 2.8 to 3.4 W/m² of forcing to cause 1°C of global warming, that would imply a 0.96/3.4 = 28% to 1.22/2.8 = 43% positive net combined feedback from water vapor & lapse rate feedbacks, which, with “compounding,” would result in a net amplification of 1/(1−ƒ) = 1/(1−(0.96/3.4)) to 1/(1−(1.22/2.8)) = 1.39× to 1.77×, adding 39% to 77% (best estimate 54%) to the original warming.

The highest estimate (by far!) that I’ve ever seen for the effect of water vapor feedback is the +300% figure you mentioned, from a 2013 “control knob” paper by Lacis, Hansen, et al., which claims (without support) that the “feedback contribution to the greenhouse effect by water vapour and clouds” effectively quadruples (adds 3× to) the warming effect of CO2 and other GHGs. That is certainly a fringe viewpoint.

Ah Dave, you are a wonderful man. I take back everything I have ever said about you.
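The 1/(1−ƒ) compounding arithmetic above can be reproduced in a few lines. This is a sketch using the AR5 feedback range and forcing figures quoted in the comment:

```python
# AR5 net water-vapor + lapse-rate feedback, W/m2 per deg C (quoted range)
feedbacks = [0.96, 1.22]
# Forcing per deg C of warming, W/m2, paired to give the low and high bounds
forcings = [3.4, 2.8]

for lam, F in zip(feedbacks, forcings):
    f = lam / F            # feedback fraction
    amp = 1.0 / (1.0 - f)  # net amplification with "compounding"
    print(round(f, 2), round(amp, 2))
# prints 0.28 1.39 and 0.44 1.77, i.e. 39% to 77% added warming
```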

“water vapor feedback is the +300% figure you mentioned, from a 2013 “control knob” paper by Lacis, Hansen, et al, which claims (without support)”

Such a wonderful takedown of it I have not seen before, Andy. It might even be +600%.

Even the name of the paper.

Pure magic.

I have a question, then. On a calm day, locally, the air temperature drops until it gets near the dew point, then it stops. You can even get a temperature inversion, where the air is ‘warmer’ than the ground. The dew point has a diurnal variation, too. It drops in the morning about an hour or so after sunrise to its low point and rises in the afternoon to its high point about an hour or so after sunset. That suggests that ‘warming’ does not, directly, change the absolute humidity much. If daytime sunshine doesn’t do it, how could ‘greenhouse’ gases do it?

Hi Griff

I don’t mind the non-linearity question. If small perturbations are considered (and 3 K out of 280 K might well be considered small), perhaps the linearity approximation is good enough. This wouldn’t hold for the claimed “tipping points”, but that’s an entirely separate question I don’t want to get into here.

On the question of amplification of small perturbations, if A and beta represent passive components, their values will both be less than 1.0. That means the closed loop gain A/(1 + A*beta) is absolutely less than 1.0. Therefore if the climate response to increased CO2 is a passive system, the closed loop response cannot amplify the input.

A passive component is energy dissipative. The output cannot contain more energy than its input and therefore amplification is not possible.
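The claim about passive components can be checked numerically for the gain formula quoted above. A sketch, sweeping A and beta across the open interval (0, 1):

```python
# If 0 < A < 1 and 0 < beta < 1, the closed-loop gain A/(1 + A*beta)
# is always below 1: a passive loop cannot amplify.
max_gain = 0.0
N = 100
for i in range(1, N):
    for j in range(1, N):
        A, beta = i / N, j / N
        gain = A / (1 + A * beta)
        max_gain = max(max_gain, gain)

print(max_gain)  # stays below 1.0
```

In fact A/(1 + A*beta) < A whenever beta > 0, so the conclusion holds for any positive beta, not just the sampled grid.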

In other words, climate sensitivity cannot be increased by amplifying processes. Is that how you see things Griff?

Or if it is not how you see things, can you explain the climate system components that are not passive (relevant to climate sensitivity) and give a reasoned suggestion of their numerical values to fit into the feedback model you describe.

Thanks

The most important aspect of any climate feedback that is pretty much never considered let alone discussed is the feedback on the change in flux at the TOA, itself. That is what determines future climate.

The irony, Tim, is that the feedback to TOA net flux is the ONLY feedback considered by energy balance models, which is what Lord Monckton is actually challenging. There is never any direct feedback to temperature. A change in temperature changes state variables which change the TOA net flux balance. Any impact of a temperature change on further temperature change is always via net TOA flux, since (normally by fundamental assumption) that is the only way that the climate system can heat or cool in a zero-dimensional model. This is one of several reasons why trying to partition an absolute temperature in 1850 which is assumed to be in equilibrium is a fool’s errand.

It’s curious that 1850 is so often mentioned as some kind of stable equilibrium state, given that it sits near the transition from the LIA to the present climate “optimum”. Almost as though it was latently primed at that point. While using the global mean to search for the greenhouse/feedback fingerprint is likely the only way to prove the point, proof can only come from knowing the shift from a baseline profile. In a few decades we’ll have the temporal range of real global observation to confirm longer term trends empirically. But the undisturbed baseline will still be elusive. A bit of a fool’s game. Must say, from the input here and elsewhere, the likelihood of stepping from glaciated to interglacial into a third, unknown hot state seems exceedingly unlikely in the ice age we are currently in. Much more likely and intuitive that buffering by water, with its various states, mobility and other properties, will keep our climate quite liveable. The same can’t be said for our impact on prime land that we also depend on heavily, nor the ballooning population and inequity that exacerbate it.

I see Nick’s point in a way. CO2 has gone up, and temp has gone up. Now just explain the time lags and short term inverse relationships and…done (not really but you can see the temptation).

But other things in the atmosphere have gone up too. For every gallon of motor fuel burned, roughly a gallon of water is created and sent out the tail pipe, for instance. Does the earth have more water (and water vapour) than in 1850? Water vapour is a real effective GHG. Or is it a drop in the bucket? It is water vapour, and does rise up in the atmosphere, so already in the correct form to affect albedo, etc.

Isolating CO2 as the variable in an equation with many others, perturbing it, and then looking at the temperature increase over the last 160 years appears to be the main effort of the models. But CO2 is also the weakest of all the things that have an effect on the surface temperature (assuming constant sunlight). And there are significant periods of time when the effect is negative as well as positive.

Until that is understood, why spend trillions on it when a paper boy would have a better understanding of maintaining his ins and outs and predicting the weekly profits to save for the Schwinn?

Where is this model actually documented ? How is version control and testing and CM performed ? Each change in source code that implements each functional change in the document is kept where ?

Charlie June 7, 2019 at 6:13 pm

Nobody knows about since 1850 … but here is some recent data. Short answer is that total precipitable water has gone up about 5% since 1988.

Willis, 5% seems surprisingly large, since I would only expect about 7% more TPW per +1°C, and temperatures have risen only somewhere between 0.3 and 0.5°C since 1988.

But then I looked at your graph, and it appears to me that half of that 5% increase was really just the 2015-16 El Nino spike at the end.

Do you agree?

I’m not a climate scientist or a system theorist but I am an empirical scientist and I believe that equating the very complex system that is earth’s climate to a linear approximation of a transistor (or any other such readily deterministic system) is rather unrealistic to say the least. Clearly, there are inputs to the climate system that we don’t understand and that are therefore not accounted for by such models. Take the obvious causal relationship between the sun’s magnetic field strength and climate. There is no consensus mechanism to explain it so it is ignored. This is the only reasonable approach for a theorist to take – of course you have to ask about what else is being left out – but it also makes the reliability of these climate models highly suspect IMHO.

Nick, the “feedback diagram” you show immediately after “Example 1 — the abstract feedback system” and the words you employ in the “blue box” aren’t internally consistent. [Note: I assume “A” as depicted in the feedback loop diagram is the same thing as the “AOL” term in equations in the blue box.] To illustrate the inherent inconsistency between the “diagram” and the “words,” I choose to set AOL=1 and B=1. I know, it’s not a very good amplifier, but there’s nothing preventing me from using a unity-gain amplifier. You restricted B to be a fraction; but since 1 is a fraction, I’m free to set B=1.

Let Vin be the input voltage applied to the circuit–i.e., the input to the “+” side of the subtractor in your feedback diagram. As you wrote: “Without feedback, the input voltage V’in” {which at this point isn’t defined, but can’t be anything other than Vin} “is applied directly to the amplifier input. The according output voltage is Vout = AOL * V’in.” In the case of a unity gain amplifier (AOL=1), the output voltage, Vout=Vin.

You then continue: “Suppose now that an attenuating feedback loop applies a fraction B*Vout of the output to one of the subtractor inputs so that it subtracts from the circuit voltage Vin applied to the other subtractor input.” Okay, let’s do that. For B=1, the voltage B*Vout=B*Vin=Vin. For this case, both inputs to the “subtractor” are Vin, which means the subtractor output will be 0. The subtractor output is the input to the amplifier so the amplifier’s input is 0, which for a linear amplifier means the amplifier’s output will also be 0.

All this sounds fairly straight forward. All I’ve done is subtract the output of a unity gain amplifier from the amplifier’s input and fed the difference back to the amplifier. But according to your formula for the closed-loop gain [AFB = AOL/(1 + B*AOL)], the value of AFB for the above circuit parameters is AFB=1/(1+1)=1/2.

In summary, I have a feedback-loop circuit with a closed-loop gain of 1/2 to which I input a non-zero voltage signal and get 0 output. Say what?

I don’t know how the “amplifier community” does things; but in the digital signal processing world, when you represent a feedback device (a device whose output is fed back as an input to the device) or for that matter a feed-forward device, a shift (time delay) is inherently assumed somewhere in the feedback/feed-forward diagram. Without the shift, it is impossible to decide the order in which numerical operations should be carried out. [Note: A similar thing happens in a spreadsheet when operations involving multiple cells loop back on themselves–the spreadsheet doesn’t know where to start or stop operations.] Then if you want to know how the device behaves to a constant input signal (i.e., to a direct current or DC input), you input a DC signal and observe the output as a function of time.

Reed,

“the words you employ in the ‘blue box’”

As said, the words in the blue box are the accompanying text, i.e. in Wiki. My point in quoting it was just to show how it was solely manipulating linear expressions. But I think the fault in your analysis is here:

“Okay, let’s do that. For B=1, the voltage B*Vout=B*Vin=Vin. For this case, both inputs to the ‘subtractor’ are Vin”

You are subtracting the pre-feedback Vout. But it should be the post-feedback value, as yet unknown. That is why I urge people to stick with the linear equations, and solve them as they appear. In your case, with AOL=B=1, that gives

Vout = V’; V’ = Vin − Vout, and so

Vout = Vin/2

which is a perfectly straightforward gain = 0.5 amplifier.
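The “just solve the linear equations” step can be written out as a sketch. The AOL = B = 1 case is the one discussed above; the other values are illustrative:

```python
# Loop equations: V' = Vin - B*Vout and Vout = AOL*V'.
# Substituting and rearranging gives the closed form below; no iteration needed.
def closed_loop(Vin, AOL, B):
    return AOL * Vin / (1 + AOL * B)

print(closed_loop(6.0, 1.0, 1.0))  # 3.0, the gain-0.5 amplifier above
print(closed_loop(1.0, 4.0, 0.3))  # about 1.818
```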

Nick, I apologize. I may have entered this comment into the wrong location. I’m going to re-enter it here so that I know it’s in the right place.

Okay Nick try this.

For all time AOL=1 and B=1. For all time up to but not including time 0, no signal has been input to the circuit. At time 0 and for all time thereafter the input signal is 6. I argue that immediately prior to time 0, the only “acceptable” value of the circuit output is 0, because any non-zero value of the circuit output at time 0 would have to be justified.

I’m going to “loop” through the circuit a number of times. I start by computing the circuit output at the end of the first “loop.” Multiply the circuit output (0) at the start of the first “loop” by the factor B=1. That product is 0. Subtract that product from the input signal giving a difference of 6. Multiply that difference by AOL=1 to get the circuit output at the end of the first “loop.” Thus, the circuit output value at the end of the first “loop” is 6.

Now make a second “loop” through the circuit. At the start of the second “loop” the circuit output is 6 so that the product of B=1 and the circuit output is also 6. Subtract that product from the input signal (6) giving a difference value of 0. Multiply that difference by AOL=1 to get the circuit output at the end of the second “loop.” Thus, the circuit output value at the end of the second “loop” is 0.

Now make a third “loop” through the circuit. At the start of the third “loop” the circuit output is 0 so that the product of B=1 and the circuit output is also 0. Subtract that product from the input signal (6) giving a difference value of 6. Multiply that difference by AOL=1 to get the circuit output at the end of the third “loop.” Thus, the circuit output value at the end of the third “loop” is 6.

Repeat the above ad infinitum. The circuit output oscillates between 0 and 6, which has an average value of 3, but at no time is 3.

Now if you allow the circuit output to be 3 immediately prior to starting the first “loop,” then you get the following. Multiply the circuit output (3) at the start of the first “loop” by B=1. That product is 3. Subtract that product from the input signal (6) for a difference of 3. Multiply that difference by AOL=1 to get 3, which is the circuit output at the end of the first loop. Repeat ad infinitum. The output value at the end of each “loop” will always be 3, which is consistent with a gain of 0.5.

But if the input signal had been 8 instead of 6 and I wanted to ensure that the circuit output value was always 4 (i.e., corresponded to a closed-loop gain of 0.5), then I would have to have set the circuit output value at the start of the first “loop” to 4, not 3.

Thus to ensure a circuit gain of 0.5, the circuit would have to have a priori knowledge of the input signal’s strength. Where did the circuit get this knowledge?

If this doesn’t convince you there is a problem, try the following. Open a spreadsheet. Enter into Cell F1 the value of AOL—in our case, enter 1 into Cell F1. Enter into Cell F2 the value of B—in our case, enter 1 into Cell F2. Enter into Cell F3 the value of the input signal—in our case, enter 6 into Cell F3. Your equation for the output is “output = AOL*(input – B*output)”. Let Cell F4 store the output. Then the equation to be entered into Cell F4 is: “ = F1*(F3 – F2*F4)”. What you’re going to see when you hit the “return” button is a “circular reference warning” indicating that the spreadsheet doesn’t know what to do. In your post you wrote: “A computer (or a student?) could have solved them at any stage.” Microsoft Excel must be deficient, because it can’t solve “them.”

“Repeat the above ad infinitum. The circuit output oscillates between 0 and 6, which has an average value of 3, but at no time is 3.”

This again is emblematic of the tangles you can get into by not just using the linear equation. The equation is, dropping the “out” subscript from Vout,

V = V_in – V

Easy to just solve. But you insist on creating a sequence

V_n = V_in – V_(n-1)

Well, V_n = .5*Vin is a valid solution to that relation. But it turns out that you can’t converge to that solution by forward recurrence. OK, that just means that it is a bad way of solving a very simple linear relation. Don’t do it.

In fact, if the relation were V_n = V_in – p*V_(n-1), it would converge to the correct answer if p<1. But there is no reason to do it that way. My point in this post is that you start with the linear relation, and then derive different ways of looking at it. If you derive something that doesn’t help, go back.

“p1”: Bah, HTML. That should read p<1.
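The convergence condition is easy to verify with a sketch of the recurrence itself. Vin = 6 as in the earlier example; the p values are illustrative:

```python
def iterate(Vin, p, n=200, V=0.0):
    """Forward recurrence V_n = Vin - p*V_(n-1), starting from V."""
    for _ in range(n):
        V = Vin - p * V
    return V

Vin = 6.0
print(iterate(Vin, 0.5))  # converges to Vin/(1+p) = 4.0
print(iterate(Vin, 1.0))  # does not converge: alternates 6, 0, 6, 0, ...
```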

Nick, to my comment (“Repeat the above ad infinitum. The circuit output oscillates between 0 and 6, which has an average value of 3, but at no time is 3.”) you responded with “This again is emblematic of the tangles you can get into by not just using the linear equation.”

Including that response, here’s a summary of where we stand to date. You created a circuit and developed an equation that represents the behavior (input to output) of that circuit. Specifically, for the circuit in your guest post, after re-arranging terms the equation you came up with that relates the output to the input is

Vout = constant * Vin

Someone comes along and, using (a) valid values for the circuit parameters (AOL and B) and (b) a valid input signal (in this case a DC or non-zero constant input), shows that when that signal is input to the circuit, the output is not a “constant times the input.” Your response is in essence: “You’re getting entangled in details. Just trust the equation.”

In math you can’t get entangled in details. The equation is either right or wrong. If by (a) inputting a valid signal to the circuit and (b) performing a valid analysis of the circuit output signal for that input signal the result contradicts the equation you purport represents the circuit, then the equation you purport represents the circuit is wrong. There is no wiggle room. In mathematics one counterexample is sufficient to prove an equation wrong.

Next, you select a different valid set of values for the circuit parameters (you required that P be less than 1) and argue that for that circuit and the valid input signal used previously, the output signal converges to the correct answer. There is nothing in your derivation of the “input-signal/output-signal equation” that restricts its use to “converged signals only.” Your derivation applies to any output signal, not just a “converged output signal.” Arguing that for a specific input signal the converged output signal equals the input signal times a constant does not make your equation valid. One would still conclude your equation does not represent the circuit.

This completes my summary of where we are to date. Now to go forward. Another valid input signal is the sum of two components: a DC component and a sinusoidal component. For example, the DC component might be “6” and the sinusoidal component might be “4*SIN(w*t+theta)”, where “4” is the amplitude of the sinusoid, “w” is the non-zero angular frequency of the sinusoid, and theta is the phase of the sinusoid at time t=0. If this signal is input to the circuit, then even when P is less than 1, not only does the output signal NOT equal “the input signal times a constant,” the output signal will not even converge to “the input signal times a constant.” The reason is that the “gain” of the circuit for a DC input is not equal to the “gain” of the circuit for a non-zero-frequency sinusoidal input [see https://www.allaboutcircuits.com/textbook/semiconductors/chpt-1/amplifier-gain/]. For the input signal

6 + 4*SIN(w*t + theta)

the output signal converges to

C1*6 + C2*4*SIN(w*t + theta + phi)

where (a) C1 and C2, which are functions of the circuit parameters AOL and B, are NOT in general equal but may be equal for a specific input frequency w, and (b) the angle phi may not be zero. Since (a) C1 does not equal C2 and (b) phi may be non-zero, even after waiting an infinite amount of time (until convergence is reached), the output signal

C1*6 + C2*4*SIN(w*t + theta + phi)

will not be a constant times the input signal

6 + 4*SIN(w*t+theta).

This is another example (the third) of how your equation

Vout*(1 + B*AOL) = AOL*Vin

that purports to relate the output signal, Vout, to the input signal, Vin, does not represent the operation of your circuit diagram.

I’ve given three examples of where a valid signal is input to the circuit and the output signal is not a constant times the input signal. For proving that your equation does not represent circuit operations, a single example would have been sufficient. I used three examples only because you made objections (in my opinion, invalid objections, but that’s for the reader to decide) to the first two examples. Your claims that I am somehow “entangled” in the details and that I “should just use the linear equation” to relate the output signal to the input signal may be acceptable to the AGW community, but they require a leap of faith I am not willing to take. The fact is your equation that relates the circuit output to the circuit input is just flat wrong. It may be correct for a few specific input signals, but it is not correct for a general valid input signal.

Reed,

“a valid input signal (in this case a DC or non-zero constant input) shows that when that signal is input to the circuit, the output is not a ‘constant times the input.’”

No, you don’t show that. The system (which is not mine, but Wiki’s) says that the output will be half the input, with your parameters. And it will.

What you are doing is imagining a sequential way in which the system might respond. The amplification first, then the feedback etc. But that isn’t part of the system specification.

In fact, what you are doing is an established solution technique, fixed-point iteration. To solve

x=f(x)

you take an x value, substitute on the right, get a new value on the left, and so forth. For an equation F(x)=0, you can write the sequence as

x_{n+1} = b*F(x_n) + x_n

for any nonzero b, you can see that if that converges, it will converge to a solution of F(x)=0. But it may not converge. Then you have to try another value of b.

You have to distinguish between the existence of a solution and the ability of any particular method to find it.
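A sketch of that iteration for the simple equation in this thread, F(V) = Vin − 2V (whose root is Vin/2), showing how the choice of b decides whether the sequence converges:

```python
def relax(Vin, b, n=100, V=0.0):
    """Iterate V <- b*F(V) + V with F(V) = Vin - 2*V."""
    for _ in range(n):
        V = b * (Vin - 2 * V) + V
    return V

Vin = 6.0
print(relax(Vin, 0.4))  # converges to the root 3.0 (error shrinks by 0.2 each step)
print(relax(Vin, 1.0))  # b = 1 reproduces the oscillating forward recurrence
```

Convergence requires |1 − 2b| < 1, i.e. 0 < b < 1; the failure of one particular b says nothing about the existence of the solution.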

Of course, all your argument is irrelevant to an application like climate, where there is just a linear equation and no circuit diagram. You can create one if you think it helps understanding. And you can try to solve it by an iterative method, though there is no good reason for doing so. But if it doesn’t help, don’t do it.

“You’re getting entangled in details. Just trust the equation.”

The equation was the starting point. You created the details in trying to solve it. If tangled, try another way. The equation is the datum.

No, Nick, the starting point isn’t the equation, it’s the circuit diagram. If the starting point is the equation, what does it have to do with the feedback loop diagram? It’s every bit as logical to claim your equation represents the number of flapjacks it takes to cover the roof of a doghouse.

By the way, the way you depicted the circuit, if AOL = 4 and B=0.3, for a constant non-zero input signal the output of your circuit doesn’t even converge to a finite value–it grows without bound. Just try it Nick and see what happens. This non-convergence will occur whenever |AOL*B| is greater than 1. Your formula says the gain should be 4/(1 + 4*0.3) = 1.8181818—i.e., for a constant input signal of magnitude 1, the output signal should converge to 1.8181818. The output signal doesn’t converge to 1.8181818; it grows without bound. Your formula is valid only when the output signal converges; and for a DC input signal, that only occurs when the absolute value of AOL*B is less than 1.
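Both behaviours can be seen side by side in a sketch: the algebraic solution for AOL = 4, B = 0.3 exists, while the pass-by-pass loop update diverges because |AOL*B| = 1.2 > 1:

```python
AOL, B, Vin = 4.0, 0.3, 1.0

# Algebraic solution of Vout = AOL*(Vin - B*Vout):
Vout = AOL * Vin / (1 + AOL * B)
print(Vout)  # about 1.818

# Pass-by-pass update V <- AOL*(Vin - B*V): the error grows by 1.2 each pass
V = 0.0
for _ in range(30):
    V = AOL * (Vin - B * V)
print(abs(V - Vout))  # large: this iteration scheme diverges
```

Divergence of this particular iteration scheme does not mean the closed-loop solution fails to exist, which is exactly the distinction at issue.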

You’re an interesting guy, Nick. You draw a circuit diagram with (a) an input, (b) two multipliers (AOL and B), (c) one summer (actually a differencer), (d) a feedback loop, and (e) an output. You apply reasoning to that circuit and derive a formula for the circuit gain, the relationship between the magnitude of the input signal and the magnitude of the output signal. Someone (and you can do this yourself) generates (a) a valid input signal (a unit DC step function, i.e., a signal that is 0 before some time and 1 after that time) and (b) valid values for the circuit parameters (AOL=4, B=0.3). He/she inputs that signal to the circuit and monitors the output signal. The output signal grows without bound, i.e., the output signal alternates sign with the absolute value of both the positive and negative terms increasing without bound. Faced with the discrepancy between your theoretical circuit gain and the observed signal output, you don’t question either your derived gain formula or the circuit input/output analysis. You simply say to that person: “You’re getting entangled in details. Just trust the equation.” I sure hope that way of thinking is not representative of the AGW community.

PS. I don’t care what Wikipedia says. Just set your AOL and B circuit parameters to, respectively, 4 and 0.3, and input the unit DC step signal to your circuit. Execute the circuit and let me know what the output signal looks like.

Reed

“You’re an interesting guy, Nick. You draw a circuit diagram with…”

You seem determined to overlook that it is actually a Wiki article, not mine. But you might like to think about where this gets you. Are you saying that feedback amplification can’t work? That this perfectly orthodox feedback loop can’t work?

There is not much I can add to what I’ve said above.

Nick,

“You seem determined to overlook that it is actually a Wiki article, not mine. But you might like to think about where this gets you. Are you saying that feedback amplification can’t work? That this perfectly orthodox feedback loop can’t work?”

“There is not much I can add to what I’ve said above.”

It may be a Wiki article, but you used it to “Demystify Feedback.” Where it gets me is: “Wiki’s analysis is wrong.” I’m not saying that feedback amplification can’t work. I am saying that Wiki’s analysis of feedback amplification as represented by the diagram you presented in your Guest Post is wrong.

In a previous comment, I wrote: “I sure hope that way of thinking is not representative of the AGW community.” Upon reflection, that does seem to be a fairly common practice in the AGW community. Build a model for how you believe the earth behaves and will behave in the future. Run that model and compare its predicted outputs with measurements of the quantities predicted by the model. If the two disagree, say: “Quit getting entangled in the measurements; the model is the datum. Trust the model.”

At the end of your most recent comment, you wrote: “There is not much I can add to what I’ve said above.” I feel the same way – not that there isn’t anything you can add, but I’ve pretty much said all I feel is helpful. Given that, unless you specifically request I answer a question, this will be my last comment on this issue.

Sincerely,

Reed Coray

Nick writes “That this perfectly orthodox feedback loop can’t work?”

I’ll add that in the real world, of course amplifiers like that work because there is no discrete time step.

Only in the world of AGW’s GCMs, where the calculations are necessarily time stepped, is this a possible issue.
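A rough way to see that point (a toy model of mine, not anything from the thread): drop the discrete step and give the loop a first-order lag, tau*dV/dt = AOL*(Vin - B*V) - V. The steady state is then the gain formula AOL*Vin/(1 + AOL*B), and it is stable for any AOL*B > -1, including the AOL=4, B=0.3 case where the stepped recurrence blows up.

```python
# Continuous-time sketch of the loop: tau * dV/dt = AOL*(Vin - B*V) - V,
# integrated with a simple explicit Euler step.
def settle(aol, b, vin, tau=1.0, dt=0.01, steps=5000):
    v = 0.0
    for _ in range(steps):
        v += (dt / tau) * (aol * (vin - b * v) - v)
    return v

print(settle(4.0, 0.3, 1.0))  # approaches 4 / (1 + 1.2) = 1.8181...
```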

Re: Your reference

[…]Additional conditions being necessary (as a matter of fact, conditions of regularity), Paul Lévy imposes such conditions either (1) at infinity—a difficult subject, which we shall leave aside—or (2) in the neighborhood of a determinate value, which the successive iterates of ƒ are assumed to approach.

Unless I’m misinterpreting this, it sounds a lot like your recommendation

“Well, V_n = .5*Vin is a valid solution to that relation. But it turns out that you can’t converge to that solution by forward recurrence. OK, that just means that it is a bad way of solving a very simple linear relation. Don’t do it.”

…which is essentially just to use the “answer” which you must have got to by other means.

In my comment https://wattsupwiththat.com/2019/06/06/demystifying-feedback/#comment-2720155, I wrote: “Given that, unless you specifically request I answer a question, this will be my last comment on this issue.” I lied—or in the words of the Watergate politicians, “That statement no longer applies.”

In a previous comment I had claimed that a valid input signal (in this case a DC or non-zero constant input) shows that when that signal is input to the circuit, the output is not a “constant times the input.” Nick pointed out that for the values I chose for the Negative Feedback Circuit parameters (AOL=1, B=1), I had done no such thing. His words were: “No, you don’t show that. The system (which is not mine, but Wiki’s) says that the output will be half the input, with your parameters. And it will.”

So I’ll illustrate my point with better examples. In particular, I select three cases of valid Negative Feedback Circuit parameter values and valid (bounded) input signals. In none of the cases is the output signal a constant multiple of the input signal.

[Note: For any bounded input signal, when the absolute value of the product of AOL and B is less than 1, the output signal will always be bounded—i.e., the Negative Feedback Circuit is stable. For a bounded input signal, when the absolute value of the product of AOL and B is greater than 1, the output signal will become unbounded—i.e., the Negative Feedback Circuit is unstable. For a bounded input signal, when the absolute value of the product of AOL and B is equal to 1, the behavior of the Negative Feedback Circuit is likely to be a function of the frequency content of the input signal.]

In the first case, the input signal is the unit step function (0 for time t<0, and 1 for time t≥0). For this case, the output signal converges to a constant multiple of the input signal, and that multiple is the purported gain of the Feedback Circuit. So the first case partially illustrates Nick’s contention that a negative Feedback Loop can/will attenuate an input signal.

In the second case, the input signal is the unit step function plus a sinusoid multiplied by the unit step function. For this case, the output signal has the same frequency components as the input signal (a DC term and a single sinusoid term); but the gain of the DC component is not equal to the gain of the AC component. The gain of the DC component is equal to the purported Closed-Loop gain; but the gain of the AC component is NOT equal to the purported Closed-Loop gain. Thus for the second case even after convergence, a single gain (constant multiplier) does not relate the output signal to the input signal.

In the third case, the input signal is again the unit step function, but the output signal is unbounded—i.e., does not converge to a finite value.

The contention of the Wiki portion of your Guest Post is that for a Negative Feedback Circuit with Feedback Parameters AOL and B, (a) the output signal will be a constant times the input signal, and (b) the Closed-Loop Gain, AFB, (which is defined to be the output signal divided by the input signal) is given by

AFB = AOL/(1 + B*AOL).

The three cases below illustrate that this is not the case.

Case 1:

Negative Feedback Circuit Parameters:

AOL = 4

B = 0.2

Product of AOL*B = 0.8—therefore the Negative Feedback Circuit is stable.

Purported gain, AFB = AOL/(1 + B*AOL) = 2.2222222222…

Input Signal:

Unit Step Function (0 for time t<0; 1 for time t≥0)

Output signal:

0 for time t<0; at time t=0 starts fluctuating (first 6 outputs: 4, 0.8, 3.36, 1.312, 2.9504, 1.63968), but converges fairly rapidly to a constant value of 2.2222222222.
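Those Case 1 numbers can be reproduced in a few lines (a sketch, assuming the unit-delay stepping used throughout this exchange; the function name is mine):

```python
# Unit step input from n=0; one pass of delay around the loop:
# out[n] = AOL * (1 - B * out[n-1]).
def step_response(aol, b, n):
    out, seq = 0.0, []
    for _ in range(n):
        out = aol * (1.0 - b * out)
        seq.append(out)
    return seq

seq = step_response(4.0, 0.2, 60)
print(seq[:6])  # 4, 0.8, 3.36, 1.312, 2.9504, 1.63968 (up to float rounding)
print(seq[-1])  # settles at 4/1.8 = 2.2222...
```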

Conclusion: (a) The output signal is NOT a constant times the input signal; (b) but the output signal converges to a constant value that IS the product of the input signal and a constant (the purported gain).

Case 2:

Negative Feedback Circuit Parameters:

AOL = 4

B = 0.2

Product of AOL*B = 0.8—therefore the Negative Feedback Circuit is stable.

Purported gain, AFB = AOL/(1 + B*AOL) = 2.2222222222…

Input Signal:

Unit Step Function …plus… Unit Step Function multiplied by a Sinusoid of Amplitude “A=0.5”, Frequency “F=0.25*FS”, and 0 Phase at time 0, where FS is the sampling rate. Thus, the Input Signal is 0 for time t<0 and 1 + 0.5*SIN(2*pi*n*0.25) for time t≥0.

Output signal:

0 for time t<0; at output index n=0, starts fluctuating (for indices n = 0, 1, 2, 3, 4, 5, 6, 7 the output signal values are, respectively, 4, 2.8, 1.76, 0.592, 3.5264, 3.17888, 1.456896, 0.8344832); but converges fairly rapidly to a four-point repeating sequence (for indices n = 99,992, 99,993, 99,994, 99,995, 99,996, 99,997, 99,998, 99,999 the output signal values are, respectively, 3.197831978353, 3.441734417317, 1.246612466137, 1.002710027090, 3.197831978327, 3.441734417339, 1.246612466140, 1.002710027088). Note: The converged four-point repeating output signal corresponds to a signal of the form

OUT(n) = 2.2222222222 + 1.561737619 * SIN(2*pi*n*0.25 + 0.674740942)

That is, the converged output signal is the sum of a DC term (magnitude 2.2222222222) and a sinusoid term [Amplitude of 1.561737619, Frequency of 0.25*FS, and Phase (radians) at time t=0 of 0.674740942]. Thus, both the input and output signals consist of a DC term and a common-frequency Sinusoid term. However, (a) the output DC term is 2.2222222222 times the input DC term, (b) the output sinusoid amplitude is 3.123475238 times the input sinusoid amplitude, and (c) at time t=0 the phase of the output sinusoid is not equal to the phase of the input sinusoid. Thus after convergence, (a) the actual DC gain, 2.2222222222, is equal to the purported Closed-Loop gain, but (b) the actual Sinusoid gain, 3.123475238, is NOT equal to the purported Closed-Loop gain. As such, the purported Negative Feedback Circuit gain of 2.2222222222 at best applies only to the input signal’s DC component and does NOT apply to the input signal’s AC component.
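The different AC gain is what one would expect from the loop’s transfer function, assuming one sample of delay around the loop (out[n] = AOL*(in[n] − B*out[n−1])), namely H(z) = AOL/(1 + AOL*B*z^-1). A sketch evaluating H at DC and at a quarter of the sampling rate (the helper name is mine):

```python
import cmath

# H(z) = AOL / (1 + AOL*B * z**-1), evaluated on the unit circle.
def loop_gain(aol, b, freq_fraction):
    zinv = cmath.exp(-2j * cmath.pi * freq_fraction)  # z^-1 at f = freq_fraction * FS
    return aol / (1 + aol * b * zinv)

dc = loop_gain(4.0, 0.2, 0.0)    # DC gain: 4/1.8 = 2.2222...
ac = loop_gain(4.0, 0.2, 0.25)   # gain at FS/4
print(abs(dc), abs(ac), cmath.phase(ac))
# abs(ac) = 4/sqrt(1 + 0.8**2) = 3.1234752..., phase = atan(0.8) = 0.6747409...
```

Both numbers match the converged amplitude and phase reported above, so the simulated sequence and the frequency-domain view agree.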

Conclusion: (a) The output signal is NOT a constant times the input signal; and (b) ignoring the phase shift in the AC term, even after the output signal converges to a repeating pattern, the output signal is NOT a constant times the input signal.

Case 3:

Negative Feedback Circuit Parameters:

AOL = 4

B = 0.3

Product of AOL*B = 1.2—therefore the Negative Feedback Circuit is unstable.

Purported gain, AFB = AOL/(1 + B*AOL) = 1.8181818…

Input Signal:

Unit Step Function

Output signal:

0 for time t<0; at time t=0 starts growing without bound (just try it).

Conclusion: (a) The output signal is NOT a constant times the input signal, and (b) the Negative Feedback Circuit gain is NOT the purported gain.

Now Nick claimed all of the above is irrelevant to climate. In particular, he wrote: “Of course, all your argument is irrelevant to an application like climate, where there is just linear equation and no circuit diagram.” If Nick’s perspective is true, then why did he bring Wiki’s Negative Feedback Circuit to his “Demystifying feedback” guest post? In fairness, I’m not sure if Nick or the blog moderator titled the guest post “Demystifying feedback”; but independent of the title, it appears that by bringing up Wiki’s Negative Feedback Circuit diagram he has confounded the discussion.

“That statement no longer applies”

I think the word used by Ziegler was “no longer operable”. Anyway, here is the fallacy in your reasoning. You take a relation between input Vin and output V

V = Vin – V

and assume that an input Vin will first affect only the first V, via the amplifier, and not the second, via the feedback. Then you bring in the feedback, and so on. But there is nothing in the specification to justify that. The feedback is probably just a resistor. There is no reason to believe, as you say, that the output from the amplifier (first V) and the termination of the feedback, which is the same point, will remain stationary. Or to put it another way, that no current would flow through the feedback resistor while the amplifier output was adjusting to the input.

The sequence that you describe is sometimes used, as in the math sequence I described. The reason is that if somehow that could be implemented, and it converged, then the converged result would be a satisfactory solution. But it actually can’t be implemented, and it doesn’t converge.

The title was mine. The argument is that in each of the cases listed (and in other cases too) the circuitry described simply specified linear equations, and all the feedback does is elementary manipulation of those equations. If you don’t believe the Wiki circuit, which is bog standard, try the transistor circuit. Do you really think that can’t work?

“operable”

Sorry, operative.

Nick,

I’ve been giving the matter of feedback loop representations of heat transfer between objects a lot of thought. I would like to continue this discussion with you, but I would like to do so using an exchange of emails instead of an exchange of comments on this thread. There are two reasons for changing the method of communication. First, Microsoft Word contains features that I struggle to reproduce in HTML blog comments. For example, equations and figures can easily be included in Word documents, but are awkward (at least for me) to include in comments; and since I can attach Word documents to an email, email exchanges allow me to more easily express my thoughts.

Second, at some time in the future Anthony Watts will close comments on this thread, which will make communication on this issue harder to track. [Note: I encountered the “termination of comments” communication problem when I was discussing a different issue using Joanne Nova’s blog as the communication vehicle.] If either of us decides he would like to make the content of an email attachment a comment on this blog (or any other blog), he is free to do so provided he can convert the equations/figures in the email-attached Word file to equations/figures compatible with comments for that blog.

I’m sure we can get Anthony to (a) send you my email address and (b) send me your email address. Let me know if you’re interested in such an exchange by responding to this comment and so indicating. If you’re interested, I will start the process of exchanging email addresses.

Sincerely, Reed Coray

Nick,

In lieu of comments on this thread, as best I can tell you have declined to take up my offer to exchange information via emails: https://wattsupwiththat.com/2019/06/06/demystifying-feedback/#comment-2724233. Given that, I’ll describe in this comment why I believe heat transfer between two objects cannot always be, and when heat transfer via conduction or convection is present should not be, represented by a feedback loop whose input is the internal rate energy enters one of the objects.

Define a two-object system in the vacuum of cold space (near 0 Kelvin) as follows. Object “A” is a solid sphere of radius “RS”>0 whose surface acts like a blackbody—i.e., absorbs all radiation incident on it and radiates energy in accordance with Planck’s cavity (blackbody) radiation law. Distributed symmetrically just below the surface of the sphere is a constant source of internal thermal energy at a rate “H” Watts.

Object “B” is an inert (no internal source of thermal energy) concentric spherical shell of inner radius “RI”>“RS”, outer radius “RO”>“RI”, and thermal conductivity “k”, whose inner/outer surfaces act like blackbodies.

If the system is in energy-rate-equilibrium (ERE)—i.e., if the rate energy enters the system (or any part thereof) is equal to the rate energy leaves the system (or any part thereof)—then accounting for the sphere’s internal source of thermal energy and the radiation from all surfaces (sphere, shell inner, shell outer), the total rate, Hsphere_total, energy enters the sphere is given by:

Hsphere_total = H + 4*pi*sigma*(RS^2)*{(RO – RI)*H/(4*pi*k*RI*RO) + [H/(4*pi*sigma*RO^2)]^(1/4)}^4

where sigma is the Stefan-Boltzmann constant = 5.670373212*10^(-8) Watts per meter^2 per K^4.

As a function of (a) geometrical constants, and (b) the internal power input to the sphere, H, the form of the equation that gives the total rate Hsphere_total energy enters the sphere is

Hsphere_total = H + C1 * [C2*H + C3*H^(1/4)]^4

Where C1, C2 and C3 are constants that depend on the sphere/shell geometry, but are independent of H. Note that if C2=0, the form of the equation for Hsphere_total is

Hsphere_total = H + C4*H = H*(1 + C4).

That is, when C2=0 the total rate, Hsphere_total, thermal energy enters the sphere is directly proportional to the internal rate, H, thermal energy enters the sphere. As such, it is possible to express the total rate thermal energy enters the sphere in terms of (a) the rate, H, internal energy enters the sphere and (b) a closed loop feedback system whose “per-loop” multiplier is “0<D<1”. Specifically, for a “per-loop” multiplier of “D”, the total rate energy enters the sphere (including the internal rate) is

H/(1 – D).

By finding the value of “D” such that

(1 – D)*(1 + C4) = 1,

a feedback loop can be constructed that will result in the correct total rate, Hsphere_total, of thermal energy entering the sphere. I’m not sure (a) how to determine what the feedback multiplier should be other than to use independent logic to determine the total rate energy enters the sphere and “back out” the feedback multiplier, or (b) what is gained by constructing such a feedback loop; but such a loop can be constructed.

However, if C2 is not equal to zero, Hsphere_total cannot be written as a linear function of H. If a feedback loop converges, the output is directly proportional to the input—i.e., is a linear function of the input. Thus, for the example in this comment, a feedback loop whose input is a constant, H, cannot be used to compute the total rate energy enters the sphere.
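That distinction is easy to see numerically. In the sketch below the constants C1, C2, C3 are arbitrary stand-ins (not the geometric values above), chosen only to exercise the functional form Hsphere_total = H + C1*[C2*H + C3*H^(1/4)]^4:

```python
# The functional form from the comment above, with placeholder constants.
def h_total(h, c1, c2, c3):
    return h + c1 * (c2 * h + c3 * h ** 0.25) ** 4

# C2 = 0: the map is linear in H (here C4 = C1*C3**4 = 2), so doubling H
# doubles the total, and a loop multiplier D with (1 - D)*(1 + C4) = 1 exists.
c4 = 2.0
d = c4 / (1.0 + c4)
print(h_total(2.0, 2.0, 0.0, 1.0) / h_total(1.0, 2.0, 0.0, 1.0))  # 2.0
print(1.0 / (1.0 - d), 1.0 + c4)  # identical closed-loop gains

# C2 != 0: the map is nonlinear in H, so no single loop gain reproduces it.
print(h_total(2.0, 2.0, 0.5, 1.0) / h_total(1.0, 2.0, 0.5, 1.0))  # not 2.0
```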

I believe that whenever thermal energy (heat) transfer exists via convection and/or conduction, the likelihood of representing heat transfer as a feedback loop is nil. For the Earth/Earth-atmosphere system, energy to a high degree enters/leaves the system via radiation. It is for this reason that I believe the AGW community overly focuses on radiative heat transfer relative to conductive/convective heat transfer. Using a feedback loop to characterize heat transfer between the atmosphere and the Earth’s surface is one such example.

Nick, the real world isn’t that simple

https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2015MS000493

Abstract : The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model ver 3.0. Systematic bias reductions of time‐mean shortwave cloud forcing (∼10 W/m2) and longwave cloud forcing (∼5 W/m2) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom‐heavy deep convection as a global model time step is reduced may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. The results may also be useful for helping to tune them.

When changing the time step size changes the way a GCM works, that confirms the feedbacks depend on the calculations themselves.

“When changing the time step size changes the way a GCM works, that confirms the feedbacks depend on the calculations themselves.”

GCMs do not use feedbacks, but people sometimes calculate them from the solutions, as a way of interpreting them. The solutions themselves are of differential equations, which are approximated using small differences. As the differences get larger (timestep), that approximation deteriorates, and the solution changes. That isn’t due to feedback, just to the behaviour of discretised PDEs.

Nick writes “GCM’s do not use feedbacks”

Everything a GCM calculates is a feedback against the sun’s energy simply radiating away like for the barren moon. Or do you want to play some sort of definition game?

Nick writes “As the differences get larger (timestep), that approximation deteriorates, and the solution changes. That isn’t due to feedback, just to the behaviour of discretised pde’s.”

And confirms they were only ever a fit in the first place.

Nick,

I thought some more about your reply to my comment. One way to analyze the behavior of a “loop” is to walk through the loop. For a feedback loop, this means walking through the loop an infinite number of times. Each time you walk through the loop, the various “loop” values change. In your writeup where you developed your set of linear equations, the value of V’in in your first equation (input to the triangular multiplier) represents V’in for the first pass through the loop. The value of V’in in your second equation represents V’in for the second pass through the loop. Thus, these two values are not the same, and setting them equal for the purpose of developing a set of linear equations misrepresents the operation of the loop.

For each pass through the loop you should assign to each loop variable (signal input, triangular multiplier input, output) an index (subscript) that corresponds to the number of times you have gone through the loop. If you had done this, your first equation would have a subscript value of n=0 for V’in and your second equation would have a subscript value of n=1 for V’in. Equating these values, as you did when you substituted V’in0 for V’in1, is not valid; and any linear equations you generate from that substitution do not represent the operation of the loop.

“Actually, it’s only according to Lord Monckton that their “test rig” proves their point; we haven’t seen that test rig.”

According to our Lord, all specs and circuit design are in the paper they want to publish. An independent laboratory also built such a rig, according to the specs. Let’s wait for the paper – I’d like to believe sooner or later it will be published. But in fact we don’t even need to wait for that – I reckon a proof-of-concept rig should be fairly easy. I think the authors used actual temperature input/output, but in fact any input, such as voltage, can be used. An Arduino board would do a charm.

“Just as we haven’t seen his “eminent” co-authors entering into the rough and tumble of defending what Lord Monckton says is their belief.”

Thank God for that. If they engaged in the blogosphere, starting to fight with critics because ‘someone on the Internet is wrong’, that would be a pretty worrying sign for me – amateurs in the game. The fact they keep a low profile is for me a very good sign that we’re actually dealing with respected specialists.

“And I’m pretty sure the “test rig” merely proves that using average rather than local slope works if the system is linear—but also that local slope, whose use Lord Monckton tells us is the “grave error” that “climatology” makes, works, too.”

I reckon the whole purpose of such a rig is to demonstrate that a feedback unit acts upon the entire input plus disturbances, not just disturbances. And that’s a far better approach than the analogies and ‘thought experiments’ we’re exercising here. Firstly, analogies cannot replace hard evidence; secondly, they can be misleading. You’ve seen Nick’s analogy with the hand pump – if that worked as Nick imagines, the tyre would actually be inflating our hand pump, and not vice versa.

“Unfortunately, Mr. Watts stopped running my posts when I exhibited insufficient deference to Lord Monckton’s (exceedingly questionable) expertise, so you won’t see my test-circuit design.”

You can stick it in the comments, as a link to drawings.

PS – our Lord just posted another text! Looks like the discussion is getting hotter.

Okay Nick try this.

For all time AOL=1 and B=1. For all time up to but not including time 0, no signal has been input to the circuit. At time 0 and for all time thereafter the input signal is 6. I argue that immediately prior to time 0, the only “acceptable” value of the circuit output is 0, because any non-zero value of the circuit output at time 0 would have to be justified.

I’m going to “loop” through the circuit a number of times. I start by computing the circuit output at the end of the first “loop.” Multiply the circuit output (0) at the start of the first “loop” by the factor B=1. That product is 0. Subtract that product from the input signal giving a difference of 6. Multiply that difference by AOL=1 to get the circuit output at the end of the first “loop.” Thus, the circuit output value at the end of the first “loop” is 6.

Now make a second “loop” through the circuit. At the start of the second “loop” the circuit output is 6 so that the product of B=1 and the circuit output is also 6. Subtract that product from the input signal (6) giving a difference value of 0. Multiply that difference by AOL=1 to get the circuit output at the end of the second “loop.” Thus, the circuit output value at the end of the second “loop” is 0.

Now make a third “loop” through the circuit. At the start of the third “loop” the circuit output is 0 so that the product of B=1 and the circuit output is also 0. Subtract that product from the input signal (6) giving a difference value of 6. Multiply that difference by AOL=1 to get the circuit output at the end of the third “loop.” Thus, the circuit output value at the end of the third “loop” is 6.

Repeat the above ad infinitum. The circuit output oscillates between 0 and 6, which has an average value of 3, but at no time is 3.

Now if you allow the circuit output to be 3 immediately prior to starting the first “loop,” then you get the following. Multiply the circuit output (3) at the start of the first “loop” by B=1. That product is 3. Subtract that product from the input signal (6) for a difference of 3. Multiply that difference by AOL=1 to get 3, which is the circuit output at the end of the first loop. Repeat ad infinitum. The output value at the end of each “loop” will always be 3, which is consistent with a gain of 0.5.

But if the input signal had been 8 instead of 6 and I wanted to ensure that the circuit output value was always 4 (i.e., corresponded to a closed-loop gain of 0.5), then I would have to have set the circuit output value at the start of the first “loop” to 4, not 3.

Thus to ensure a circuit gain of 0.5, the circuit would have to have a priori knowledge of the input signal’s strength. Where did the circuit get this knowledge?

If this doesn’t convince you there is a problem, try the following. Open a spreadsheet. Enter into Cell F1 the value of AOL—in our case, enter 1 into Cell F1. Enter into Cell F2 the value of B—in our case, enter 1 into Cell F2. Enter into Cell F3 the value of the input signal—in our case, enter 6 into Cell F3. Your equation for the output is “output = AOL*(input – B*output)”. Let Cell F4 store the output. Then the equation to be entered into Cell F4 is: “ = F1*(F3 – F2*F4)”. What you’re going to see when you hit the “return” button is a “circular reference warning” indicating that the spreadsheet doesn’t know what to do. In your post you wrote: “A computer (or a student?) could have solved them at any stage.” Microsoft Excel must be deficient, because it can’t solve “them.”
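For what it’s worth, the “circular” cell is just a statement that one linear equation holds, and the algebra is a one-liner. A sketch contrasting the algebraic solution with the walk-through-the-loop recurrence (names mine):

```python
# Solve out = AOL*(vin - B*out) algebraically: out = AOL*vin / (1 + AOL*B).
def solve_loop(aol, b, vin):
    return aol * vin / (1.0 + aol * b)

print(solve_loop(1.0, 1.0, 6.0))  # 3.0, half the input

# The pass-by-pass recurrence with AOL = B = 1 never reaches 3; it oscillates:
out, seen = 0.0, []
for _ in range(6):
    out = 1.0 * (6.0 - 1.0 * out)
    seen.append(out)
print(seen)  # [6.0, 0.0, 6.0, 0.0, 6.0, 0.0]
```

Whether the physical circuit behaves like the recurrence or like the algebra is exactly the point under dispute here; the code only shows that the two calculations differ.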

A few thoughts come to mind after reading Nick’s article, especially thinking back to his earlier reply about how models forecast weather accurately for a couple of weeks, but increasingly fail beyond that. This behavior ties back to a couple of mental reservations I’ve had about the modelling idea in general, and goes toward explaining why I’m in the sceptic camp.

1. Accuracy of linearized models relies on deviations being small enough so that higher power terms are unimportant. The failure of the weather (and climate) models might be due to development of excursions requiring cubic (time-symmetric) and higher-order treatment.

2. Truncation artifacts of a trivial type might apply to time variation as well. Nick models the state function as f(x0), rather than f(x0,t). An audio analogue to the latter might be predicting output from an overheating amplifier, where the operating conditions shift enough to substantially change gain or frequency response.

Time dependence of the climate state function f could arise, for instance, by oceanic upwelling: this could change the surface temperature, which would not only slightly disrupt the thermal balance, but might modify the viscosity and turbulence of the mixing layer, possibly affecting thermal transfer rates between air and water and CO2 ocean solubility and sequestration. The time scale for this variation is 100-1000 years.

Changes in biomass might also have an effect. Why was C14 sequestered so rapidly (~8 months) after the 1960s nuclear tests, while the Bern profile persists much longer? On the face of it, one might conclude that an initial sequestering of CO2 triggers de-sequestration processes that go on much longer (micro-biomass is a possible suspect). If so, the state function f is being altered in a way indescribable by a linear perturbation.

To account for this in modelling terms, the state function f(x0) needs to be extended to f(x0,t) = f(x0, t0) + df(x0)*t+…. Unlike (1), this won’t de-linearize the equations (at least for shorter times), but it does break the steady-state assumption. However, since underlying hydrodynamic processes may well be cyclic, corresponding cyclic or stochastic descriptions, more complex than simple linear “drift”, might be necessary. After all, climatic variations show little indication of a tendency to achieve a steady state, if past history is a guide.

Despite some complexity, the math itself is “tedious but straightforward”. The larger question is, have ALL the relevant factors been identified, or are some missing?

I do think that the models ought to work to a degree where they can explain the colonization of Greenland, or the cold spells of the Justinian plague and the little ice age. We have a couple of thousand years of civilizational (proxy) behavior available for testing. After validation there, we can discuss to what extent we should trust their predictions to set policy.

“It is wrong to include variables from the original state equation. One reason is that they have been accounted for already in the balance of the state before perturbation.”

I can’t follow the math from either side. Take a balance sheet. That’s 1850. An income statement from that point on makes no reference to that balance sheet. (This is a simplification.) You need to track inputs and outputs to get an income statement. You don’t say, we lost money, but look at the beginning balance sheet from 1850. That’s a distraction. We aren’t interested in the beginning or ending balance sheets as much as we are the income statement. The income statements drive the various balance sheets from different dates. Not the other way around. (This is a simplification.)

Balance sheets are still important. If you have a lot of money, you can lose money for a long time. The thermal mass of all the oceans is like a huge amount of cash on a balance sheet. Which means you can add a lot of CO2 before those change a lot. So, we should give the correct amount of weight to each thing. The balance sheets and the income statements.

Yes, that is a reasonable analogy. If you think of total growth of wealth, you might include capital appreciation. Value of asset after, compared with before. All these go into a rate of change of worth (income) statement.

What Lord M is doing, in effect, is adding total asset values in as income.

Mr. Stokes, Mr. Watts, I need your help.

I have been debating the climate change farce with a nephew of mine who is a professor of chemistry at Yale, his wife is an assistant professor in Physics at Yale. Me, I am a retired carpenter.

They told me to take two two liter plastic bottles, fill each one with water one quarter full, drop a couple of Alka Seltzer tablets in one bottle, seal up both bottles tight, let them sit in the sun for an hour and then measure the temp of the air. I did tell them that this is a variation of the Al Gore/Bill Nye experiment that was shown to be absolutely fraudulent, by you Mr. Watts. I told them I would do the experiment. I told them that I would do it 3 times: 1 tablet, 2 tablets, 3 tablets. Take initial and final temp of both the water and air. I know that the reaction is endothermic; I know that the CO2 bottle will be pressurized, thereby retarding evaporation, which cools. Kind Sirs, what else do I need to know?

John D, I guess you have googled;

https://principia-scientific.org/the-deliberately-false-greenhouse-gas-co2-experiment/

TonyN, thank you very much. I clicked on and read that article. I have one last favor.

What would be the resulting ppm of CO2 caused by reacting 1000 mg of anhydrous citric acid with a healthy excess of sodium bicarbonate in a 2 liter plastic container that has 600 ml of water? Add 400 ppm of CO2 to that number. I am hoping, sir, that the resulting ppm of CO2 will be sky high, thereby invalidating the experiment. As a tangible means of saying thank you for your response, I have put you on my daily prayer list, you never know, huh.
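For what it's worth, the requested number can be estimated with back-of-envelope stoichiometry. A hypothetical sketch (it assumes the full triprotic reaction with excess bicarbonate, i.e. 3 mol CO2 per mol citric acid, ideal-gas behaviour at 25 °C, and that CO2 dissolving back into the water can be ignored):

```python
# Back-of-envelope estimate (hypothetical, not from the thread): how much
# CO2 does 1000 mg of citric acid release with excess NaHCO3, and what
# concentration does that give in the bottle's 1.4 L headspace?
# Assumptions: full reaction (3 CO2 per citric acid), ideal gas at 25 C,
# all CO2 stays in the headspace (dissolution in the water ignored).

R = 8.314       # J/(mol K)
T = 298.15      # K, about 25 C
P = 101325.0    # Pa, 1 atm

M_citric = 192.12            # g/mol, anhydrous citric acid C6H8O7
n_acid = 1.000 / M_citric    # mol from 1000 mg
n_co2 = 3 * n_acid           # triprotic: up to 3 mol CO2 per mol acid

V_headspace = (2.0 - 0.6) / 1000.0    # m^3 (2 L bottle minus 600 mL water)
n_air = P * V_headspace / (R * T)     # mol of air initially in the headspace

ppm_co2 = 1e6 * n_co2 / (n_air + n_co2) + 400   # plus the ambient 400 ppm

print(round(ppm_co2))   # roughly 215,000 ppm, i.e. around 21% CO2
```

If that estimate is even roughly right, the headspace ends up hundreds of times above ambient CO2 concentration, which is the point being asked about.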

This is in reply to

Beeze June 8, 2019 at 9:24 am

Consider this problem:

A 32 kg steel casting at a temperature of 425°C is quenched in 140 kg of oil initially at 20°C. Assuming no heat losses and the steel casting and oil to have constant specific heats of 502.416 and 2512.1 J/kg-K respectively, determine the change in entropy for a system consisting of the oil and casting.

There are three systems: 1) the steel casting which acts as a closed system, 2) the oil which is also acting as a closed system, 3) the oil and casting together which are acting as an isolated system because there is no heat gained or lost.

In this case, heat lost from the casting is gained by the oil, so total heat is zero:

Q_casting + Q_oil = 0

or

m_s c_s (T_f − T_s) + m_o c_o (T_f − T_o) = 0

Here I observe the standard where heat lost by a system is negative heat, and heat gained by a system is positive heat.

Next we need the definition of heat capacity:

C = lim_{ΔT→0} Q/ΔT

We can usually ignore the limit definition and use the constant pressure value. Also, specific heat capacity, little c, is big C divided by mass, or:

c = C/m

Solving for Q, we have:

Q = m c ΔT,

where ΔT is final temperature T_f minus initial temperature T_i. Plugging these terms into the above heat equation and solving for T_f we get:

T_f = (m_s c_s T_s + m_o c_o T_o) / (m_s c_s + m_o c_o)

Substituting the values from the problem, we get

T_f = (32 × 502.416 × 698.15 + 140 × 2512.1 × 293.15) / (32 × 502.416 + 140 × 2512.1) ≈ 310.9 K (about 37.7 °C)

The change in entropy for the system is:

ΔS_system = ΔS_casting + ΔS_oil

The Clausius definition of entropy is:

dS = dQ/T

Although this definition is for a reversible heat transfer, entropy is a state variable and that lets us usually ignore the reversible requirement. We also need the definition of heat capacity, but this time we will apply the limit:

dQ = C dT

Using the definition of specific heat capacity:

dQ = m c dT

We solve for dS:

dS = m c dT/T

Substituting and integrating we get:

ΔS = m c ln(T_f/T_i)

Now we can calculate the entropies. For the casting:

ΔS_casting = 32 × 502.416 × ln(310.9/698.15) ≈ −1.30 × 10^4 J/K

Oops, we have a negative entropy change. However, the second law only applies to isolated systems, so it is possible for a closed system's entropy change to be negative without violating the second law. For the oil:

ΔS_oil = 140 × 2512.1 × ln(310.9/293.15) ≈ +2.06 × 10^4 J/K

Now we compute the system entropy:

ΔS_system ≈ 2.06 × 10^4 − 1.30 × 10^4 ≈ +7.6 × 10^3 J/K
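The arithmetic above is easy to check with a short script (same assumptions: constant specific heats, no heat losses):

```python
# Sketch checking the quench arithmetic above: 32 kg steel at 425 C
# dropped into 140 kg oil at 20 C, constant specific heats, no losses.
import math

m_s, c_s, T_s = 32.0, 502.416, 425.0 + 273.15    # casting (kg, J/kg-K, K)
m_o, c_o, T_o = 140.0, 2512.1, 20.0 + 273.15     # oil

# Energy balance: m_s c_s (T_f - T_s) + m_o c_o (T_f - T_o) = 0
T_f = (m_s * c_s * T_s + m_o * c_o * T_o) / (m_s * c_s + m_o * c_o)

# Entropy change of each closed system: dS = m c ln(T_f / T_initial)
dS_casting = m_s * c_s * math.log(T_f / T_s)
dS_oil = m_o * c_o * math.log(T_f / T_o)

print(round(T_f, 2))                  # ~310.85 K final temperature
print(round(dS_casting))              # negative: the casting cools
print(round(dS_oil))                  # positive: the oil warms
print(round(dS_casting + dS_oil))     # positive overall: second law holds
```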

There are several things this example shows:

1. Heat is a conserved quantity. We couldn’t even begin the calculation if it wasn’t for the conservation of energy and heat.

2. The conservation of heat did not prevent this system from reaching equilibrium.

3. The conservation of heat does not violate the second law of thermodynamics.

4. The second law only applies to isolated systems.

And there you go.

Jim

Operating points are quite interesting. This all assumes that there is room for change (that the quantity can be increased by a factor of 2, say).

And if the “amplifier” is actually saturated? Increasing the input by 10% with a gain of 100 will have near zero effect.

Look at water vapor absorption. It nearly matches CO2 absorption, and there is roughly 100 times as much of it in the atmosphere, saturating the bands it absorbs in.

If there is already 99.9% absorption by water vapor, adding CO2 will do very little.

GHG theory destroys itself on its own terms.
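The saturation point above can be made concrete with a toy clipped amplifier (a purely hypothetical sketch; the numbers stand for nothing physical):

```python
# Toy saturating amplifier: linear gain of 100, output clipped at a
# +/- 10 V supply rail. Purely illustrative numbers.
def amp(v_in, gain=100.0, rail=10.0):
    """Amplifier output, clipped to the rails."""
    return max(-rail, min(rail, gain * v_in))

# Linear regime: 10% more input gives 10% more output.
print(round(amp(0.050), 3), round(amp(0.055), 3))   # 5.0 5.5

# Saturated regime: 10% more input changes the output not at all.
print(round(amp(0.50), 3), round(amp(0.55), 3))     # 10.0 10.0
```

Whether the real atmosphere sits in anything like the clipped regime is of course the contested question; the sketch only illustrates the mechanism being claimed.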

The frequent misuse of the analytic concept of feedback is one of the more egregious features of “climate science” as it seeks to wrap its ill-founded conclusions in the terminology of well-established system science. What is not revealed here is that operational amplifiers in electronic systems are invariably INDEPENDENTLY powered, an important factor found nowhere in the planetary climate system, which has no appreciable power source other than solar irradiance. While highly fungible in its manifold earthly expressions, solar energy may be locally stored, but its flux CANNOT be multiplied system-wide. Moreover, the usual analytic treatment of control systems assumes that there’s no drain of output power by the feedback loop. In the geophysical case, by contrast, we have only partial recirculation of thermal energy.

A far more credible system analogy would be a complex RC or RLC system, with no signal feedback whatsoever. Such an analogue would provide a more tractable means of treating changes in system response characteristics induced as a follow-on effect by changes or modulations of input power. That, after all, is the poorly articulated sense of what is often really meant by “feedback” in the climate context.

“but its flux CANNOT be multiplied system-wide”

It is indeed solar energy that provides the power supply, in the form of the up to 240 W/m2 of net IR that flows upward through the atmosphere. This is analogous to the power current flowing through a triode valve, say. GHGs modulate that current, with gain, just as does the grid in the triode. The amplified signal can then be fed back.

But as said here, the enthusiastic use of circuit concepts comes mainly from folks here, not climate scientists. They mainly use feedback terms to describe the behaviour of coefficients in a linear relation, usually from a flux balance. If there is a sum of coefficients multiplying what is regarded as a driving variable, or input, then the negative ones are described as positive feedback, and vice versa, anchored by one dominant positive one (Planck) which keeps the sum positive. That is a descriptor; nothing much hangs on its use.

The gain factor for triodes applies to the voltage (i.e., energy level), not the power current, which involves the inexorable flow of time.

There’s simply no way that power can be amplified in a passive system; it needs to be continually produced. (Otherwise, power utilities would use mere looping of transmission lines to multiply their profits.) That’s what makes the arithmetic “climate science” take on ostensible surface power fluxes (in the 500+ watts/m^2 range) wholly fantastic. You will not find any such aphysical nonsense in presentations by bona fide scientists (e.g., Peixoto & Oort). “Feedback” is not the hand-waving meme of “folks here,” who generally tend to grasp the rigorous sense of the term far better.

Some climate feedbacks have nothing to do with power fluxes, or even temperatures.

Among the most important climate feedbacks are the strong negative (stabilizing) feedbacks which regulate the CO2 level in the atmosphere. The higher CO2 levels go, the faster CO2 is removed from the atmosphere, by terrestrial greening and the oceans:

higher CO2 level → faster uptake by biosphere & oceans → lower CO2 level
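That stabilizing loop can be caricatured as a first-order relaxation. A hypothetical sketch, with uptake proportional to the excess over a pre-industrial baseline; the baseline, rate constant, and emissions figures are made-up illustrative numbers, not measurements:

```python
# Hypothetical sketch of the negative feedback above: CO2 uptake
# proportional to the excess over a pre-industrial baseline. All numbers
# are illustrative placeholders.
def step(c, emissions, baseline=280.0, k=0.02):
    """One year: add emissions (ppm/yr), remove k * (excess over baseline)."""
    return c + emissions - k * (c - baseline)

c = 280.0
for year in range(200):
    c = step(c, emissions=2.0)   # constant 2 ppm/yr input

# With constant input the level stops rising where uptake balances it,
# at baseline + emissions/k = 280 + 100 = 380 ppm.
print(round(c, 1))   # close to, and approaching, 380 ppm
```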

“It is like pumping air into a tyre with a puncture: the harder you pump the faster the air escapes.” – Clive Best (though he was actually discussing a different negative feedback)

“The gain factor for triodes applies to the voltage (i.e., energy level), not the power current, which involves the inexorable flow of time.”

The anode current is not inexorable. The grid voltage modulates it. The voltage gain is not the key to the device; a transformer can increase the voltage too. The point is that the triode delivers its voltage at the output with a lower output impedance than a transformer. More current can be drawn, hence more power. This comes as a deduction from the idle current. The equivalent of idle current in climate is that 240 W/m2 upward IR.

When global temperature rises because of CO2, more water vapor enters the air. It absorbs part of that 240 W/m2, and this gets reradiated. Some adds to the downwelling IR, and so warms the surface. That warming is due to the diversion from the 240 W/m2. Just as in the triode.

Paradoxically, a passive system can indeed “amplify” power, at least in the sense you’re referring to.

No, wait, hear me out.

Simplify the climate system radically enough to consider only steady-state radiative transfer. From the thousand-mile-high view we have, net of reflection, simply 240 W/m^2 flowing into the earth and 240 W/m^2 flowing back out. Now, the system’s passive. But that passive system has “amplified” the 240 W/m^2 coming in to cause 390 W/m^2 at the surface: a surface temperature of 288 K.

How does it do that without an internal power source? Multiple counting: part of the incoming power absorbed and re-emitted by the surface gets absorbed by the atmosphere and re-emitted back toward the surface, where it’s again absorbed and re-emitted so that we count it again. We need no internal power source, because we really haven’t added power. We’ve only counted it more than once, so incoming power is “amplified” to arrive at the surface’s emitted power.
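The multiple counting described above is just a geometric series. A minimal sketch, taking the comment's 240 and 390 W/m2 figures and solving for the implied re-absorbed fraction f:

```python
# Minimal sketch of the "multiple counting" above: if a fraction f of what
# the surface emits comes back and is re-absorbed, the steady-state surface
# flux is the geometric series 240 * (1 + f + f^2 + ...) = 240 / (1 - f).
def surface_flux(incoming, f, terms=200):
    """Sum the re-absorption series term by term (converges for f < 1)."""
    return sum(incoming * f**n for n in range(terms))

f = 1 - 240.0 / 390.0        # fraction needed to turn 240 W/m^2 into 390
print(round(surface_flux(240.0, f), 1))   # ~390.0, no power source added
```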

The surface temperature thus depends on the degree to which we double count, i.e., on the degree to which the surface again absorbs and at steady state therefore emits again power it has already emitted previously. And the degree to which the surface re-emits depends on the atmosphere’s infrared-light opacity, which depends in part on atmospheric water-vapor concentration—which in turn depends on surface temperature.

So there’s positive feedback: a surface-temperature increase increases atmospheric opacity, which increases surface radiation, i.e., the surface temperature. That opacity is like a valve, and there’s no reason in principle why the valve can’t control far more power than it takes to operate it.

Now, I’m not saying the opacity change and thus the positive feedback actually is very great; I don’t think it is. (And, yet again, we’ve intentionally left out many other real-world mechanisms for the sake of discussion.) But there’s nothing about feedback theory generally that prevents power “amplification” of the type climatologists postulate.

In anticipation of the above responses, I’ve already pointed to:

There’s nothing at all new presented here that would justify the analytic treatment of climate as a feedback control system in any rigorous dynamical sense of the term. Nick’s would-be counter-example of a triode is that of partial recirculation of stored energy (not power), typical of the static gain seen in process engineering. By contrast, the transfer function of a genuine (closed-loop) feedback system is given by H/(1-HG), where H is the open-loop transfer function, G is that of the feedback loop, and both are functions of frequency. His conception of back-radiation is that of a mathematician, not a geophysicist who properly distinguishes between extensive heat fluxes and mere intensive expressions of state.
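For the record, the closed-loop expression quoted above, H/(1 - HG), is easy to evaluate directly. A hypothetical scalar sketch (real H and G would be complex-valued functions of frequency, as the comment notes):

```python
# Scalar sketch of the closed-loop transfer function quoted above,
# H / (1 - H G), in the positive-feedback sign convention. Values are
# illustrative only; real transfer functions depend on frequency.
def closed_loop(H, G):
    """Closed-loop gain; diverges as the loop product H*G approaches 1."""
    return H / (1.0 - H * G)

print(closed_loop(2.0, 0.0))              # no feedback: open-loop gain 2.0
print(closed_loop(2.0, 0.25))             # loop product 0.5: gain 4.0
print(round(closed_loop(2.0, 0.45), 2))   # loop product 0.9: gain 20.0
```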

ΔR is the change in flux at TOA, which is the GHG forcing. ΔT is the surface temperature response. The feedback factors are T for temperature, w for water vapor, C for clouds and α (= a) for albedo. What they are actually doing (multiplying by λ) is writing a flux balance:

ΔR = (λ_T + λ_w + λ_C + λ_α) ΔT
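To illustrate that kind of flux balance with numbers, here is a sketch. The feedback parameters are rough magnitudes of the kind reported by Soden & Held (2006), used here only as placeholders:

```python
# Illustrative flux-balance arithmetic for the comment above. Feedback
# parameters (W/m^2/K) are rough, Soden & Held (2006)-style magnitudes;
# treat them as placeholder numbers, not results.
forcing = 3.7          # W/m^2, canonical doubled-CO2 forcing
planck = 3.2           # dominant stabilizing (restoring) Planck response
feedbacks = {
    "water_vapor": 1.8,
    "lapse_rate": -0.8,
    "cloud": 0.7,
    "albedo": 0.3,
}

# Flux balance: forcing = (planck - sum of feedbacks) * dT
net_lambda = planck - sum(feedbacks.values())
dT = forcing / net_lambda
print(round(net_lambda, 1))   # ~1.2 W/m^2/K net
print(round(dT, 1))           # ~3.1 K; with no feedbacks it would be ~1.2 K
```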

Small question,

If CO2, the GHG in question, is itself increased by the surface temperature response (increased water temperature means more CO2 outgassed from the oceans), where is this factor taken into account?

Demystifying feedback. Guest post by Nick Stokes,

A hoot.

The trick is to distract the audience from the real game at hand.

Well done, sir.

The point being that the article is not about demystifying feedback at all.

The goal is to muddy the waters wherein CM is wading.

Not that it needs much, as the eloquent but overladen word salad he uses hides understanding from mere mortals.

But,

the mere fact that Nick has felt the need to make this attack means that there must be some substance, some message, some understanding to glean from it.

This could come in 3 ways: a simpler, precise explanation by CM [unlikely, he doesn’t get it], an explanation in reverse by Nick in a Damascene conversion [unlikely, he doesn’t get it], or one of the regulars with a bit of inspiration.

Feedback is a very difficult concept, due to the interplay of numerous factors other than CO2, plus the unknown interactions of both CO2 itself and its associated temperature rise with those other factors.

Ceteris paribus there should be a consistent temperature rise with CO2 rise.

A linear relationship one would be stoked to admit.

And yet there is none.

One can see the CO2 at Mauna Loa doing its little sawtooth movement upward in an impossibly regular manner, yet the temperature runs to its own beat. True, they both finish up from start to end, but there is not one sign of synchronicity.

Now one could put this down to Natural Variation.

But Natural Variation of the magnitude to completely obliterate any vestige of relationship would imply that the signal is not there.

The interesting part of Nick’s summary is that there should be a positive feedback of 3 times the passive signal, which would make even the hint of a relationship much more obvious [magnified].

But zip, nil, nada.

I will give him a possible CO2/temperature relationship in science, but virtually no positive feedback yet detectable in practice.