Guest Post by Willis Eschenbach
Now that my blood pressure has returned to normal after responding to Dr. Trenberth, I returned to thinking about my earlier, somewhat unsatisfying attempt to make a very simple emulation of the GISS Model E (hereinafter GISSE) climate model. I described that attempt here; please see that post for the sources of the datasets used in this exercise.
After some reflection and investigation, I realized that the GISSE model treats all of the forcings equally … except volcanoes. For whatever reason, the GISSE climate model only gives the volcanic forcings about 40% of the weight of the rest of the forcings.
So I took the total forcings, and reduced the volcanic forcing by 60%. Then it was easy, because nothing further was required. It turns out that the GISSE model temperature hindcast is that the temperature change in degrees C will be 30% of the adjusted forcing change in watts per square metre (W/m2). Figure 1 shows that result:
Figure 1. GISSE climate model hindcast temperatures, compared with temperatures hindcast using the formula ∆T = 0.3 ∆Q, where T is temperature and Q is the total forcing used by the GISSE model, with the volcanic forcing reduced by 60%.
What are the implications of this curious finding?
First, a necessary detour into black boxes. For the purpose of this exercise, I have treated the GISS-E model as a black box, for which I know only the inputs (forcings) and outputs (hindcast temperatures). It’s like a detective game, trying to emulate what’s happening inside the GISSE black box without being able to see inside.
The resulting emulation can’t tell us what actually is happening inside the black box. For example, the black box may take the input, divide it by four, and then multiply the result by eight and output that number.
Looking at this from the outside of the black box, what we see is that if we input the number 2, the black box outputs the number 4. We input 3 and get 6, we input 5 and we get 10, and so on. So we conclude that the black box multiplies the input by 2.
Of course, the black box is not actually multiplying the input by 2. It is dividing by 4 and multiplying by 8. But from outside the black box that doesn’t matter. It is effectively multiplying the input by 2. We cannot use the emulation to say what is actually happening inside the black box. But we can say that the black box is functionally equivalent to a black box that multiplies by two. The functional equivalence means that we can replace one black box with the other because they give the same result. It also allows us to discover and state what the first black box is effectively doing. Not what it is actually doing, but what it is effectively doing. I will return to this idea of functional equivalence shortly.
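As a toy illustration (mine, not anything from the model), here are the two boxes side by side; from the outside they cannot be told apart:

```python
def box_a(x):
    return (x / 4) * 8   # what the box actually does: divide by 4, multiply by 8

def box_b(x):
    return x * 2         # what the box is effectively doing

# Identical outputs for every input: the two boxes are functionally equivalent.
assert all(box_a(x) == box_b(x) for x in (2, 3, 5, 10))
```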
METHODS
Let me describe what I have done to get to the conclusions in Figure 1. First, I did a multiple linear regression using all the forcings, to see if the GISSE temperature hindcast could be expressed as a linear combination of the forcing inputs. It can, with an r^2 of 0.95. That’s a good fit.
However, that result is almost certainly subject to “overfitting”, because there are ten individual forcings that make up the total. With so many forcings, you end up with lots of parameters, so you can match most anything. This means that the good fit doesn’t mean a lot.
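For those who want to follow along, here is a minimal sketch of that first regression. The file names and layout are placeholders of mine, not the actual GISS file formats; I assume the ten forcings sit in a years-by-ten array with the hindcast in a matching vector:

```python
import numpy as np

# Hypothetical file layout: ten forcing columns, one hindcast temperature column.
F = np.loadtxt("giss_forcings.txt")          # shape (n_years, 10)
t_gisse = np.loadtxt("gisse_hindcast.txt")   # shape (n_years,)

# Ordinary least squares: one weight per forcing, plus an intercept.
X = np.column_stack([F, np.ones(len(F))])
b, *_ = np.linalg.lstsq(X, t_gisse, rcond=None)

fit = X @ b
r2 = 1 - np.sum((t_gisse - fit)**2) / np.sum((t_gisse - t_gisse.mean())**2)
print("r^2 =", r2)   # ~0.95, but with ten free weights that fit comes cheap
```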
I looked further, and I saw that the total forcing versus temperature match was excellent except for one forcing — the volcanoes. Experimentation showed that the GISSE climate model is underweighting the volcanic forcings by about 60% from the original value, while the rest of the forcings are given full value.
Then I used the total GISS forcing with the appropriately reduced volcanic contribution, and we have the result shown in Figure 1. Temperature change is 30% of the change in the adjusted forcing. Simple as that. It’s a really, really short methods section because what the GISSE model is effectively doing is really, really simple.
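In code, the whole method fits in a few lines. This is a sketch under the same assumptions as above, not the actual analysis code; the volcanic column index is a placeholder:

```python
import numpy as np

F = np.loadtxt("giss_forcings.txt")          # hypothetical layout, as before
t_gisse = np.loadtxt("gisse_hindcast.txt")

VOLCANIC = 7                                 # placeholder column index
adjusted = F.sum(axis=1) - 0.6 * F[:, VOLCANIC]   # volcanoes at 40% weight

t_emul = 0.3 * (adjusted - adjusted[0])      # delta-T = 0.3 * delta-Q

rms = np.sqrt(np.mean((t_emul - t_gisse)**2))
print(f"RMS error vs GISSE: {rms:.3f} deg C")     # ~0.05 per Figure 1
```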
DISCUSSION
Now, what are (and aren’t) the implications available within this interesting finding? What does it mean that regarding temperature, to within an accuracy of five hundredths of a degree (0.05°C RMS error) the GISSE model black box is functionally equivalent to a black box that simply multiplies the adjusted forcing times 0.3?
My first implication would have to be that the almost unbelievable complexity of the Model E, with thousands of gridcells and dozens of atmospheric and oceanic levels simulated, and ice and land and lakes and everything else, all of that complexity masks a correspondingly almost unbelievable simplicity. The modellers really weren’t kidding when they said everything else averages out and all that’s left is radiation and temperature. I don’t think the climate works that way … but their model certainly does.
The second implication is an odd one, and quite important. Consider the fact that their temperature change hindcast (in degrees) is simply 0.3 times the forcing change (in watts per meter squared). But that is also a statement of the climate sensitivity, 0.3 degrees per W/m2. Converting this to degrees of warming for a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modelers, which is about 0.8°C per W/m2 or about 3°C per doubling.
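Spelling out that arithmetic:

```latex
\Delta T_{2\times} = S \,\Delta Q_{2\times}
  = 0.3\ \tfrac{^{\circ}\text{C}}{\text{W/m}^2} \times 3.7\ \tfrac{\text{W}}{\text{m}^2}
  \approx 1.1\,^{\circ}\text{C per doubling},
\qquad\text{vs. the canonical}\quad
0.8 \times 3.7 \approx 3.0\,^{\circ}\text{C}.
```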
The third implication is that there appears to be surprisingly little lag in their system. I can improve the fit of the above model slightly by adding a lag term based on the change in forcing with time d(Q)/dt. But that only improves the r^2 to 0.95, mainly by clipping the peaks of the volcanic excursions (temperature drops in e.g. 1885, 1964). A more complex lag expression could probably improve that, but with the initial expression having an r^2 of 0.92, that only leaves 0.08 of room for improvement, and some of that is surely random noise.
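The lag experiment is a one-line extension of the regression sketched above (again, placeholders rather than the actual code; the derivative is a simple year-to-year difference):

```python
import numpy as np

adjusted = np.loadtxt("adjusted_forcing.txt")   # hypothetical: total forcing, volcanoes at 40%
t_gisse = np.loadtxt("gisse_hindcast.txt")

dq_dt = np.gradient(adjusted)                   # d(Q)/dt, year-to-year forcing change
X = np.column_stack([adjusted, dq_dt, np.ones(len(adjusted))])
b, *_ = np.linalg.lstsq(X, t_gisse, rcond=None)

fit = X @ b
r2 = 1 - np.sum((t_gisse - fit)**2) / np.sum((t_gisse - t_gisse.mean())**2)
print("r^2 with d(Q)/dt term:", r2)             # ~0.95, vs ~0.92 without it
```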
The fourth implication is that the model slavishly follows the radiative forcings. The model results are a 5-run average, so it is not clear how far an individual model run might stray from the fold. But since the five runs’ temperatures average out so close to 0.3 times the forcings, no individual one of them can be very far from the forcings.
Anyhow, that’s what I get out of the exercise. Further inferences, questions, objections, influences and expansions welcomed, politeness roolz, and please, no speculation about motives. Motives don’t matter.
w.

So why is there still a black box at all? Were the AGW folks claiming they are completely open now?
“Moritz Petersen says:
January 17, 2011 at 3:04 am
Very interesting article
What is the source of "3.7 W/m2 per doubling of CO2"? I have read this multiple times, but I would like to look into how it has been calculated.
Thanks
Moe”
Moe,
Emission and absorption lines in the atmosphere are rather well known. Projects like HITRAN, started in the 1960s by the military, have been going on for decades; almost every molecule type has been measured and/or calculated to have hundreds or thousands of spectral lines. If you take that and create a model of the atmosphere for pressure, temperature, and molecular content, the spectrum can then be built up by combining these tens of thousands of lines. You can then calculate the difference in power transmission and absorption between a reference point, like conditions in 1976, and another point, say with twice the CO2 present in 1976. Looking down from the tropopause, one finds that the difference in power reaching there is about 3.6 or 3.7 W/m^2 for our two points. The value is also for clear skies only, as clouds will block even more radiated power than that.
When warmers claim the science is well understood, this is what they are referring to, although they are essentially lying about it because there is still much here that is poorly understood. Also, they conveniently forget that cloud cover matters dramatically; it is unpredictable and accounts for over half of the sky conditions.
If you want to play with a simplified yet still sophisticated system online, check out the MODTRAN calculator by Archer. It doesn't do line-by-line calculations, but it does a pretty fair job of working at least up to about 70 km in altitude.
It's a fairly decent number to know, but its effects are not that straightforward.
******
Jim D says:
January 17, 2011 at 8:10 pm
A thought experiment. Imagine the forcing suddenly went to zero in the last year. Willis's model's temperature perturbation would immediately go to zero, but obviously the earth's (or the GISS model's) temperature would not respond that quickly, maybe taking decades.
******
Huh??? Hot, subtropical deserts go from 35C to near freezing every night.
With respect to volcanic aerosol impact, here's an old one:
Nir Shaviv: The Fine Art of Fitting Elephants
The questions on my thought experiment illustrate the point. In a day, the forcing changes hundreds of W/m2, but the sensitivity is maybe only 0.1 C per W/m2. For higher frequencies, the sensitivity goes down due to thermal inertia. Thermal inertia effects only go away gradually over decades, which is the whole reason why an equilibrium sensitivity has to be distinguished from the transient one. It has to do with the depth of the layer that the warming gets to, which also determines how lasting the effect will be.
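Jim D's frequency point can be made concrete with a one-box energy-balance model, C dT/dt = Q(t) − λT: for a sinusoidal forcing of angular frequency ω, the temperature amplitude per unit forcing amplitude is 1/√(λ² + (Cω)²), which shrinks as the frequency rises. A sketch with illustrative parameter values (mine, not from any model):

```python
import numpy as np

# One-box model: C dT/dt = Q(t) - lam*T. For forcing Q0*sin(w*t), the
# steady-state temperature amplitude is Q0 / sqrt(lam^2 + (C*w)^2), so the
# effective sensitivity falls with frequency. Values below are illustrative;
# in reality the effective heat capacity is itself smaller for fast cycles
# that only penetrate the top of the mixed layer.
C = 8.0      # heat capacity, W*yr/(m^2*K) -- roughly an ocean mixed layer
lam = 1.25   # feedback parameter, W/(m^2*K); 1/lam = 0.8 C per W/m^2 at equilibrium

for period in (1.0, 11.0, 100.0):   # annual, solar-cycle, secular
    w = 2 * np.pi / period          # angular frequency, 1/years
    sens = 1 / np.hypot(lam, C * w)
    print(f"period {period:6.1f} yr: {sens:.3f} C per W/m^2")
```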
The real problem is that these climate so-called "scientists" are practicing yellow journalism. It should be obvious that the term "contribution" should be used instead of the absurd term "forcing".
The fact of the matter is that the contribution of CO2 to the atmospheric temperature is nil, far too small an amount to even be measured, if there is any contribution at all.
jorgekafkazar says:
January 17, 2011 at 1:35 pm
Autocorrelation may reduce the significance of a given r^2 value, but it does not change the value of r^2.
Thanks,
w.
steven mosher says:
January 17, 2011 at 2:39 pm
Your meaning is not clear here, Steve. If you are saying build it using one half, test it using the other, the difference in results is trivial between the two halves. It’s a computer model we’re looking at after all, not the earth.
Both you and Joel Shore have made this claim, and neither of you seem to have thought it all the way through. Since there is no sign of the effect that you are claiming, despite the record being over ONE HUNDRED AND TWENTY YEARS LONG, handwaving and saying something like “Just wait a little longer, it’ll be here any day now, honest it will” as you two are doing strikes me as special pleading. Since the warming “in the pipeline” hasn’t shown up in 120 years of record, I’m afraid “wait another sixty years and you’ll see it” just doesn’t have the piquant ring of truth about it …
Since both you and Joel obviously think there is a greater than 120 year lag between the application of a forcing and the results of that forcing, I applaud your imaginations … but you have to come up with some kind of data to back that claim up, you can't just say that if we didn't find it in a 120-year dataset we'll find it in a longer one ….
Look, I’m not saying that the planet doesn’t have a lag between the application of a forcing and the response.
I’m saying that the GISSE model does not contain much of a lag, and the data shows it. For you to try to handwave that away with a “wait 60 years” is interesting tactics, but poor science.
I think GISS Model E just covers the lag issue by assuming that CO2 will always increase.
You don’t need to go back in time and calculate the lagged impact from every daily change in CO2 back to 1700.
You just build in a temperature response per ln(CO2) that simulates the lag response. You need to get to +3.0C by the year 2100, and CO2 rises to 715 ppm by 2100. It just takes a simple module in the model to make that work. The actual monthly temperatures in Model E as a result of GHG forcing seem to follow this principle extremely closely all the way back to the beginning of the simulation. So, if the response is not actually programmed in this way, then the model spontaneously spits that out.
So the 0.3C per W/m2 already incorporates the lag (as long as CO2/GHGs are increasing).
Given what I have seen about what happens to temps after GHGs stop increasing, there is very little lag built into the models. Hansen's 1988 model fully adjusted in 7 years. In IPCC AR4, there is only 0.1C of temperature increase after CO2 stops increasing in 2000 (although it takes 100 years to get there). 0.1C of lag after 100 years is nothing to make special note of.
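A minimal sketch of the ln(CO2) scheme Bill describes; the baseline concentration and per-doubling value here are illustrative choices, not anything read out of Model E:

```python
import numpy as np

S_2X = 3.0    # deg C per doubling of CO2 (illustrative)
C0 = 370.0    # baseline CO2 in ppm, roughly year-2000 (an assumption)

def temp_response(c_ppm):
    """Warming relative to the baseline, logarithmic in concentration."""
    return S_2X * np.log(c_ppm / C0) / np.log(2.0)

print(round(temp_response(715.0), 2))   # ~2.85 C, close to the +3.0 C by 2100 he cites
```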
steven mosher says:
January 17, 2011 at 2:49 pm
If you believe that, Steve, you don’t understand what I have done here at all, so clearly my explanation is not as clear as I had clearly thought …
I am not trying to correlate two datasets to uncover a relationship, as you seem to think. You say my work here is like “correlating temperature to sunspots or movements of the planets”. Let me explain why that is not true.
First, comparing temperature to sunspots is usually done to try to see if there is a relationship, to see if one depends on the other.
But in a climate model, we not only know that the output depends on the inputs, we know that the output depends solely on the known inputs. This makes the game very, very different.
Second, when we look at sunspots and temperature, we don’t know how many variables go into making up that temperature. It might be affected by sun and CO2 and ocean heat content and a host of unknown things.
But in analyzing the climate model, we know exactly how many variables the model uses. It uses ten variables, and we know exactly what their values are.
As a result, I am not doing what you said, examining datasets which may or may not be related to try to tease out a relationship which may or may not exist.
Instead, I am doing a "black box" emulation of a system where a) we know the relationships exist, and b) we know not only the total number of variables involved but the value of each of those variables at every point in time for the 120 year record.
This is not very clear, but if I understand it, again, I am not at all concerned with the “mechanisms” within the climate models. That’s a very different thing than what I am doing. I don’t care in the slightest if their mechanisms are physically based or not, nor am I trying to understand the mechanisms in the model that “connect the independent and dependent variables”. I’m not doing that kind of an investigation.
Instead, I am doing an emulation of a black box where all inputs and outputs are known. This just means that I am trying to find the simplest combination of input variables that gives a result that is the same as (or as near as possible to) the result of the black box. In this case the simplest combination I have found is the adjusted forcing times 0.3. I make no claim that this represents an “understood mechanism”, it is just a very simple (and very accurate) emulation of what the black box is doing. To within a RMS error of 0.05°C, the output of the GISSE model can be emulated by ∆T= 0.3 ∆Q.
I hope you can see the difference between that emulation given known inputs and outputs, and looking to establish some reputed correlation between temperature and some random or not-so-random variable. If not, as always, I’m happy to give another shot at explaining it further.
Joel, as always, good to hear from you. For those that don’t know him, Joel is one of the few AGW-supporting scientists who is willing to stand up for his ideas using his own name, and I applaud him for it. He is a physicist, and nobody’s fool. We disagree often, but please don’t make the mistake of thinking he is unskilled in the world of science, as others have found out to their cost.
IF the GISSE modelled climate (remember that we are not discussing the real climate) gradually "adjusts to the forcing over time", then the latter parts of the record would show a greater increase in temperature (per unit of forcing added) than the earlier parts of the record. But I see no sign of that. I suggest you do some actual work with the numbers to establish your claim, rather than superciliously sending me to read the literature that I've already read.
If the gradual trend to a different long-term climate sensitivity is there, surely it must show up in a 120 year record. Your job, if you want to claim that it is there, is to point it out and tell us how large it is …
That all sounds great, Joel, but it is all handwaving until you can actually show us in the numbers where this is happening. For what you claim to be true, you need to show, not claim but show, that there is a really long exponential lag between the imposition of the forcing and the results of that forcing, and you need to let us know your estimate of what that exponent might be.
One of the effects of such a long delayed response, of course, is that the initial response to the forcing must be much smaller than the eventual response to the forcing … but again I see no sign of that. If you do, point to where, and tell us how you calculated it. I’m not saying it’s not happening. I’m saying I’ve looked, and I can’t find it, so if you want to say it is there, show us. You are claiming that the eventual response is larger than the initial response, which seems like it makes sense … but you’ve neglected to give us the time span for the exponential approach to the equilibrium.
So let’s suppose that the approach to equilibrium is slow, as you seem to think. Let’s suppose that each year the temperature moves say only 5% of the way towards the equilibrium value. That means in the first year of the forcing, we will hardly see any effect at all … perhaps you can point out where that is happening in the record.
In 13 years, it will be halfway to equilibrium. In 27 years, it will be 3/4 of the way there. In 45 years, it will be 90% of the way to equilibrium … but where is the sign of that in the record? For even that slow (5% per year movement towards equilibrium) to be happening, in my analysis it would have to show up as an increasing “effective climate sensitivity” over the course of the 120 year record … but where is it? You can’t simply wave your hands and assert that it is there, Joel. You have to show us where it is. I can’t find it, but that often doesn’t mean much, so bring it on, there’s always more for me to learn …
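For the record, the arithmetic behind those 13/27/45-year figures: if the temperature closes 5% of the remaining gap each year, the fraction of the gap left after n years is 0.95^n, so

```latex
n_f = \frac{\ln(1-f)}{\ln 0.95}
\quad\Rightarrow\quad
n_{1/2} \approx 13.5,\qquad
n_{3/4} \approx 27,\qquad
n_{90\%} \approx 45 \text{ years.}
```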
True but incomplete. You say that “As the temperature rose, this imbalance would decrease but not as fast as the Stefan-Boltzmann Equation would imply because the rise in temperature would cause an increase in water vapor in the atmosphere, which would reduce the effect of this heating up in restoring the radiative balance.” This is true.
However, as the temperature rises, a whole host of other negative feedbacks kick in as well as the positive feedback from water vapor. These negative feedbacks include increases in: cumulus clouds, thunderstorms, wind, albedo, latent heat transport from the surface to the atmosphere, sensible heat transport from the surface to the atmosphere, transport of heat (bypassing the GHGs) to the upper atmosphere via deep tropical convection, transport of cool air and water from the atmosphere to the surface, and the like. All of these mechanisms cool the surface.
It is the net feedback, from the water vapor you mention plus all of the feedbacks you didn’t mention, which counts … and even the modelers will admit when pressed that a) it is likely that the models don’t include all of the feedbacks, and b) it is likely that the models have wrong values for some of the feedbacks that are included, and c) we don’t understand some of the feedbacks (e.g. clouds) enough to even say whether they are negative or positive overall, much less assign a value to them.
As a result, while your explanation is true, it gives a false sense that we understand what is happening in the world of climate feedbacks. We don’t have such an understanding, scientists don’t even agree if the net cloud feedback response to increased temperatures is negative or positive.
This harks back to what I have discussed elsewhere, which is the underestimation of uncertainty from the AGW folks in general. We have nowhere near the amount of confidence in our knowledge of the feedbacks that you portray in your answer. You are not wrong, mind you, what you say is likely true … but it is also very incomplete.
All the best,
w.
Joel Shore says:
January 17, 2011 at 3:08 pm
Then I’m sure you’d have no trouble giving us a citation which shows the size of the exponent, or alternatively the exponential half-life (or e-folding time)? Once we have that, we can see how it applies to the current situation. I agree with you that there is a lag in the real climate system … but the size of the lag in the model is at this point unknown.
Paul_K says:
January 17, 2011 at 5:29 pm
Here you go, have at it …

Other than the errors from the volcanoes (a slight overestimation by my method), I don’t see a lot happening there.
w.
Stephen says:
January 17, 2011 at 9:43 pm
First, I can get to almost exactly the same result by regressing the forcings, not on the GISSE results, but on the actual temperature itself. I strongly doubt that you could do that in your example regarding string theory. So your claim that “there is no way of knowing this would happen” might be true in the case you describe, but it is totally false in this case.
Second, you and Lazy Teenager seem to have missed the part where I said:
You are right when you say that an emulation can only be done once the model is up and running … so what? What difference could that possibly make, to state that you can only emulate a system if there is a system to emulate? That’s a useless tautology.
It doesn’t seem that either you or Lazy understand what I am doing here. I am investigating the implications of the fact that (to a high degree of accuracy) the GISSE model can be replaced by a black box that simply multiplies the input forcings by 0.3. That does not “discredit the original model” as you seem to think I am saying. However, it does imply things about the original model, very interesting things that it might be worthwhile for you to consider.
w.
AusieDan says:
January 17, 2011 at 10:01 pm
I agree, and one of the implications of this study is that the GISSE model is likely not very useful for calculations of the future evolution of the climate.
w.
orkneygal says:
January 18, 2011 at 4:11 am
Yes, some people say that, while others like the backgrounds. I like them myself, for several reasons.
The first is that I think science should be fun and interesting and visually rewarding, not stodgy and boring.
The second is that I don’t mind if people have to study the graph a bit more to find out what it is saying, they may just come away with a better comprehension of what the graph actually says by studying it a bit more deeply.
The third is that when people see one of my graphs, if they’ve seen any of my graphs before they’ll immediately know “who done it” … which in a crowded marketplace of ideas is a huge competitive advantage.
And finally, I put the pictures there to expand the mental arena of the discussion, to constantly remind people that we are not just talking about numbers that we do understand, we’re talking about nature in all of its wondrous complexity and mystery that we don’t understand, and we would be wise to remember that.
So I’m sorry to say, dear lady of Orkney, that despite your lovely and genteel protest, I’ll likely continue my evil ways … and yeah, I know, my wife says the same thing about me that you are probably thinking right now …
w.
kzb says:
January 18, 2011 at 5:15 am
The model algorithms are generally available, but the code is often poorly commented, many-times changed, incomprehensible spaghetti. However, knowing the code doesn’t help for the kind of analysis I’m doing. In general, it is only “by their fruits shall ye know them”. That is to say, the effects of iterative code (like a climate model) can only be seen by running them, they generally can’t be calculated from looking at the code itself.
w.
Willis, by putting in an exponential lag response with a time-scale to be determined (as in your replies to Joel), you will be able to remove your volcano fudge factor. I would say you could try tuning this time-scale parameter in such a way as to allow the full effect of volcanoes. I say this because, and it may be obvious, the volcano forcing is high-frequency spikes, so any kind of time-averaging of the forcing will diminish their effect automatically without need for the fudge factor.
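A sketch of what Jim D is proposing: convolve the full, unreduced forcing with an exponential kernel of e-folding time tau, and tune tau instead of discounting volcanoes. The file names and the value of tau below are placeholders:

```python
import numpy as np

total = np.loadtxt("giss_total_forcing.txt")   # hypothetical: volcanoes at full strength
t_gisse = np.loadtxt("gisse_hindcast.txt")

tau = 3.0                                      # e-folding time in years, to be tuned
n = len(total)
kernel = np.exp(-np.arange(n) / tau)
kernel /= kernel.sum()                         # unit area: equilibrium gain stays 1

lagged = np.convolve(total - total[0], kernel)[:n]   # causal exponential lag
t_emul = 0.3 * lagged                          # same 0.3 C per W/m2 as before

rms = np.sqrt(np.mean((t_emul - t_gisse)**2))
print(f"tau = {tau} yr -> RMS {rms:.3f} C")
# Time-averaging automatically damps the short volcanic spikes, which is
# exactly why the fudge factor might become unnecessary.
```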
I’m having trouble accepting the “lag time” theory.
We experience changes in forcing in the real climate regularly (the seasons); these don't take years to manifest themselves.
Is CO2 forcing somehow supra-special, so that its effects take years to manifest themselves?
Bill Illis says:
Well, I can’t speak to Hansen’s 1988 model, which treated the oceans in pretty primitive ways relative to modern incarnations (and the oceans are really what matter for this issue). But, the IPCC does not show what you claim it does at least anywhere that I can find. In fact, in Section 10.7 of the WG1 report, they say:
I’m not sure how you reached your erroneous conclusion, but perhaps it was from misinterpreting or misremembering this statement in the same section:
Needless to say, 0.1°C per decade is not the same thing as 0.1°C increase by 2100.
Willis Eschenbach says:
Willis, I agree that this is sort of handwaving. I was not attempting to do the work for you but just to point you in the right direction. If I didn’t have over 100 intro physics exams to grade, I might be able to do more of the research to answer your question. Since, alas, I do have these other commitments, I am just trying to point out what I think the issue is and where you can find more discussion of it. One direction was the section of the IPCC AR4 report on transient climate response. Another is the section that I pointed out to Bill Illis on the long term climate commitment. (See, in particular, Figure 10.34, although alas the scale there is not ideal because they are trying to show a lot of things on one graph. I know there are some papers in the literature that look at the “constant composition commitment” scenario in more detail.)
The advantage of models is that one is not constrained by the (estimated) real world forcings, which is what your study of the GISS Model E has addressed so far. One can easily test the models by putting in all sorts of different forcing scenarios. I honestly don’t know if even a simple exponential relaxation model is sufficient to get reasonable emulation of the models or if one has to assume non-exponential relaxation, but certainly exponential relaxation would be better than the instantaneous assumption that you are using now.
As Jim D pointed out, one way to go about estimating things with the current data is to see what kind of exponential relaxation is necessary to get a better fit to the GISS Model E response to volcanic forcings without having to put in your volcano fudge factor. This will give you an estimate of the relaxation timescale, although probably an underestimate because I am pretty sure that the form the relaxation will actually take is non-exponential, with an initial fairly rapidly approach but then a longer-than-exponential tail.
I am not saying the lag is greater than 120 years. The fact is that the net forcings were fairly small over much of that 120 year span and it is only over the past 30 or 40 years that the net forcing has really ramped up.
“”””” Willis Eschenbach says:
January 17, 2011 at 4:18 am
peter_ga says:
January 17, 2011 at 3:31 am
“is that the temperature change in degrees C will be 30% of the adjusted forcing change in watts per square metre (W/m2)”
Does one not usually compare apples and apples using percentages, and not apples and oranges? I stopped reading after this figure. It was too mentally draining.
peter_ga, don't give up so quickly. Orthodox climate theory posits something called the "climate sensitivity". This says that there is a linear relationship between changes in top-of-atmosphere forcing Q (in watts per square metre, or W/m2) and changes in surface temperature T (in °C).
These changes are related by the climate sensitivity S, such that
∆T = ∆Q * S
or “Change in temperature is equal to change in forcing times the climate sensitivity”.
Climate sensitivity has the units of degrees C per W/m2, so all of the units work out, and we are dealing with apples and apples. Or oranges and oranges.
Please note that I do not subscribe to this idea of “climate sensitivity”, I am reporting the mainstream view. “””””
Now you have me totally confused. I was under the impression that "Climate Sensitivity" is defined as the increase in global mean Temperature (presumably the lower troposphere two metre high thing) for a doubling of CO2; thereby enshrining forever the presumption that Temperature is proportional to log of CO2 abundance. That seems to be how the IPCC defines it: 3.0 deg C per doubling, +/-50%.
It seems like everyone who writes on this subject has their own definition of "Climate Sensitivity". How did W/m^2 get into the picture if it is just a CO2 doubling that does it?
I don't believe either the logarithmic bit or the value of the deg C per doubling (which I don't believe in anyway). Going from 280 ppm to 560 ppm CO2 gives the same three degrees of warming that going from one ppm to two ppm gives; preposterous, but correct according to the definition. And the definition doesn't say anything about H2O; just CO2 barefoot.
Joel & SMosher:
I don't know where these 120 yr "lag" times being mentioned are coming from. Ocean (like ENSO) & wind currents may cycle on various timescales — yrs to decades to perhaps even 1000s of yrs.
But that’s completely different from the reaction times/lags to a forcing. That’s determined by mass & the resultant “storage” of heat. A larger mass at a given temp will come into equilibrium over a longer time period after a given forcing.
The ocean is really the only "storage" medium for heat — land & air lack the mass or thermal conductivity. Look at a thermal map of the ocean — it's literally a cold-water tub with an oil-slick thickness of warm water at the top. Most of the ocean mass is well below the earth's avg temperature! That's not a very good "heat storage" mechanism at all. It's actually storing "cold" in relation to the earth's average temperature. And it's stratified/isolated from the warm water above, except where upwelling occurs.
So the only significant heat-storage is the first few hundred meters of ocean — on the scale of paint-thickness on a toy globe. Global pulse-forcings like Pinatubo have demonstrably shown transient responses of only 0.6 yrs! Equilibrium in a mere 2.5 yrs. That's all. Much bigger volcanoes would have longer response lags, but not much — maybe a decade for an instantaneous super-volcano.
What’s it all mean? It means one can toss out all the “heat in the pipeline” arguments. And toss out the 120 yr “effects” down the road. And that what one sees right now from CO2 is what one gets. The yearly increase in CO2 is only a few ppm, so considering the earth’s quick response, particularly to such a small incremental forcing, means there is no significant lag to human-emitted CO2.
Changing ocean currents & other cycles are a different, separate issue to forcing/response time issues. Now, if someone wants to venture that CO2 causes ocean-currents changes & such, that’s stretching beyond belief at this point in our understanding.
Willis;
Your reduction of the entire GISSE model to a single multiplication has an interesting implication:
If a model consists of linear equations, it can always be reduced to a single arithmetic operation in the end. The only value of the model/equation set is to discover what that operation is.
You have done so with GISSE, so its purpose is now achieved, and it can be retired.
🙂
😉
George E. Smith says:
People use the term in a few different ways. The more fundamental definition of climate sensitivity, the one that holds for any sort of forcing, is in terms of degrees Celsius per (W/m^2). When you apply this to the forcing due to a doubling of CO2 (which basically everyone, from Richard Lindzen and Roy Spencer to the climate scientists who support the consensus view on AGW, agrees is ~4 W/m^2), you get the number for a CO2 doubling. In particular, 3 deg C for a doubling corresponds to roughly 0.75 C per (W/m^2).
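Spelled out:

```latex
S = \frac{3\,^{\circ}\text{C per doubling}}{4\ \text{W/m}^2\ \text{per doubling}}
  = 0.75\ ^{\circ}\text{C per W/m}^2.
```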
George, I know people have explained this to you countless times here: Before you can choose to believe or not believe something, it is best to at least understand what it is you are choosing to believe or disbelieve. The logarithmic bit refers to the fact that the radiative forcing due to increased CO2 increases approximately logarithmically in the concentration regime we are in. It is not a law of nature…It is just an empirical fit that works pretty well in said regime. At lower concentrations, it becomes more like linear in CO2, I believe…and at higher concentrations than the current regime, it transitions to something that is more like a square root dependence (at least for a while). This has to do with which absorption bands are contributing the most to the radiative forcing effect and what regime one is in for those particular bands (saturated in the center but not in the wings, …).
Also, in going from concentration to the effect on global average temperature, one also has to consider how the climate sensitivity [i.e., the number in C per (W/m^2)] varies with the climate state. So, that is an additional factor that comes into play. As I understand it, the current thinking is that it is not strongly dependent on the climate state, at least in the general regime that we are currently in.