Guest Post by Willis Eschenbach
In my earlier post about climate models, “Zero Point Three Times The Forcing”, a commenter provided the breakthrough that allowed the analysis of the GISSE climate model as a black box. In a “black box” type of analysis, we know nothing but what goes into the box and what comes out. We don’t know what the black box is doing internally with the input that it has been given. Figure 1 shows the situation of a black box on a shelf in some laboratory.
Figure 1. The CCSM3 climate model seen as a black box, with only the inputs and outputs known.
A “black box” analysis may allow us to discover the “functional equivalent” of whatever might be going on inside the black box. In other words, we may be able to find a simple function that provides the same output as the black box. I thought it might be interesting if I explain how I went about doing this with the CCSM3 model.
First, I went and got the input variables. They are all in the form of NetCDF (“ncdf”) files, a standard format that contains both data and metadata. I converted them to annual or monthly averages using the computer language “R”, and saved them as text files. I opened these in Excel, and collected them into one file. I have posted the data up here as an Excel spreadsheet.
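The monthly-to-annual reduction itself is simple. A minimal sketch in Python with synthetic stand-in data (the post's actual processing was done in R on the NetCDF files; the values below are hypothetical):

```python
import numpy as np

# Hypothetical stand-in for a monthly series read from a NetCDF file:
# 10 years of monthly values.
monthly = np.arange(120, dtype=float)

# Collapse each block of 12 months into one annual mean.
annual = monthly.reshape(-1, 12).mean(axis=1)

print(len(annual))  # one value per year
```

The same reshape-and-average idea applies regardless of which variable is being reduced, as long as the series starts in January and covers whole years.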
Next, I needed the output. The simplest place to get it was the graphic located here. I digitized that data using a digitizing program (I use “GraphClick”, on a Mac computer).
My first procedure in this kind of exercise is to “normalize” or “standardize” the various datasets. This means adjusting each one so that its average is zero and its standard deviation is one. I use the Excel function “STANDARDIZE” for this purpose. This allows me to see all of the data at a common scale. Figure 2 shows those results.
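The same standardization can be sketched outside Excel. A minimal Python equivalent of STANDARDIZE, with made-up data:

```python
import numpy as np

def standardize(x):
    """Mirror Excel's STANDARDIZE: shift to zero mean, scale to unit std.
    Excel uses the sample (n-1) standard deviation, hence ddof=1."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

z = standardize([2.0, 4.0, 6.0, 8.0])
```

After this transform every series has mean zero and standard deviation one, so visually comparing their shapes on one plot is meaningful.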
Figure 2. Standardized forcings used by the CCSM 3.0 climate model to hindcast the 20th century temperatures. Dark black line shows the temperature hindcast by the CCSM3 model.
Looking at that, I could see several things. First, the CO2 data has the same general shape as the sulfur, ozone, and methane (CH4) data. Next, the effects of the solar and volcano data were clearly visible in the temperature output signal. This led me to believe that the GHG data, along with the solar and the volcano data, would be enough to replicate the model’s temperature output.
And indeed, this proved to be the case. Using the Excel “Solver” function, I used the formula which (as mentioned above) had been developed through the analysis of the GISS model. This is:
T(n+1) = T(n) + λ ∆F(n+1) * (1 - exp( -1 / τ )) + ΔT(n) * exp( -1 / τ )
OK, now let’s render this equation in English. It looks complex, but it’s not.
T(n) is pronounced “T sub n”. It is the temperature “T” at time “n”. So T sub n plus one, written as T(n+1), is the temperature during the following time period. In this case we’re using years, so it would be the next year’s temperature.
F is the forcing, in watts per square metre. This is the total of all of the forcings under consideration. The same time convention is followed, so F(n) means the forcing “F” in time period “n”.
Delta, or “∆”, means “the change in”. So ∆T(n) is the change in temperature since the previous period, or T(n) minus the previous temperature T(n-1). ∆F(n), correspondingly, is the change in forcing since the previous time period.
Lambda, or “λ”, is the climate sensitivity. Tau, or “τ”, is the lag time constant, which sets the amount of lag in the response of the system to forcing. Finally, “exp(x)” means the number e ≈ 2.71828 raised to the power of x.
So in English, this means that the temperature next year, or T(n+1), is equal to the temperature this year, T(n), plus the immediate temperature increase due to the change in forcing, λ ∆F(n+1) * (1 - exp( -1 / τ )), plus the lag term ΔT(n) exp( -1 / τ ) from the previous forcing. This lag term is necessary because the effects of changes in forcing are not instantaneous.
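The recursion is short enough to transcribe directly. A sketch in Python, using illustrative values for λ and τ rather than the fitted ones:

```python
import numpy as np

def emulate(forcing, lam, tau, T0=0.0):
    """Step the lagged linear emulator:
    T(n+1) = T(n) + lam*dF(n+1)*(1 - exp(-1/tau)) + dT(n)*exp(-1/tau)
    """
    a = np.exp(-1.0 / tau)
    T = np.zeros(len(forcing))
    T[0] = T0
    dT = 0.0
    for n in range(len(forcing) - 1):
        dF = forcing[n + 1] - forcing[n]
        dT = lam * dF * (1.0 - a) + dT * a  # immediate response plus decaying lag
        T[n + 1] = T[n] + dT
    return T

# A unit step in forcing relaxes toward lam * dF, as expected for a lagged response.
F = np.concatenate([np.zeros(5), np.ones(100)])
T = emulate(F, lam=0.3, tau=3.0)
```

With λ = 0.3 and a 1 W/m2 step, the temperature approaches 0.3 over a few multiples of τ, which is the behavior the lag term is there to produce.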
Figure 3 shows the final result of that calculation. I used only a subset of the forcings, which were the greenhouse gases (GHGs), the solar, and the volcanic inputs. The size of the others is quite small in terms of forcing potential, so I neglected them in the calculation.
Figure 3. CCSM3 model functional equivalent equation, compared to actual CCSM3 output. The two are almost identical.
As with the GISSE model, we find that the CCSM3 model also slavishly follows the lagged input. The match once again is excellent, with a correlation of 0.995. The values for lambda and tau are also similar to those found during the GISSE investigation.
So what does all of this mean?
Well, the first thing it means is that, just as with the GISSE model, the output temperature of the CCSM3 model is functionally equivalent to a simple, one-line lagged linear transformation of the input forcings.
It also implies that, given that the GISSE and CCSM3 models function in the same way, it is very likely that we will find the same linear dependence of output on input in other climate models.
(Let me add in passing that the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975 … as did the GISSE model.)
Now, I suppose that if you think the temperature of the planet is simply a linear transformation of the input forcings plus some “natural variations”, those model results might seem reasonable, or at least theoretically sound.
Me, I find the idea of a linear connection between inputs and output in a complex, multiply interconnected, chaotic system like the climate to be a risible fantasy. It is not true of any other complex system that I know of. Why would climate be so simply and mechanistically predictable when other comparable systems are not?
This all highlights what I see as the basic misunderstanding of current climate science. The current climate paradigm, as exemplified by the models, is that the global temperature is a linear function of the forcings. I find this extremely unlikely, from both a theoretical and practical standpoint. This claim is the result of the bad mathematics that I have detailed in “The Cold Equations“. There, erroneous substitutions allow them to cancel everything out of the equation except forcing and temperature … which leads to the false claim that if forcing goes up, temperature must perforce follow in a linear, slavish manner.
As we can see from the failure of both the GISS and the CCSM3 models to replicate the post 1945 cooling, this claim of linearity between forcings and temperatures fails the real-world test as well as the test of common sense.
w.
TECHNICAL NOTES ON THE CONVERSION TO WATTS PER SQUARE METRE
Many of the forcings used by the CCSM3 model are given in units other than watts/square metre. Various conversions were used.
The CO2, CH4, N2O, CFC-11, and CFC-12 values were converted to W/m2 using the various formulas of Myhre as given in Table 3.
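Of those simplified expressions, the CO2 one is the most widely quoted. A sketch (the 5.35 coefficient is the commonly cited value from Myhre et al. 1998; the CH4, N2O, and CFC expressions take different functional forms and are not reproduced here):

```python
import math

def co2_forcing(C, C0=280.0):
    """Simplified Myhre expression for CO2 forcing in W/m2.
    C and C0 are concentrations in ppm; C0 is the reference level."""
    return 5.35 * math.log(C / C0)

# Doubling CO2 gives the familiar ~3.7 W/m2.
dF_2x = co2_forcing(560.0)
```

The logarithmic form is why each successive doubling of CO2 contributes roughly the same forcing.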
Solar forcing was converted to equivalent average forcing by dividing by 4.
The volcanic effect, which CCSM3 gives in total tonnes of mass ejected, has no standard conversion to W/m2. As a result we don’t know what volcanic forcing the CCSM3 model used. Accordingly, I first matched their data to the same W/m2 values as used by the GISSE model. I then adjusted the values iteratively to give the best fit, which resulted in the “Volcanic Adjustment” shown above in Figure 3.
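The iterative adjustment can be sketched as a simple grid search over a volcanic scaling factor, minimizing squared error against the target series. The data below are synthetic stand-ins, not the actual CCSM3 forcings or output:

```python
import numpy as np

def emulate(forcing, lam, tau):
    """Lagged linear emulator from the post, starting at T(0)=0."""
    a = np.exp(-1.0 / tau)
    T = np.zeros(len(forcing))
    dT = 0.0
    for n in range(len(forcing) - 1):
        dT = lam * (forcing[n + 1] - forcing[n]) * (1.0 - a) + dT * a
        T[n + 1] = T[n] + dT
    return T

# Synthetic inputs: a smooth GHG ramp plus three "eruptions" of unknown scale.
ghg = np.linspace(0.0, 2.5, 100)
volc = np.zeros(100)
volc[[20, 55, 80]] = -3.0

# Pretend the "model output" was generated with a volcanic scaling of 0.6 ...
target = emulate(ghg + 0.6 * volc, lam=0.3, tau=3.0)

# ... then recover that scaling by grid search on the sum of squared errors.
scales = np.linspace(0.0, 1.5, 151)
sse = [np.sum((emulate(ghg + s * volc, 0.3, 3.0) - target) ** 2) for s in scales]
best = scales[int(np.argmin(sse))]
```

A real fit would adjust λ and τ at the same time (as Excel's Solver does), but the principle is the same: vary the unknown scaling until the emulator best matches the target.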
[UPDATE] Steve McIntyre pointed out that I had not given the website for the forcing data. It is available here (registration required; the file is a couple of gigabytes).
Joel Shore says:
“More nonsense. You can point a radiation detector up at the sky and detect the back-radiation.”
So, what is the contribution to the greenhouse effect of all the O2 and N2 emitting back-radiation then? 🙂
The laws of heat transfer apply to radiation as well as to conduction. A cold object does not raise the temperature of a warmer object, even though it conducts with it and may even diffuse into it. Likewise, a colder object will not raise the temperature of a hotter object by radiation, even though it exchanges radiation with it.
“Furthermore, engineers use the same radiative transfer equations that scientists are using for the greenhouse effect in thousands of real world calculations.”
LOL. No, they do not, for if they did, we would have free-energy heating systems. Put 240 W/m2 in, get 480 W/m2 out. And passively. Sorry, I don’t think so.
“What are you talking about? Who says different gases have different lapse rates. The molecules in the air rapidly thermalize. There are no different lapse rates.”
What are YOU talking about? Lapse rate depends on thermal capacity and that is specific to each gas! Why in the world would you say something like this?
“And, the reason that this discrepancy between what the surface emits and what the earth / atmosphere system as seen from space emits can be supported is because of the greenhouse effect.”
No, it is a natural consequence of any atmosphere. Even a purely inert atmosphere of Helium would be warmer on the bottom than the top, the radiative emission average would be somewhere in the middle, and the ground-air average temperature would be warmer than the mathematical spherical average.
Joel Shore
“In terms of the relationship between forcings (mainly changes in albedo and greenhouse gas concentrations) and temperature: yes, quite impressively. See http://arxiv.org/abs/1105.1140 (Fig. 4). ”
Nonsense. There is nothing impressive about a short time span ‘model’ which claims to find an accumulating heat content in the ocean which cannot be validated.
In particular, this ‘reference’ says absolutely nothing about why ~120,000 years ago the global sea level was 4 – 6 m higher than now at a time when atmospheric CO2 levels were still significantly lower than now. It’s as simple as that.
“The basics of the timing of these changes in terms of Milankovitch osccilations (sic) leading to the buildup or melting of land ice is also largely understood, even if some details are still fuzzy. And, there are still some question about the exact mechanism that triggers the observed rise in the greenhouse gases, although again there is a rough understanding of why this would occur.”
Again nonsense. In terms of the state-of-the-art of GCMs the details are still so very very fuzzy they are not even included!
You are just spouting juvenile generalisations which don’t mean much at all.
There is no GCM which includes the (Milankovitch) eccentricity, tilt and precessional forcings, which, perhaps in combination with TSI and geomagnetic field cyclicity, produce a reduction in NH insolation sufficient to cause the polar ice sheets to grow outwards for a period of ~100,000 years, even in the absence of a significant change in GHG forcing. The ice record clearly shows CO2 changes initiate neither an interglacial nor a glacial.
Where is the GCM which is consistent with the 10Be ice record of solar activity over the last 9300 years?
They can’t even explain frequency or phase changes of ENSO or PDO cycles for goodness sake!
Postma said (May 16, 2011 at 2:55 pm)
The Laws of heat transfer apply to radiation as well as to conduction … a colder object will not raise the temperature of a hotter object by radiation, even though it exchanges radiation with it.
Nothing written here will change Postma’s view on this. Others may find The Science of Doom interesting, entertaining and useful:
The First Law of Thermodynamics Meets the Imaginary Second Law
Postma says:
If this statement is about a body that is not receiving energy or generating thermal energy, then it is correct but irrelevant. If the statement is about an object like the earth getting energy from the sun (or the human body generating thermal energy from chemical energy) then it is incorrect.
Get real. Do you have any data to back up this claim? Those molecules in isolation don’t have emission / absorption lines in the mid and far-IR. The only way that emission / absorption can happen is via collisional processes, which give very small contributions at earth’s atmospheric pressures.
Here is a paper discussing the measurement of such absorption lines at pressures of 0 to 10 atmospheres: http://www.opticsinfobase.org/view_article.cfm?gotourl=http%3A%2F%2Fwww.opticsinfobase.org%2FDirectPDFAccess%2FBC73981D-EC8E-6BE9-5F66346A49A16C1A_60399.pdf%3Fda%3D1%26id%3D60399%26seq%3D0%26mobile%3Dno&org=Rochester%20Institute%20of%20Technology
Note that even for the strongest absorption line, the measurements rely on obtaining ultra-high purities of nitrogen because any small contamination by CO or CO2 overwhelms the measurement:
So, apparently only 1 part per million of CO2 was, along with the CO, enough to render the measurements useless. Imagine what 380 parts per million does! Furthermore, as this graph shows http://www.learner.org/courses/envsci/visual/img_med/electromagnetic_spectrum.jpg , the absorption line of N2 that they are talking about, which is at 4.3 um, would already place it quite far out in the wings of the terrestrial radiation spectrum.
What? It would not be 87 C unless you believe that the only thing that matters is the local radiation intensity. In the absence of IR absorption in the atmosphere, what energy balance constrains is the average of the T^4 over the surface of the earth (really the emissivity times T^4 but the emissivity of most terrestrial surface…or most surfaces period…in the IR is very close to 1).
No…That’s not basic conservation of energy. That’s nonsense. The temperature is determined by energy balance considerations. Of course Venus absorbs enough energy to get up to 700 C. It absorbs enough energy to get to an arbitrarily high temperature if it never emitted any energy back out into space. Of course, that’s a counterfactual because it can’t not emit energy back into space. However, the only specific limit that I know of on its temperature is that it could never get hotter than the sun that heats it.
Postma says:
Again, this statement is either true but irrelevant if you are talking about a cold object and a warm object with no other source of energy OR it is false if you are talking about a case, such as the sun, earth, and atmosphere, where you have to compute the radiative balance of the system.
The greenhouse effect is no big mystery: In the absence of an IR-absorbing atmosphere, all the radiation emitted by the earth goes back out into space and the earth’s surface temperature is determined by the balance of what it receives from the sun and what it emits back out into space. In the presence of an IR-absorbing atmosphere, some of the radiation that it emits finds its way back to the earth and hence, for a given surface temperature, the net heat flow away from the earth is reduced. The earth’s temperature must rise until radiative balance is restored.
“Furthermore, engineers use the same radiative transfer equations that scientists are using for the greenhouse effect in thousands of real world calculations.”
You are smart enough that you can’t possibly believe this nonsense. Why do you continually confuse the case of an object with no source of thermal energy with the earth that is receiving energy from the sun?
That notion does not even obey conservation of energy. An atmosphere transparent to IR radiation would allow all the radiation from the surface of the earth to go out into space. At its present surface temperature, the earth would be emitting much more than it absorbs from the sun and would rapidly cool.
Postma says:
If you really believe this is true in any relevant way (i.e., to an object like the earth receiving energy from the sun…or an object generating its own thermal energy), explain to me how the examples that we have presented in Sections 2.2 and 2.3 of our comment on G&T are wrong: http://scienceblogs.com/stoat/upload/2010/05/halpern_etal_2010.pdf
Ecoeng says:
Did you miss the “Fig. 4” part. You seem to spend your entire post going off on irrelevant tangents.
The CO2 levels have only shot up to their current values in the last 100 years. Do you think the sea level instantaneously adjusts to the forcings?
Postma says:
Somehow I missed this priceless comment. You are telling me that gases in the atmosphere don’t thermalize through collisions but instead the different constituent gases are at different temperatures? I really think I need you to expound on this one a bit more.
Joel Shore, regarding the Hansen paper, it says:
So the model that Hansen admits contains large errors, and which is forced into balance by adjusting a single parameter, shows an imbalance?
Perhaps that impresses the credulous, Joel, but the idea that we can measure (or “infer” as Hansen would have it) the forcing imbalance to the nearest five hundredths of a degree is risible to me. NASA cites Hansen’s work when they say:
And Hansen himself says (pdf):
And despite the uncertainties in the observations being too large to even compare them with the models, and despite acknowledged errors as high as 50 W/m2 in some areas, Hansen says the models are accurate to the nearest five hundredths of a degree, and that the error value is 0.15 W/m2?
Joel, if you believe that, you’ll have to tell me why. The “balance” in the GISSE model is established by tuning a single parameter … and yet you believe it is meaningful? Hansen says (PDF):
You’ll have to explain that one, because I don’t see it. If you are tuning for balance, rather than actually calculating for balance, then any outcome is possible when you subsequently change the forcings—it all depends on why and how much it was out of balance in the first place, and which way the parameter dial was turned to balance it out.
They can only succeed in tuning the model to be within half a W/m2 of balance, and then they proceed to claim an error of 0.15 W/m2 …. WUWT, as they say?
w.
Joel Shore
“The CO2 levels have only shot up to their current values in the last 100 years. Do you think the sea level instantaneously adjusts to the forcings?”
No I don’t. Only those who can’t recognise cheap shots when they see them might think that. But I do think someone has to come up with GCMs which can, for example:
Explain what forcings or (even just linear) combinations of forcings could produce a sea level 4 – 6 m higher than now at much lower CO2 levels than today, not once, but repeatedly through the Pleistocene.
Explain the good proxy temperature and 10Be records of the last 9300 years, again at much lower CO2 levels than now, and thus ordinary things like why (say) the Inca were growing maize on little terraces (visitable to this day) halfway up the sides of mountains, thousands of feet above where it can be grown now.
Your comments betray all the classic traits of the AGW hubris, firmly rooted in a naive, post-modernist belief that a complex, chaotic and partially non-equilibrium thermodynamic system is either reliably predictable and/or such predictions can somehow be imposed by act of will. Laughable at a time when even the science of large-scale non-equilibrium thermodynamics is new.
I’m sorry, this is a signal processing sleight of hand. What you have done is fit parameters of a convolution. This will inevitably lead to a correlation which is spurious. In fact you have performed one of the most basic statistical errors in attempting to test the significance of data that is used to construct a hypothesis.
I suggest you read Bendat & Piersol or Papoulis.
Willis,
You and Ecoeng are going off on all of these tangents. I am not discussing all of Hansen’s paper…or even the main thesis of Hansen’s paper. I am simply pointing to one figure in it, Figure 4, that addresses this question / comment:
The point is that, yes, the ice core temperature record does show what appears to be a linear relationship between the forcings, as determined from greenhouse gas levels and sea levels (to get the albedo forcing), and the temperature.
Ecoeng says:
And yet, there are some things that can reliably be predicted. I can reliably predict that the average temperature here in Rochester in July will be roughly 25 C warmer than in January, even if I can’t predict the weather on any particular day.
Is there the possibility of some surprises as we embark on our little “experiment” with the earth’s climate system? Absolutely…But such surprises seem more likely to be unpleasant than pleasant.
“Entia non sunt multiplicanda praeter necessitatem”
(“Entities should not be multiplied more than necessary”).
Very simplistic Willis (in a good way) Nice!
netdr, the ensemble average is steadily 0.2 C per decade. Individual members may vary from zero to 0.4 C per decade. The current climate is within the ensemble. Using the smooth ensemble projection to compare with the observations, which behave like an individual noisy member, is a common source of confusion here.
Willis Eschenbach says:
Now that I have read through your diatribe on Hansen’s paper (which I didn’t do until now simply because it wasn’t relevant to the tiny piece of the paper that I was even referring to), I have to admit that I am pretty confused. As near as I can tell, Hansen’s estimate of the energy imbalance that you quote is based on various pieces of empirical data…The nearest they come to using a model for any of the pieces is a 1-d conduction equation that they use to estimate the very small land contribution. (They do then do some comparisons to Model E calculated values for the imbalance, but that is in fact to show that they think the model may have too slow a response because of the way that mixing into the deep ocean is handled.)
My last comment was incomplete and partially wrong.
What has been done here is NOT a test of linearity. Instead, a first-order linear step method has been applied, which stems from Taylor’s theorem and is the basis of Euler’s method for solving a differential equation. It says nothing about the linearity of the process involved.
I know, Mr Eschenbach, that you pride yourself on being a generalist, but should you not try and learn some basic mathematics and physics before presenting ideas like this?
RC Saumarez: it’s easy to fill one’s mouth and keyboard with big-sounding names. Please explain to us poor ignoramuses what you know of “Taylor’s theorem”…
As I’ve already said, the point is not that one can approximate the model’s output. The point is that one can approximate it so well AND as simply as Willis did.
@Maurizio Morabito
I’m sorry that you feel I’ve filled my mouth with big-sounding names. Taylor and Euler were 18th-century mathematicians who laid the ground for a large body of computational mathematics.
Taylor’s theorem is quite simple:
If you know the value of a function, say temperature T, at a time t, can you establish the temperature at a later time t + delta_t?
Taylor’s theorem states that:
T(t+delta_t) = T(t) + delta_t*T’ + delta_t^2*T’’/2! + delta_t^3*T’’’/3! + …
where T’, T’’, T’’’ are the first, second and third derivatives of T with respect to t.
The point about this theorem is that for small values of delta_t, a linear approximation is valid, and this is the basis for the solution of differential equations by (crude) multistep methods.
This is essentially what has been done in the analysis presented here and does not imply that the system is linear. Rather, it is an assumption that arises from the use of Taylor’s theorem in the numerical solution of a first order differential equation.
To be rigorous, a linear system must show:
a) Proportionality, i.e.: if you put in twice the input, you get twice the output.
b) Stationarity: the system does not change its properties over time.
c) Superposition, i.e.: if the response to x(t) is y(t) and the response to xx(t) is yy(t), then the response to x(t) + xx(t) is y(t) + yy(t).
There is nothing in the analysis presented here that distinguishes the difference between the criteria of a truly linear system and the fact that a differential equation can be solved numerically by using the first term of Taylor’s theorem, i.e: Euler’s method.
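For what it's worth, the emulator equation itself does satisfy the criteria listed above, since it is linear in F for fixed λ and τ. A quick numerical check, sketched in Python with arbitrary random inputs:

```python
import numpy as np

def emulate(F, lam=0.3, tau=3.0):
    """The lagged recursion from the head post, starting from T(0)=0."""
    a = np.exp(-1.0 / tau)
    T = np.zeros(len(F))
    dT = 0.0
    for n in range(len(F) - 1):
        dT = lam * (F[n + 1] - F[n]) * (1.0 - a) + dT * a
        T[n + 1] = T[n] + dT
    return T

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = rng.normal(size=50)

# a) Proportionality: doubling the input doubles the output.
prop_ok = np.allclose(emulate(2 * x), 2 * emulate(x))
# c) Superposition: the response to a sum is the sum of the responses.
sup_ok = np.allclose(emulate(x + y), emulate(x) + emulate(y))
```

This establishes linearity of the emulator only; Saumarez's point that it says nothing about the linearity of the underlying climate system is a separate question.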
I regard this whole approach as superficial and wrong. If the mathematics of climate models is to be addressed, I would argue that this is better done by people with a proper background in the relevant mathematics and experience in its use.
@JoelShore “the emissivity of most terrestrial surface…or most surfaces period…in the IR is very close to 1”
I am bothered by the use of albedo and emissivity in an apples-vs-oranges way. It is generally accepted that an albedo very close to 0.3 reduces sunlight absorption. No problem with that. We can understand that measurement of albedo from space is consistent with this. But quoting an emissivity of 1 assumes the land surface is the emitter. From space, the emitter is the earth including its atmosphere (clouds). Clearly the emissivity of the earth as a whole, as seen from space, is not 1 (a blackbody); it is less, as for a greybody.
As I understand Kirchhoff’s Law (of thermal radiation), it applies at equilibrium: even a greybody cannot absorb more than it emits, or vice versa. One can’t use a greybody for absorption and a blackbody for emission when considering the energy of the earth in radiative thermal equilibrium with the sun.
RC Saumarez – even Wikipedia shows how a simple non-linearity can’t be much reduced to any number of Taylor components.
Once again: what really stinks is how easy it was to use just the first term. Add to that the fact that it can be easily referred to the original “forcings” and there is something quite rotten in Climatemark.
Joel Shore states, “Is there the possibility of some surprises as we embark on our little “experiment” with the earth’s climate system? Absolutely…But such surprises seem more likely to be unpleasant than pleasant.”
The models predict much that is unpleasant. Thus far the observations have been, on the whole, pleasant, even if difficult to ascribe to CO2.
The decadal surprise has been that since 1998 the atmosphere has slightly cooled.
BLouis79 says:
Kirchhoff’s Law says that absorptivity and emissivity have to be equal at a given wavelength. The emission spectra of the sun and the earth are very different, with an extremely small overlap. In the visible, UV, and near-IR of the solar radiation, the earth’s surface is not that close to a perfect blackbody (although it isn’t that far away: it’s something like 12% reflectance). In the mid- and far-IR, it much more closely approximates a blackbody. Take snow, for example: it can be a very good reflector in the visible but is nearly a perfect absorber at the wavelengths of significance for terrestrial radiation.
David says:
Actually, that is not in fact true anymore. Furthermore, it is not a surprise that one can cherry-pick periods over which the trend is not very different from zero or is even negative, especially as one makes the length of the period shorter. The same thing is seen for climate models that are forced with steadily-increasing greenhouse gases. The shorter the period of time, the larger the error bars on the trend estimate.
Joel Shore says:
May 17, 2011 at 6:51 pm
Joel, my apologies. You are 100% correct. I misunderstood the paper, I thought it was an updated version of Hansen’s earlier “smoking gun” paper. I’m juggling too many balls at times.
In the current paper, he seems to have shown that the earth is warming. Of course, being an AGW-supporting scientist, he can’t say “the earth is warming”. He gives it a much scarier name, the “EARTH’S ENERGY IMBALANCE”. This makes it sound like the Earth was in balance, but now humans have thrown it out of balance.
In fact, by and large the earth has always been either warming or cooling. This means that the feared “EARTH’S ENERGY IMBALANCE” could be much more honestly described as the normal state of affairs.
Now, Hansen claims in the title (“Earth’s Energy Imbalance and Implications”) that there are “implications” in the fact that the Earth is not in energy balance. Unfortunately, since it has rarely been in balance in the past, what are the “implications” in the fact that it is not in balance now?
He claims that the implications reside in the claim that the imbalance is now due to GHGs, as shown by … why, the GISSE model. The one that’s out by dozens of W/m2. The one that’s tuned to a warming planet. That’s what gives us the “implications”.
Which is an interesting hypothesis, but there is scanty evidence to establish it as reality based.
We’re back to the question of the null hypothesis. You guys need to show that the energy imbalance today is somehow different from that of the past. Otherwise, it’s just another energy imbalance, commonly known as “the globe warms and cools” … are you surprised? Because I’m not.
Near as we can tell, the globe has been warming (in fits and starts) for about 300 years. It seems to warm for a while, and then run level or even drop a bit for a while, and then warm for a while, and so on. Late 1800s were kind of level, it warmed to about 1945, cooled or stayed level to about 1975, warmed again to 1998, and has stayed about level since then.
Perhaps you see the hand of doom with implications of Thermageddon in that record. I simply don’t see it. Same old energy imbalance we’ve had for the last couple centuries.
You also say:
Joel Shore:
May 17, 2011 at 4:56 am
Huh? There is a linear (basically) relationship between the CO2 and the temperature during the ice ages. However, since the CO2 lags the temperature, this is generally accepted as reflecting the outgassing/absorption of CO2 by the world’s oceans from changes in the ocean’s temperature. The linear relationship is there … but the causation is going the other way round.
Regarding the change in the sea ice albedo, yes, we can estimate that. But since we can’t even begin to guess at what was happening with the cloud albedo 50,000 years ago, how is the equation solvable? There’s a huge missing term, and we certainly can’t assume it is constant.
Finally, none of these proposed mechanisms explain the extraordinary stability of the temperature record during the glacials and the interglacials. Each interglacial has a temperature within ± one percent or so of the other interglacials. Each ice age has a temperature within ± one percent or so of the other ice ages. The earth’s temperature has only varied by less than ±1% during the Holocene.
For me, that clearly shows the existence of “preferred states” in the temperature of the planet. It demonstrates that there are homeostatic mechanisms in operation which constrain the variations in planetary temperature to a very narrow range in either the ice ages or interglacial periods.
And it is those homeostatic mechanisms which destroy the linearity so prized by … well … linear thinkers.
As a result, any claim of a demonstration of linearity of forcings and temperatures during the ice ages and interglacials is certainly resting on very shaky pins, except as regards CO2, which is linearly exhaled or inhaled by the ocean as the temperature changes …
Always good to hear from you,
w.
@JoelShore “Kirchhoff’s Law says that absorptivity and emissivity have to be equal at a given wavelength.”
If you take that strictly to be true (as a variation on the looser Kirchhoff’s Law, which is happy for the integrated absorptivity and emissivity to be equal across a range of wavelengths), then how can the earth absorb as a blackbody with albedo 0.3 from outside the atmosphere and emit as a “nearly black” greybody from the surface inside the atmosphere, without creating a big hole (“energy imbalance”) caused by the clouds and atmosphere being accounted for in absorption but neglected in emission?
Simplistically, the surface has to emit more, causing warming nearer the surface, in order for radiative equilibrium to occur above the clouds after the clouds have absorbed some of the outgoing radiation.
BLouis79 says:
Yes…And, not just clouds but also greenhouse gases absorb (and re-emit) outgoing terrestrial radiation. The fact that the surface must be warmer than in the case when there is no such absorption is what is called “the greenhouse effect”.