Roger Tattersall (aka Tallbloke) writes on his blog about a WUWT comment. Unfortunately WUWT gets so many comments a day that I can’t read them all (thank you, moderators, for the help). Since he elevated Dr. Robert Brown’s comment to a post, it seems only fair that I do the same.
I saw this comment on WUWT and was so impressed by it that I’m making a separate post of it here. Dr. Brown (a physicist at Duke University) quotes another commenter and then gives us all an erudite lesson. If Nikolov and Zeller feel they need to take seriously any of the complaints on WUWT about the way they handle heat distribution from the day side to the night side of the Earth, they should study this post carefully. This is also highly relevant to the reasons why Hans Jelbring used a simplified model for his paper; please see the new PREFACE added to his post for further elucidation.
———————————————————————————-
I can’t speak for your program, but I will stand by mine for correctly computing the ‘mean effective radiative temperature’ of a massless gray body as a perfect radiator. Remember, there is no real temperature in such an example, for there is no mass. It takes mass to even define temperature. (But most climate scientists have no problem with it, and therefore they are all wrong, sorry.)
I’d like to chime in and support this statement, without necessarily endorsing the results of the computation (since I’d have to look at code and results directly to do that:-). Let’s just think about scaling for a moment. There are several equations involved here:
P = (4 \pi R^2) \epsilon \sigma T^4

is the total power radiated from a sphere of radius R at uniform temperature T. \sigma is the Stefan-Boltzmann constant and can be ignored for the moment in a scaling discussion. \epsilon describes the emissivity of the body and is a constant of order unity (unity for a black body, less for a “grey” body; more generally still, a function of wavelength and not a constant at all). Again, for scaling we will ignore \epsilon.
Now let’s assume that the temperature is not uniform. To make life simple, we will model a non-uniform temperature as a sphere with a uniform “hot side” at temperature T + dT and a “cold side” at uniform temperature T – dT. Half of the sphere will be hot, half cold. The spatial mean temperature, note well, is still T. Then:
P’ = (4 \pi R^2) \epsilon \sigma ( 0.5 (T + dT)^4 + 0.5 (T − dT)^4 )
is the power radiated away now. We only care how this scales, so we: a) Do a binomial expansion of P’ to second order (the first order terms in dT cancel); and b) form the ratio P’/P to get:
P’/P = 1 + 6 (dT/T)^2
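For readers who want to check this scaling relation, here is a short sketch (the variable names and the sample values of 300K and 10K are illustrative, taken from the estimate discussed below):

```python
# Exact two-hemisphere ratio P'/P versus the second-order scaling
# 1 + 6*(dT/T)^2.  Geometry, sigma, and epsilon cancel in the ratio.

def power_ratio_exact(T, dT):
    """Exact P'/P for a half-hot (T+dT) / half-cold (T-dT) sphere."""
    return (0.5 * (T + dT)**4 + 0.5 * (T - dT)**4) / T**4

def power_ratio_approx(T, dT):
    """Second-order binomial approximation from the text."""
    return 1.0 + 6.0 * (dT / T)**2

T, dT = 300.0, 10.0               # rough Earth-like numbers
print(power_ratio_exact(T, dT))   # ~1.00667, strictly greater than 1
print(power_ratio_approx(T, dT))  # ~1.00667, matches to second order
```

The only dropped term is (dT/T)^4, which for these numbers is around a part per million.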
This lets us make one observation and perform an estimate. The observation is that P’ is strictly larger than P — a non-uniform distribution of temperature on the sphere radiates energy away strictly faster than it is radiated away by a uniform sphere of the same radius with the same mean temperature. This is perfectly understandable — the fourth power of the hot side goes up much faster than the fourth power of the cold side goes down, never even mind that the cold side temperature is bounded from below at T_c = 0.
The estimate: dT/T \approx 0.03 for the Earth. This isn’t too important — it is an order of magnitude estimate, with T \approx 300K and dT \approx 10K (0.03^2 = 0.0009 \approx 0.001, so that 6(0.03)^2 \approx 0.006). Of course, if you use latitude instead of day/night side stratification for dT, it is much larger. Really, one should use both and integrate the real temperature distribution (snapshot) — or work even harder — but we’re just trying to get a feel for how things vary here, not produce a credible quantitative computation.
For the Earth to be in equilibrium, S/4 must equal P’ — as much heat as is incident must be radiated away. I’m not concerned with the model, only with the magnitude of the scaling ratio — 1375 * 0.006 = 8.25 W/m^2, divided by four suggests that the fact that the temperature of the earth is not uniform increases the rate at which heat is lost (overall) by roughly 2 W/m^2. This is not a negligible amount in this game. It is even less negligible when one considers the difference not between mean daytime and mean nighttime temperatures but between equatorial and polar latitudes! There dT is more like 0.2, and the effect is far more pronounced!
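The arithmetic in that estimate, spelled out (values are those used in the paragraph above; the division by four converts the flux intercepted by the Earth's disc to a sphere-average flux):

```python
S = 1375.0                       # solar constant used in the text, W/m^2
excess = 6 * (10.0 / 300.0)**2   # ~0.00667; the text rounds to 0.006
extra_flux = S * 0.006 / 4       # sphere intercepts S over a disc but
                                 # radiates over 4x that area, hence /4
print(extra_flux)                # ~2.06 W/m^2, the "roughly 2 W/m^2" above
```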
The point is that as temperatures increase, the rate at which the Earth loses heat goes strictly up, all things being equal. Hot bodies lose heat (to radiation) much faster than cold bodies due to Stefan-Boltzmann’s T^4 straight up; then anything that increases the inhomogeneity of the temperature distribution around the (increased) mean tends to increase it further still. Note well that the former scales like:
P’/P = 1 + 4 dT/T + …
straight up! (This assumes T’ = T + dT, with dT << T the warming.) At the high end of the IPCC doom scale, a temperature increase of 5.6C gives 5.6/280 = 0.02. That increases the rate of Stefan-Boltzmann radiative power loss by roughly 8%, call it nearly 10%. I would argue that this is absurd — there is basically no way in hell doubling CO_2 (to a concentration that is still < 0.1%) is going to alter the radiative energy balance of the Earth by 10%.
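The first-order figure can be checked against the exact fourth-power ratio; a sketch using the numbers above (function name is mine):

```python
def warming_power_increase(T, dT):
    """Fractional increase in radiated power for a uniform warming dT."""
    exact = ((T + dT) / T)**4 - 1.0   # straight Stefan-Boltzmann T^4
    first_order = 4.0 * dT / T        # the 4*dT/T term in the scaling
    return exact, first_order

exact, approx = warming_power_increase(280.0, 5.6)
print(exact, approx)   # ~0.0824 and 0.08: an ~8% increase in power loss
```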
The beauty of considering P’/P in all of these discussions is that it drops all of the annoying (and often unknown!) factors such as \epsilon. All that these ratios require is that \epsilon itself not vary in first order, faster than the relevant term in the scaling relation. They also give one a number of “sanity checks”. The sanity checks suggest that one simply cannot assume that the Earth is a ball at some uniform temperature without making important errors. They also suggest that changes of more than 1-2C around some geological-time mean temperature are nearly absurdly unlikely, given the fundamental T^4 in the Stefan-Boltzmann equation. Basically, given T = 288, every 1K increase in T corresponds to a 1.4% increase in total radiated power. If one wants a “smoking gun” to explain global temperature variation, it needs to be smoking at a level where net power is modulated at the same scale as the temperature change in kelvin.
Are there candidates for this sort of a gun? Sure. Albedo, for one. 1% changes in (absolute) albedo can modulate temperature by roughly 1K. An even better one is modulation of temperature distribution. If we learn anything from the decadal oscillations, it is that altering the way temperature is distributed on the surface of the planet has a profound and sometimes immediate effect on the net heating or cooling. This is especially true at the top of the troposphere. Alteration of greenhouse gas concentrations — especially water — has the right order of magnitude. Oceanic trapping, release, and redistribution of heat is important — Europe isn’t cold, not because of CO_2 but because the Gulf Stream transports equatorial heat to warm it up! Interrupt the “global conveyor belt” and watch Europe freeze (and then North Asia freeze, and then North America freeze, and then…).
But best of all is a complex, nonlinear mix of all of the above! Albedo, global circulation (convection), Oceanic transport of heat, atmospheric water content, all change the way temperature is distributed (and hence lost to radiation) and all contribute, I’m quite certain, in nontrivial ways to the average global temperature. When heat is concentrated in the tropics, T_h is higher (and T_c is lower) compared to T and the world cools faster. When heat is distributed (convected) to the poles, T_h is closer to T_c and the world cools overall more slowly, closer to a baseline blackbody. When daytime temperatures are much higher than nighttime temperatures, the world cools relatively quickly; when they are more the same it is closer to baseline black/grey body. When dayside albedo is high less power is absorbed in the first place, and net cooling occurs; when nightside albedo is high there is less night cooling, less temperature differential, and so on.
The point is that this is a complex problem, not a simple one. When anyone claims that it is simple, they are probably trying to sell you something. It isn’t a simple physics problem, and it is nearly certain that we don’t yet know how all of the physics is laid out. The really annoying thing about the entire climate debate is the presumption by everyone that the science is settled. It is not. It is not even close to being settled. We will still be learning important things about the climate a decade from now. Until all of the physics is known, and there are no more watt/m^2 scale surprises, we won’t be able to build an accurate model, and until we can build an accurate model on a geological time scale, we won’t be able to answer the one simple question that must be answered before we can even estimate AGW:
What is the temperature that it would be outside right now, if CO_2 were still at its pre-industrial level?
I don’t think we can begin to answer this question based on what we know right now. We can’t explain why the MWP happened (without CO_2 modulation). We can’t explain why the LIA happened (without CO_2 modulation). We can’t explain all of the other significant climate changes all the way back to the Holocene Optimum (much warmer than today) or the Younger Dryas (much colder than today) even in just the Holocene. We can’t explain why there are ice ages 90,000 years out of every 100,000, why it was much warmer 15 million years ago, why geological time hot and cold periods come along and last for millions to hundreds of millions of years. We don’t know when the Holocene will end, or why it will end when it ends, or how long it will take to go from warm to cold conditions. We are pretty sure the Sun has a lot to do with all of this but we don’t know how, or whether or not it involves more than just the Sun. We cannot predict solar state decades in advance, let alone centuries, and don’t do that well predicting it on a timescale of merely years in advance. We cannot predict when or how strong the decadal oscillations will occur. We don’t know when continental drift will alter e.g. oceanic or atmospheric circulation patterns “enough” for new modes to emerge (modes which could lead to abrupt and violent changes in climate all over the world).
Finally, we don’t know how to build a faithful global climate model, in part because we need answers to many of these questions before we can do so! Until we can, we’re just building nonlinear function fitters that do OK at interpolation, and are lousy at extrapolation.
rgb
So, R Gates, which model do you work on at the NCAR? How well has it predicted global and regional surface air and sea temperatures over the last 15 years? How about tropical upper tropospheric temps? Rainfall?
Robert of Ottawa says:
January 6, 2012 at 2:42 pm (Edit)
For the Earth to be in equilibrium, S/4 must equal P’ — as much heat as is
For the Earth to be in equilibrium, S/4 must equal P’ — as much ENERGY as is
Yep, I spotted that one as well.
Septic Matthew says:
January 6, 2012 at 2:26 pm
[Steven Mosher: When you add more GHGs you increase the effective radiating height of the earth system. Raising that height results in a system that radiates from a colder place.]
The second sentence does not follow; if the effective radiating height of the atmosphere is increased as a result of more CO2 absorbing and radiating energy, it isn’t necessarily the case that the increased height is cooler. If you add GHGs at a particular high altitude, and the GHGs at that altitude absorb radiation, then the mean temp at that height should increase, not decrease. Isn’t that so?
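One way to picture the disagreement, using round numbers (the lapse rate and temperatures below are standard textbook values, not taken from either comment). This sketch assumes the radiating height stays within the troposphere, where temperature falls with altitude; above the tropopause, where it does not, Matthew's objection would have more force:

```python
# Within the troposphere, temperature falls with altitude at roughly the
# environmental lapse rate, so a higher effective radiating height is a
# colder radiating layer.

SURFACE_T = 288.0   # K, rough global-mean surface temperature
LAPSE = 6.5         # K per km, standard tropospheric lapse rate

def temp_at_height(h_km):
    """Temperature of the atmospheric layer at height h_km (troposphere only)."""
    return SURFACE_T - LAPSE * h_km

print(temp_at_height(5.0))  # ~255.5 K, near Earth's effective radiating T
print(temp_at_height(6.0))  # ~249 K: raise the height, radiate from colder
```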
Very interesting. You just may have solved the problem of mechanism for my maximum GHE conjecture on Ira’s UTC thread. This would be dependent on the density of the atmosphere and its mass. This is exactly what the UTC found to be the case for many planets.
If there exists a maximum height then the GHE has a limit. Further emissions of GHGs would have no effect on temperatures.
R. Gates says:
January 6, 2012 at 1:59 pm
blah blah blah……….
====================================================
Gates, it’s a simple question……
You said that “Climate models can tell me however, that there will be natural variability and even how long periods of natural variability might mask underlying forcing from greenhouse gases”
All I asked was when do the climate models tell you this period of not-warming will end?
=======================================================
R. Gates says:
January 6, 2012 at 1:27 pm
Climate models can tell me however, that there will be natural variability and even how long periods of natural variability might mask underlying forcing from greenhouse gases.
============================
LazyTeenager said, January 6, 2012 at 3:26 pm
“. Remember, there is no real temperature in such of an example for there is no mass. It takes mass to even define temperature. (but most climate scientist have no problem with it and therefore they are all wrong, sorry)
———–
This claim is utterly wrong.
The definition of temperature has no dependency on mass whatsoever. At the microscopic level to define temperature simply requires a collection of particles whose energy distribution obeys Boltzmann statistics.
Counter examples to this claim are obvious. The temperature of the microwave background radiation for example. In this case the particles are photons and have no mass.”
Sorry LT, you are wrong. A photon has a rest mass of zero. But photons aren’t at rest; they fly hither and thither at the speed of light. If photons were massless, they would be unable to exert pressure.
How does radiation transfer change between a thick, high-pressure atmosphere and a thin, low-pressure atmosphere?
Photons generated in the centre of the Sun take an average 200,000 years to make it out to the surface. My calculations say the comparable number on Earth is something like 44 hours. The strict speed of light calculation says it should only take 0.0007 seconds in and out.
It seems to me there needs to some marriage between the two concepts of pure radiation transfer and pure atmospheric pressure.
N2 and O2 reflect the average temperature of whatever level of the atmosphere they are in. Yet they do not participate in the pure radiation transfer explanation.
Let’s take another tack on the question and ask what would happen if we pulled all the non-GHG gases out of the atmosphere (the majority, 99%: N2, O2 and Argon). Then what is the surface temperature?
Assume the 14 to 16 um IR photons are intercepted by 100,000 CO2 molecules, it still only takes 0.5 seconds for these photons to make it back out into space assuming one-quarter downradiation. The remaining blackbody radiation spectrum is in and out in 0.0007 seconds.
The surface temperature would be -18C in the daytime, approaching -110C within a few seconds of the Sun setting. In a matter of seconds, the IR energy at 14 to 16 um migrates its way through the CO2 molecules and is gone off to space. They only hang onto the energy represented by these photons for 0.000005 seconds on each interception.
So, it would be similar to the moon’s temperatures.
Fred H. Haynie wrote;
“The processes of evaporation/condensation and freezing/thawing are the factors that are controlling the rate of energy loss to space. Radiative energy transfer is essentially “fast as light and line of sight”. The rates of these controlling processes are much slower.”
EXACTLY…
Here is my alternative hypothesis of the effects of adding “GHG’s” to the atmosphere;
1) Additions of GHGs are displaced by reductions in non-GHGs. After all there are only 1 million ppmv of gases in the atmosphere (by definition).
2) Heat flows through non-GHGs at the speed of heat (aka thermal diffusivity).
3) Heat flows through GHGs at close to the speed of light. A slight delay is added as some portion (less than 50%) makes a short side trip back towards the surface.
4) The speed of light is SIGNIFICANTLY faster than the speed of heat.
5) THUS; additions of GHGs to the atmosphere cause the gases in the atmosphere to warm up more quickly after an increase in energy arriving at a location in the system (i.e. sunrise or the dissipation of clouds). Alternatively, the gases in the atmosphere cool down more quickly after a decrease in energy arriving at a location in the system (i.e. sunset or the accumulation of clouds).
6) This effect is so small that we probably cannot afford to measure it.
7) The historical temperature databases (even after being water boarded into confessing to AGW) do not contain the necessary data (i.e. dT/dt) to confirm/refute this hypothesis.
The “missing” heat is currently travelling through Space as a spherical IR wavefront that is “X + d” light years away from the surface of the Earth. “X” represents the elapsed time since the energy arrived (i.e. 100 years for sunlight from 1911) and “d” represents the slight delay from a few (maybe 10-20 at most) side trips back towards the surface of the Earth. “d” is measured in light milliseconds (1 light millisecond =~ 983,000 feet).
In summary the “climate sensitivity” to GHG’s is EXACTLY 0.0000000 (not a small number approaching zero but exactly equal to ZERO).
So we can proceed as we have and once some forms of energy become more costly than newer forms of energy the newer forms will replace the older forms. Just like coal did back in England when they had a “PEAK WOOD” crisis, lookup the meaning of the term “windfall” to learn more about this historical fact.
In the Electrical Engineering field (one of my professional pursuits) this is the difference between a “DC” (direct current) system and an “AC” (alternating current) system. Clearly, the fact that the Sun still rises and sets indicates we have an AC system here. Therefore we need to analyze it as such. The climate “models” (which assume such silliness as “equilibrium temperatures”) need to be scrapped.
Cheers, Kevin.
R. Gates says:
January 6, 2012 at 12:58 pm
Smokey said:
“Climate models still cannot make useful predictions.”
____
Would it be useful for a shipping company to know that the arctic might be ice free in the summer months sometime in the relatively near future? Since the first trans-arctic shipments have already been made, and saved the shipping companies no small amount of money in doing so, it seems climate model “predictions” can be useful…and potentially profitable.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
And how did that work out for the Vikings?
LazyTeenager says:
January 6, 2012 at 3:26 pm
…….”In this case the particles are photons and have no mass.”
================
“Photon mass is expected to be zero by most physicists, but this is an assumption which must be checked experimentally. A nonzero mass would make trouble for special relativity, Maxwell’s equations, and for Coulomb’s inverse-square law for electrical attraction.”
Per:
http://www.aip.org/pnu/2003/split/625-2.html
This is some good article here. Thanks TB and Anthony.
I damn near understood it.
{Snarky comments about lemming-like believers pre-snipped in the interest of keeping it classy}
Not a chance. Begs the question. Assumes GHG effectiveness, and distribution, and relevance. Which is actually what the models are supposed to establish.
Or, are they? It seems much more that they are “projections” of what could happen in various sim-worlds if they were assumed to be efficacious in the first place.
From Cao Jinan’s analysis, to the observation above that a sphere’s (radiative) surface increases as the square of the radius, the assumption of CO2-“forced” warming is challengeable on many significant bases, far removed from Arrhenius’ primitive “basic physics”.
As for the actual use of climate models to inform policy, putting BS through a blender does nothing to improve its odor. Quite the contrary, in fact.
Just thinking about Dr. Brown’s comment, trying to visualize and digest the ideas he has put forward: the fact that pre-industrial CO2 has doubled in quantity does not mean that its thermal effect has somehow magically doubled as well.
Instead of everyone trying to teach R. Gates anything, which has been proven beyond a shadow of a doubt impossible (many times over), could someone help me, and probably more than one other reader here, with the binomial expansion?
I brought the ½ out to give (2πR^2)εσ as a dropped constant in the derivatives, and I see that (4·3)/2! gives the 6 multiplier in the second derivative, but how do the two internal terms get from ((T+dT)^4 + (T−dT)^4) to (dT/T)^2? Even the squared part is understood. I am finding my remembrance of binomial expansions a bit rusty, and the mere 2 pages out of some 900 pages in both of my calculus books is not helping much at all ;-). Can anyone expand that quickly for us? I might be the only one, but I would like to know how. I can write a program and differentiate it numerically, but an explicit path would save me hours in other similar cases; that’s a neat method.
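For what it's worth, the cancellation wayne asks about needs no derivatives at all: expand each binomial as (T ± dT)^4 = Σ C(4,k) T^(4−k) (±dT)^k and average the two. Every odd power of dT flips sign between the two expansions and cancels, leaving T^4 + 6 T^2 dT^2 + dT^4; dividing by T^4 and dropping the quartic term gives 1 + 6(dT/T)^2. A sketch verifying the surviving coefficients (function name is mine):

```python
from math import comb

def half_sum_coeffs(n=4):
    """Coefficient of dT^k in 0.5*((T+dT)^n + (T-dT)^n), for surviving k."""
    # Odd k flips sign between the two binomials and cancels on averaging.
    return {k: comb(n, k) for k in range(n + 1) if k % 2 == 0}

print(half_sum_coeffs())   # {0: 1, 2: 6, 4: 1} -> T^4 + 6*T^2*dT^2 + dT^4
```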
Bill Illis said:
Regarding the temperature of the Earth without an atmosphere:
“The surface temperature would be -18C in the daytime, approaching -110C within a few seconds of the Sun setting.
—–
Not quite. The daytime temperature near the equator would be more like 130C and the nighttime temperature would fall to -110C within several hours after sunset (assuming no ocean). Cooling of the moon’s surface during lunar eclipses confirms this general rate. The rocks and soil would continue to release stored LW for several hours after the sun went down.
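The two numbers being argued over come from two different Stefan-Boltzmann calculations; here is a sketch with an Earth-like albedo (the constants below are standard values chosen for illustration, not from either comment):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361.0        # solar constant, W/m^2
ALBEDO = 0.30     # Earth-like Bond albedo, for illustration

def effective_temp():
    """Global-average radiative temperature: sunlight spread over the sphere."""
    return (S * (1 - ALBEDO) / (4 * SIGMA))**0.25

def subsolar_temp():
    """Local equilibrium temperature at the subsolar point: no factor of 4."""
    return (S * (1 - ALBEDO) / SIGMA)**0.25

print(effective_temp() - 273.15)  # ~ -18 C: the figure in Bill's comment
print(subsolar_temp() - 273.15)   # ~ +87 C: local noon runs far hotter
```

With the Moon's much lower albedo (roughly 0.12) the subsolar figure climbs past 100C, in line with the ~130C daytime estimate above; -18C is a whole-sphere average, not a daytime temperature.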
Evidently the ‘sup’ tag does not even work in Firefox. Bummer! Please mentally insert ‘^’ where appropriate in my comment above.
Well I’m not quite sure what magic Dr Brown is revealing here. I don’t know how many times over the last ten years or so I have pointed out, both here at WUWT and at other climate discussion sites such as Tech Central Station (years ago), the importance of cyclic Temperatures: a cyclic Temperature always raises the radiated power relative to a steady Temperature at the same mean.
If we write T as (T0 + t) or as T = T0 (1 +t/T0), then T^4 becomes:
T^4 =T0^4 [1 + 4t/T0 + 6(t/T0)^2 + 4(t/T0)^3 + (t/T0)^4 ]
If we now allow (t) to undergo any cyclic variation, such as a diurnal cycle, and we integrate T^4 over the full period of the cycle, then the integral over the cycle of the term 4t/T0 is zero, since T0 is the average Temperature, leaving us with T0^4 [1 + 6(t/T0)^2].
Actually, if the Temperature cycle is repetitive, so we can represent the cycle as a Fourier series of a fundamental and harmonic components, then it will be found that the integral of the third power term is also zero, and in practical cases (t/T0)^4 is negligible compared to the second order term.
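The claim that the odd terms integrate to zero is easy to check numerically; for a pure sinusoid t = a·cos(θ), the time-average of the 6(t/T0)^2 term works out to 3(a/T0)^2, since the mean of cos² is one half. A sketch with illustrative numbers:

```python
from math import cos, pi

def cycle_mean_fourth_power(T0, amp, steps=100000):
    """Time-average of T^4 over one cycle of T = T0 + amp*cos(theta)."""
    total = 0.0
    for i in range(steps):
        theta = 2 * pi * i / steps
        total += (T0 + amp * cos(theta))**4
    return total / steps

T0, amp = 288.0, 15.0                   # illustrative mean and amplitude
ratio = cycle_mean_fourth_power(T0, amp) / T0**4
print(ratio)                  # ~1.0081: always above 1 for any cycle
print(1 + 3 * (amp / T0)**2)  # ~1.0081: the surviving second-order term
```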
I actually presented a full analysis of this effect to a group of people who were on Marc Morano’s short list he e-mailed to, when he was on Senator Inhofe’s staff. Roy Spencer, it turns out, was also on that list and probably received my short essay, “The Importance of Cycles.” I also sent another essay, “Cocktail Party Physics”, that Roy actually commented on. It made the point that CO2 molecules in the atmosphere are orphans, and are quite unaware that another like them even exists, since on average they have about 13 shell layers of surrounding molecules about them before the next CO2 molecule shows up. Consequently they operate in the atmosphere as individuals, and not in ANY co-operative way at all.
How many times at WUWT have I pointed out that planet earth is cooling fastest in the hottest, driest tropical deserts, in the blaze of the midday sun, while the cold polar regions are quite ineffective in cooling planet earth?
Now I have observed, just watching the evening news weather reports, that a daily Temperature cycle amplitude of 15 to 20 deg C is very common, so that t/T0 routinely is 5 to 7%. When one looks at the range of the annual Temperature cycle, the value is much greater; and taking the full range of the Temperature extremes, from -90 deg C to around +60 deg C, the always-upward offset of the earth’s radiative cooling rate becomes a significant error in energy balance calculations.
As to Brown’s assertion that there isn’t Temperature without mass, that is quite misleading. There certainly isn’t Temperature without material, but the actual mass of the material shows up nowhere in the calculation of Temperature.
The peak desert gray body emission rate of hot dry desert surfaces, is over twice the 390 W/m^2 that Trenberth et al give in their earth energy budget cartoon, in fact it is over 800 W/m^2, as I posted somewhere here at WUWT within the last week.
If you read ALL of the posts in almost any thread at WUWT, you find it looks like people shopping at the mall. Everyone is going every which way, and it is clear that many posters, simply NEVER read what is already there before they post, and many issues are already answered and ignored, as the crowd simply mills around, oblivious of what others are doing.
Brian H says:
January 6, 2012 at 5:31 pm
R. Gates says:
January 6, 2012 at 9:28 am
…
Could a model tell you the probability that it would be colder (or warmer) outside on a given day when comparing two different sets of greenhouse gas concentrations and holding all other variables constant? Absolutely…and that is precisely what they are meant to do.
Not a chance. Begs the question. Assumes GHG effectiveness, and distribution, and relevance. Which is actually what the models are supposed to establish.
______
Models don’t “assume” greenhouse gas effectiveness or distribution, nor does this “beg the question”. Perhaps 100 years or more ago this statement might have been at least partially true, at least for those working out the theory, but we had no real global climate models back then, and so any assumptions back then were never put into models.
RGates, (le trollus)
“Would it be useful for a shipping company to know that the arctic might be ice free in the summer months sometime in the relatively near future?”
Not if the crystal ball prediction is totally incorrect and they had spent MILLIONS preparing for a false prediction!! (oh look , that’s just what is happening.. and IT ISN’T USEFUL !!!!! )
Tarot cards seem to have about the same predictive value as so-called climate models. !!
(feeds the trolls and hopes they are still awake at sunrise 😉)
Keith says:
January 6, 2012 at 4:07 pm
So, R Gates, which model do you work on at the NCAR? How well has it predicted global and regional surface air and sea temperatures over the last 15 years? How about tropical upper tropospheric temps? Rainfall?
_____
Climate models are not meant to predict natural variability, but they can predict what the underlying warming rates will be when natural variability forcing is removed, so to answer your question — none have “predicted” these specifics, as none are meant to. Your tourist map of NY City will not tell you about the crack in the sidewalk near Times Square (that you might very well trip on), as that is not its purpose, but it will tell you how to get from Central Park to the Empire State Building. If the weather forecast calls for a 60% chance of rain tomorrow, it might very well rain, but it won’t tell you exactly when and where the first raindrop will strike your front window, as that is not the purpose of a general forecast.
The thrust of this post appears to be against the concept of “lumping”, where a diverse system is reduced to averages, losing accuracy but gaining analyzability. In fact the climate system is so hopelessly complicated that this is the only method that has any possibility of providing insight.
Also to be considered is the troposphere, which has the wonderful property of mixing up the major part of the atmosphere so that the black-body equilibrium temperature exists somewhere round its upper third, allowing gross generalizations to be not too inaccurate.
In fact an argument against such approximations is really an argument for massive computerised models, which have so far not been shown to indicate anything other than that the approach does not work.
So I would urge both sides of the debate to continue presenting their arguments in a lumped approximate averaged fashion.
Septic Matthew:
“Substantial energy amounts are transferred from the surface and lower troposphere to the upper troposphere by advection and convection. It is possible for this transfer to be speeded up, and hence for the surface to cool, even as energy is accumulating in the upper layers of the atmosphere.”
I believe this will prove to be a key issue in the calculation of current “sensitivity”. In particular, the advection of energy from surface to higher altitudes……primarily as latent heat I suspect. I believe this explains why all interglacials are bounded within a few degrees K of our current temperatures. You can move a LOT of energy as latent heat from low to high altitudes with a small change in temperature in this way.
AndyG5 says
Not if the crystal ball prediction is totally incorrect and they had spent MILLIONS preparing for a false prediction!! (oh look , that’s just what is happening.. and IT ISN’T USEFUL !!!!! )
———
Exxon just paid 15% of the company based on less ice in the Arctic. That would amount to billions. BP wanted to do the same but lost out in the negotiations. Maybe they know something you guys don’t.
FYI only: Lazy T is correct here. Temperature is a parameter that describes the distribution of speed in the Maxwell distribution, or energy in the Boltzmann distribution. This is how one can get negative absolute temperature in the inverted energy distribution of a laser.
Way back in the thread about the Unified Theory of Climate I got whacked around for saying that the actual distribution of temperature matters, and that the “mean” value ought to be the nth root of integrated temperature to the nth power with 4<n<5. Here I agreed with N&Z.
This exercise of calculating a mean "radiation" temperature, using the Stefan law on an airless planet that transfers heat perfectly and radiates uniformly, is fun, but doesn't show anything useful as nearly as I can see. The actual temperature distribution does matter. Even more, the places on Earth where the LWIR radiation heads back out to space, be these high deserts, equatorial oceans, polar regions, or cloud tops, or a varying combination of these, matter too. There is a range of mean temperature that can obtain at equilibrium even with constant climatic driving, and this makes me pretty unconcerned about small, "secular" changes in mean temperature.
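The "nth root of integrated temperature to the nth power" idea can be made concrete with n = 4; a sketch reusing Dr. Brown's two-hemisphere example (function name and the 310/290 values are illustrative):

```python
def radiative_mean(temps, n=4):
    """n-th root of the mean of T^n: the 'mean radiating temperature'."""
    return (sum(t**n for t in temps) / len(temps))**(1.0 / n)

# Two-hemisphere toy planet: hot side 310 K, cold side 290 K.
temps = [310.0, 290.0]
print(sum(temps) / len(temps))  # 300.0, the plain arithmetic mean
print(radiative_mean(temps))    # ~300.5, always >= the arithmetic mean
```

The gap between the two means grows with the spread of the distribution, which is exactly why the uniform-ball assumption under-counts radiated power.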
thepompousgit says
Sorry LT, you are wrong. A photon has a rest mass of zero. But photons aren’t at rest; they fly hither and thither at the speed of light. If photons were massless, they would be unable to exert pressure.
———
You’re right, of course, though we typically speak of photons as having momentum and energy rather than mass.
R. Gates says:
January 6, 2012 at 6:15 pm
Keith says:
January 6, 2012 at 4:07 pm
So, R Gates, which model do you work on at the NCAR? How well has it predicted global and regional surface air and sea temperatures over the last 15 years? How about tropical upper tropospheric temps? Rainfall?
_____
Climate models are not meant to predict natural variability, but they can predict what the underlying warming rates will be when natural variability forcing is removed, so to answer your question — none have “predicted” these specifics, as none are meant to.
—
R Gates. – You haven’t answered Keith’s question. Could you please tell us which modeling group you work for at NCAR? Thanks. If you don’t answer, then I will assume that in fact you have no experience with modeling at all.
In the case that you in fact do have some modeling experience, please tell me which differential equations you are solving, along with the initial and boundary conditions, time step size, spatial resolution, etc. Presumably it is the belief in accuracy of these numerical solutions that leads you to believe that climate models can predict “underlying warming rates” (whatever this really means… warming of what? 2 m above the earth’s surface? The entire atmosphere?). And please list all of the equations – including ocean, sea ice, vegetation models, aerosols – oh, and how they are coupled. And while you’re at it, please let us know where the model verification and validation studies are located so we can examine them as well. Thanks!