Spencer: Spurious warming demonstrated in CRU surface data

Spurious Warming in the Jones U.S. Temperatures Since 1973

by Roy W. Spencer, Ph.D.

INTRODUCTION

As I discussed in my last post, I’m exploring the International Surface Hourly (ISH) weather data archived by NOAA to see how a simple reanalysis of original weather station temperature data compares to the Jones CRUTem3 land-based temperature dataset.

While the Jones temperature analysis relies upon the GHCN network of ‘climate-approved’ stations, whose number has been rapidly dwindling in recent years, I’m using original data from stations whose number has actually been growing over time. I use only stations operating over the entire period of record, so there are no spurious temperature trends caused by stations coming and going over time. Also, while the Jones dataset is based upon daily maximum and minimum temperatures, I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.
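These two rules are simple enough to sketch in a few lines of Python (a minimal illustration only; the in-memory data layout and helper names are assumptions, not the actual processing code):

```python
# Sketch of the averaging rules above. Assumed layout: `obs` maps a
# station id to a dict of {(date, hour_utc): temp_c}.
from statistics import mean

SYNOPTIC_HOURS = (0, 6, 12, 18)  # the four standard reporting times (UTC)

def daily_mean(station_obs, date):
    """Average of the four synoptic reports for one day, or None if any is missing."""
    temps = [station_obs.get((date, h)) for h in SYNOPTIC_HOURS]
    return None if any(t is None for t in temps) else mean(temps)

def complete_stations(obs, dates):
    """Keep only stations reporting on every day of the record, so stations
    coming and going over time cannot introduce spurious trends."""
    return {sid: so for sid, so in obs.items()
            if all(daily_mean(so, d) is not None for d in dates)}
```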

U.S. TEMPERATURE TRENDS, 1973-2009

I compute average monthly temperatures in 5 deg. lat/lon grid squares, as Jones does, and then compare the two different versions over a selected geographic area. Here I will show results for the 5 deg. grids covering the United States for the period 1973 through 2009.
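The gridding step can be sketched the same way (illustrative helpers; `stations` is assumed to be an iterable of (lat, lon, monthly mean) tuples for one month):

```python
# Sketch of the 5 deg lat/lon gridding: average all station monthly
# means that fall inside each grid box.
from collections import defaultdict
from statistics import mean

def grid_cell(lat, lon, size=5.0):
    """Index of the size-deg lat/lon box containing the station."""
    return (int(lat // size), int(lon // size))

def grid_monthly_means(stations, size=5.0):
    """stations: iterable of (lat, lon, monthly_mean_temp_c) tuples."""
    cells = defaultdict(list)
    for lat, lon, temp in stations:
        cells[grid_cell(lat, lon, size)].append(temp)
    return {cell: mean(vals) for cell, vals in cells.items()}
```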

The following plot shows that the monthly U.S. temperature anomalies from the two datasets are very similar (anomalies in both datasets are relative to the 30-year base period 1973 through 2002). But while the monthly variations track closely, the warming trend in the Jones dataset is about 20% greater than the warming trend in my ISH data analysis.

[Figure: CRUTem3 and ISH U.S. monthly temperature anomalies, 1973-2009]
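The anomaly and trend computations behind this comparison amount to the following (a sketch in NumPy; the years-by-months array layout is an illustrative assumption):

```python
# Sketch: anomalies relative to each calendar month's 1973-2002 mean,
# and an ordinary least-squares trend expressed per decade.
import numpy as np

def anomalies(monthly, years, base=(1973, 2002)):
    """monthly: shape (n_years, 12); years: shape (n_years,)."""
    in_base = (years >= base[0]) & (years <= base[1])
    return monthly - monthly[in_base].mean(axis=0)  # per-month climatology

def decadal_trend(series, t_years):
    """OLS slope of an anomaly series, in deg C per decade."""
    return 10.0 * np.polyfit(t_years, series, 1)[0]
```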

This is a little curious since I have made no adjustments for increasing urban heat island (UHI) effects over time, which likely are causing a spurious warming effect, and yet the Jones dataset which IS (I believe) adjusted for UHI effects actually has somewhat greater warming than the ISH data.

A plot of the difference between the two datasets, shown next, reveals some abrupt transitions. Most noteworthy is what appears to be a rather rapid spurious warming in the Jones dataset between 1988 and 1996, with an abrupt “reset” downward in 1997, followed by another spurious warming trend after that.

[Figure: CRUTem3 minus ISH difference, U.S., 1973-2009]

While it might be a little premature to blame these spurious transitions on the Jones dataset, I use only those stations operating over the entire period of record, which Jones does not do, so it is difficult to see how these effects could have arisen in my analysis. Also, the number of 5 deg grid squares used in this comparison remained the same throughout the 37-year period of record (23 grids).

The decadal temperature trends by calendar month are shown in the next plot. We see in the top panel that the greatest warming since 1973 has been in the months of January and February in both datasets. But the bottom panel suggests that the stronger warming in the Jones dataset is a warm-season, not winter, phenomenon.

[Figure: CRUTem3 vs. ISH U.S. decadal trends by calendar month, 1973-2009]
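In code terms, the by-month comparison simply fits a separate slope to each calendar month’s anomaly series (a sketch reusing decadal_trend from above; layout assumptions as before):

```python
# Per-calendar-month trends, as in the panels above.
# Assumes decadal_trend from the earlier sketch is in scope.
def trends_by_month(anom, years):
    """anom: (n_years, 12) anomaly array -> 12 decadal trends, Jan..Dec."""
    return [decadal_trend(anom[:, m], years) for m in range(12)]
```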

THE NEED FOR NEW TEMPERATURE REANALYSES

I suspect it would be difficult to track down the precise reasons why the differences in the above datasets exist. The data used in the Jones analysis has undergone many changes over time, and the more complex and subjective the analysis methodology, the more difficult it is to ferret out the reasons for specific behaviors.

I am increasingly convinced that a much simpler, objective analysis of original weather station temperature data is necessary to better understand how spurious influences might have impacted global temperature trends computed by groups such as CRU and NASA/GISS. It seems to me that a simple and easily repeatable methodology should be the starting point. Then, if one can demonstrate that the simple temperature analysis has spurious temperature trends, an objective and easily repeatable adjustment methodology should be the first choice for an improved version of the analysis.

In my opinion, simplicity, objectivity, and repeatability should be of paramount importance. Once one starts making subjective adjustments to individual stations’ data, replicating the work becomes almost impossible.

Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be other, independent researchers performing their own global temperature analyses. In my experience, better methods of data analysis come from the ideas of individuals, not from the majority rule of a committee.

Of particular interest to me at this point is a simple and objective method for quantifying and removing the spurious warming arising from the urban heat island (UHI) effect. The recent paper by McKitrick and Michaels suggests that a substantial UHI influence continues to infect the GISS and CRU temperature datasets.

In fact, the results for the U.S. I have presented above almost seem to suggest that the Jones CRUTem3 dataset has a UHI adjustment that is in the wrong direction. Coincidentally, this is also the conclusion of a recent post on Anthony Watts’ blog, discussing a new paper published by SPPI.

It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.

259 Comments
David W
February 28, 2010 11:48 pm

Finally, some research based on pure data without any adjustments, fixes, translations, one-off increases/decreases… And what does this study show? A conclusion at odds with the supposed basic analysis of “raw” temperature data from the AGW sect (which turns out not to be raw at all, but massaged several times). This sort of back-to-basics analysis is going to be critical in unraveling the eco-political mess that is global warming science. More power to Roy W. Spencer.

Manfred
February 28, 2010 11:51 pm

Ivan (13:34:07) :
“Manfred,
the picture you linked shows the pattern of warming in the case that GHG are the primary driver of warming. So, you assume that IPCC is basically right in attribution of warming to GHG.
Second, dr Spencer in his analysis in this comment asserts that correctly calculated surface trend for the USA should be roughly equal to the currently reported UAH satellite trend. If this is so, then, according to your hypothesis, predicted tropospheric warming should be about 0.37 or 0.38 degrees K per decade (1.7 x surface), not 0.22, as reported by Spencer and Christy. Are you suggesting that dr Spencer actually wanted to say by this analysis that his own satellite data set UNDERESTIMATED the real tropospheric trend by almost a half?”
1. RealClimate claims that the enhanced tropospheric warming should occur not only for greenhouse gases. They write:
“The basis of the issue is that models produce an enhanced warming in the tropical troposphere when there is warming at the surface. This is true enough. Whether the warming is from greenhouse gases, El Nino’s, or solar forcing, trends aloft are enhanced.”
http://www.realclimate.org/index.php/archives/2007/12/tropical-troposphere-trends/
here is their comparison for 2*CO2 and solar forcing:
http://www.realclimate.org/images/solar_tropical_enhance.gif
(Though the models are most likely wrong concerning feedback, they may be correct on this more fundamental point, which doesn’t require (garbage) assumptions.)
2. I can’t speak for Dr. Spencer, but in my opinion the above trend enhancement is not negligible. The warming bias of the ground-based measurements is then even worse.
3. Assuming that ground-based measurements are not reliable (due to criticism by Watts, McKitrick, Pielke, missing UHI adjustments, dozens of case studies…) and assuming the open-source-code, open-data satellite measurements are correct, land-based measurements then roughly overstate warming by a factor of 2. This is in very good accordance with other peer-reviewed literature such as McKitrick’s.

Manfred
March 1, 2010 12:08 am

Actually, CRU does NOT correct for UHI; they only increase the uncertainty a little.
http://climateaudit.org/2009/01/20/realclimate-and-disinformation-on-uhi/
GISS does some correction, but outside the US there are almost as many downward (!) as upward corrections for UHI, making their algorithm appear useless.
By far the worst of the pack is Tom Karl’s NOAA, which does no correction at all. This is particularly disturbing because NOAA controls the other most important data sets as well, and because Karl has been appointed chief of the new, influential government agency.

E.M.Smith
Editor
March 1, 2010 12:42 am

I would wager that the further back in time you extend your series, the greater the divergence will be… The most extreme data oddities lie further in the past…

son of mulder
March 1, 2010 1:43 am

” wayne (17:42:15) :
Now the rural, generated-heat-free sites, they must be randomly distributed or at least form an evenly covered grid as close as possible. Am I missing something?”
The best we’ll have is the global set of rural stations. So what is happening to their average raw temperature measurement over time?

son of mulder
March 1, 2010 1:46 am

” wayne (18:05:24) :
For per-station daily raw data, I have only found sites (NCDC/NOAA) wanting to purchase the data or order CDs or are only of recent years. If you can find an explicit link to a page or ftp directory, please let us know. Some now have .gov, .edu, .org domain limits for access but can’t recall exactly where.”
Do you believe we are not allowed to see the elephant in the room?

Espen
March 1, 2010 2:24 am

An interesting exercise is to use the GISS map tool to try to pinpoint where the extra winter warming is located. I compared the 1921-1950 period to the 1980-2009 period for Dec-Jan-Feb and for the summer months. The 1921-1950 period resembles the recent period in that the Arctic shows a similar positive anomaly. One of the biggest differences is that the 1980-2009 winters have a very warm interior of Siberia; in the 1921-1950 period only the Arctic sea coast of Siberia was warmer than normal.
I tried to look up individual station data from the warm interior of Siberia, and quickly ended up with Krasnoyarsk (Krasnojarsk in GISS), one of Russia’s largest cities. And (not very surprisingly…) the GISS adjustments ADD warming to the trend of this city instead of correcting for UHI:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=222295700006&data_set=1&num_neighbors=1
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=222295700006&data_set=2&num_neighbors=1
For an even larger city, Omsk, the homogeneity adjustment seems only to strip all pre-1930 data, but does not correct for the almost certainly huge UHI.

Patrick Davis
March 1, 2010 2:58 am

“Peter Miller (23:48:02) :
Response to Patrick Davis re Australia – Average February 2010 temperatures:
City Min Max
Alice Springs* +0.6 -0.8
Adelaide +1.9 +2.1
Canberra +1.9 +2.1
Darwin +1.1 +1.2
Melbourne +1.8 +2.2
Perth +0.6 +0.5
Sydney +2.4 +1.7
All figures in degrees C
* Almost 5 times average rainfall in February. Source: Weatherzone”
Not sure what your point is. To me this seems to be rather normal variability for Australian cities, certainly not proof that “global temperatures for January and February this year are higher than normal” (your words!), and certainly NOT the warmest ever recorded in Australia in modern history. But also consider that the NH is the other half of the globe too, and they’ve had record cold. Some parts of the SH have had record cold too; some parts of the globe had snow last year for the first time in living memory.
Consider also that data from 75% of the land-based thermometers have been removed from the official database, and many that still do get used are badly sited or even at airports.
As for the rainfall, so? How far do records go back: 50, 100, 150 years? 5 times what average? So, let me get this straight. Rainfall was different in Feb 2010 from what it was in Feb 2009 and from what it was in Feb 2008… uh huh, I hear ya!

Dinjo
March 1, 2010 3:57 am

mike roddy (09:14:21) :

Even according to Spencer, it’s still warming, so what’s the point? Glaciers are melting, antarctic ice is calving, and birds and plants are migrating north. Humans are the cause.
Deal with it, wattsupwiththat readers, or risk becoming increasingly ridiculous.

Love your sense of humour Mike, y’almost had me fooled there for a minute! (wink)
Hey up! Just seen a flock of dwarf bamboos flying in formation, heading north… wattsupwiththat!???!

Gareth
March 1, 2010 7:04 am

Peter Miller (13:09:46) :
What I don’t understand about the UAH figures is: Why are the high altitude temperatures decreasing, while the low altitude ones are increasing during the El Nino phenomenon?
Is El Nino a cause or a symptom? A slowdown in convection moving energy upwards could appear to us as a warming of the ocean due to a lack of cooling, a warming of the lower atmosphere (same as oceans) and a cooling of the upper atmosphere (a lack of warming). The timing of the changes would be key to working that one out.

Ivan
March 1, 2010 7:13 am

Manfred,
don’t you see what the problem is here? If you are right, it is quite puzzling why Dr. Spencer doesn’t accept the rural record over the USA as the best approximation of the real climatic trend, instead of trying to slightly correct Jones’s calculations based upon the urban and upward-adjusted rural network (he does exactly this in his article).
If the rural record is consistent with his satellite data (as you posit), and he still, as we clearly see, rejects that rural record and argues that the “real” surface trend is 2 or 2.5 times higher than that, that can only mean that he assumes his own satellite record to be a wild UNDERESTIMATE of the real tropospheric trend. Do you really believe that Spencer considers his own work to be so fatally flawed?
You cannot have it both ways. Either Spencer rejects your amplification theory, or he rejects his own satellite data and assumes that the real tropospheric warming in the USA is 0.37-0.38 deg per decade. Tertium non datur.

George E. Smith
March 1, 2010 9:59 am

“”” Carsten Arnholm, Norway (04:36:34) :
I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.
Can someone point to an internationally accepted standard procedure for calculating an average daily temperature at a defined location?
I have just set up my own weather station and record data every 10 minutes. I am guessing that recording only min/max per day or 4 times per day as above might produce different results than averaging 6*24=144 daily values. “””
Well Carsten, there’s no question that your recording every ten minutes will yield a better average than the min/max, or even the four-times-daily method.
My mathematics is showing a lot of rust; but if I am not mistaken, the average of min and max is the true mean of a continuous function if and only if the function is cyclical and time-symmetric.
Meaning that f(t) = f(T-t) where T is the period of the cyclic variation. Now since you mentioned for a “defined location”, one might argue that one can model that location as a fixed object having a certain spectral emissivity, and also absorptance, over the range of wavelengths encompassed by solar radiation, and surface thermal radiation. Well already I am ignoring other thermal processes like conduction and convection; not to mention evaporation.
But the radiation-only model is in principle solvable; and I think if you do that, just assuming black-body conditions to start with, you will find that the diurnal heating and cooling are not symmetrical. Cooling after sundown should be slower than heating after sunup; bearing in mind that the surface will be cooling fastest when it is at its highest temperature. In any case, I believe that your ten-minute data probably plot to show a faster warming than cooling. In that case, min/max must have an error in the average of those two numbers, compared to the true average of the continuous function.
If the function is not a simple sinusoid, then the min/max strategy is already in violation of the Nyquist sampling theorem, even for recovery of the average, since the function must contain at least a second harmonic component (assuming it is a repetitive cyclic function); in which case four times a day is the minimum required sampling to recover the average.
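Here is a quick numerical illustration of that (a Python sketch; the second-harmonic diurnal cycle is an assumed example, not real station data):

```python
# Compare three estimates of the daily mean on an assumed diurnal cycle
# with a second-harmonic component, so it is cyclic but NOT time-symmetric.
import numpy as np

t = np.linspace(0.0, 24.0, 144, endpoint=False)        # every 10 minutes
u = 2.0 * np.pi * t / 24.0
temp = 15.0 + 5.0 * np.sin(u) + 2.0 * np.cos(2.0 * u)  # illustrative cycle

dense_mean = temp.mean()                       # "true" mean, 10-min data
minmax_mean = 0.5 * (temp.min() + temp.max())  # min/max method
synoptic_mean = temp[::36].mean()              # samples at 00, 06, 12, 18

print(dense_mean, minmax_mean, synoptic_mean)
# ~15.0  ~13.3  15.0 -- min/max is biased low here, while four equispaced
# samples recover the mean of any cycle whose harmonics stay below the
# fourth, consistent with the Nyquist argument above.
```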
The real rub comes in when you ask what is the total thermal effect of the diurnal temperature cycling (still restricting ourselves to the radiative component.)
The rate of energy loss is not linear with temperature, but varies about as the 4th power of the temperature; so during the higher temperatures the loss rate is higher than at the lower temperatures, and to get the average loss rate, you really need to average the 4th power of the temperature, rather than the temperature itself. Some very simple math, assuming a repetitive cyclic daily function, will demonstrate that the average of the 4th power always has a positive offset compared to the 4th power of the average temperature; so the daily average temperature does not yield the correct daily average (radiative) energy loss.
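And a numerical sketch of that 4th-power offset (again illustrative; the 10 K swing about 288 K is an assumption):

```python
# The time-average of T^4 always exceeds the 4th power of the
# time-averaged T for a varying temperature (Jensen's inequality).
import numpy as np

u = np.linspace(0.0, 2.0 * np.pi, 1440, endpoint=False)
T = 288.0 + 10.0 * np.sin(u)  # kelvin; assumed diurnal swing

print(np.mean(T ** 4))  # ~6.905e9 K^4  (true mean radiative weighting)
print(np.mean(T) ** 4)  # ~6.880e9 K^4  (always the smaller of the two)
```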
It gets worse than that if one considers the effect of a GHG like CO2, which absorbs at about 15 microns (13.5 to 16.5).
At the global mean of about 288K, the peak of the surface thermal spectrum is about 10.1 microns, where we have an atmospheric window interrupted only by the Ozone 9-10 micron hole.
The surface emittance (assumed BB) at the peak of the spectrum varies as the 5th power of the temperature, not the 4th; so at higher surface temperatures the peak radiant emission grows even faster than the total, and the wavelength moves further away from the 15 micron CO2 band, so the influence of CO2 is diminished over the hottest deserts during the day. Now, to be fair to the CO2, we need to note that the mean particle velocity (atmospheric) varies as the square root of the temperature (K), so the Doppler broadening of the CO2 line will be greater at higher temperatures; but if you do all the math, you will see that the CO2 still loses out at higher temperatures.
You could do us all a favor, Carsten, by plotting some of your 10-minute daily data, so we can all see just what it really looks like. It would be nice if it could be done on a cloudless day (or days), so as not to run into the additional complication of cloud variation; which of course really screws up the daily average temperature obtained from min/max.
I’m happy to have the four-times-daily method that Dr Spencer has used for this study, as an advance over the min/max, which clearly is quite wrong.
I’d love to see real cloud coverage properly included; but any step forward is better than the status quo.

George E. Smith
March 1, 2010 10:29 am

Well, all this beautiful statistrickery is well and good; and there seems to be a fascinating fixation on the condimental details of that methodology.
Does anybody really believe that suddenly somebody is going to stumble on the CORRECT long-term trend, and the CORRECT standard deviation, etc. etc., and suddenly we will have the final answer to whether there is MMGWAGWCC or not?
I commend to your study the 600-million-year history contained here:
http://www.geocraft.com/WVFossils/PageMill_Images/image277.gif
Now of course it is a proxy study, since Hansen and Mann weren’t around 600 million years ago.
The first thing I would like you to note about this global temperature and atmospheric CO2 abundance data is how beautifully logarithmic the relationship between the temperature and the CO2 is; thereby confirming the wisdom of Dr. Steven Schneider’s concept of “Climate Sensitivity”, which is to climate “scientists” what the velocity of light (c) is to physicists.
The second thing about this 600 million years of data I would like you to note is that impenetrable temperature ceiling of 22 deg C. If anybody has an explanation for those clearly fraudulent anomalies at -248 million years at the Permian/Triassic boundary, and the other one at about -50 million years in the early Tertiary, I would love to hear it.
Now I am sure that all the electronic circuit engineers here know exactly how voltage regulators work. You need a fixed and known reference standard voltage, such as a semiconductor band-gap reference, or even a superconducting quantum reference. Then you compare your system output voltage to that reference, and you apply a (negative) feedback loop to force the output voltage’s error from the reference to zero.
It is quite clear that an exactly analogous process has been operating for the last 600 million years, to prohibit the earth’s mean temperature from ever going above 22 deg C.
Through all the geologic changes, meteorite collisions, and volcanic anomalies that have happened, along with orbital shifts, plate tectonics, and continental drift, SOMETHING has been acting as an absolute temperature reference, and powerful feedback processes have acted to keep the earth’s mean temperature at 22 deg C for most of that 600 million years. Now it would appear that during the Carboniferous period, and at the boundaries of that era, something really powerful was stopping the earth from warming; yet it didn’t have the same effect during the Mesozoic.
So what is the absolute temperature reference that has acted to maintain 22 deg C for most of this history? The one thing we know has been there all that time is the earth’s oceans, aka WATER, H2O; which has very specific physical properties as to freezing and boiling temperatures, specific heats, latent heats of vaporization and freezing, and on and on; many of them attributable in some way to that 104 deg bend angle in the water molecule and its resultant electrostatic polar moment. Throw in the unique dielectric constant of about 81, which enables water to dissolve most anything, and you have the makings of a universal temperature reference, capable of marshalling the properties of water in its three phases to prohibit earth’s mean temperature from ever exceeding 22 deg C.
So as I have said many times: “IT’S THE WATER, SILLY !”

George E. Smith
March 1, 2010 10:36 am

Well danged if I know what happened to my post, that just vanished off the face of the earth. And when I tried posting it again, since it was still in the comment window, I got a duplicate post error message.
So I not only got my post scrubbed; but got told off for having it scrubbed twice.

Manfred
March 1, 2010 10:58 am

Ivan (07:13:19)
Spencer doesn’t argue that the “real” surface trend is 2 or 2.5 times higher than rural.
He clearly states that his result is not yet adjusted for UHI (and is, even so, lower than CRU):
“This is a little curious since I have made no adjustments for increasing urban heat island (UHI) effects over time, which likely are causing a spurious warming effect, and yet the Jones dataset which IS (I believe) adjusted for UHI effects actually has somewhat greater warming than the ISH data.”

George E. Smith
March 1, 2010 12:00 pm

“”” Manfred (10:58:55) :
Ivan (07:13:19)
Spencer doesn’t argue that the “real” surface trend is 2 or 2.5 times higher than rural.
He clearly states that his result is not yet adjusted for UHI (and is, even so, lower than CRU). “””
Well, when I read about “adjustments for UHI”, I immediately hear alarm bells go off.
There should be NO need to adjust for UHI. Urban heat islands are real places that have real temperatures, which can be read just as easily as the temperature of Foggy Bottom Swamp can be read. The real measured temperature of FBS affects the global mean temperature just as that of a UHI does.
The big problem, and the apparent need for “adjustment” (read: ‘fake data’), lies not in the temperature value measured at FBS or a UHI, but in the quite unwarranted assumption that that temperature reading is a good one to use for some place other than FBS or the UHI. It is not; and it especially is not a good temperature to use for some place that is 1200 km away, or even 900 km away.
If “adjustments” for UHI are deemed necessary, then clearly the function being “adjusted” does not correctly represent the average temperature of the earth or its surface; if it did, adjustments would not be necessary.
So once again the problem is in the sampling, and not in the data.
The temperature read in a UHI, whether or not the obligatory barbecue is running or not, is the correct temperature to use for that place; it is NOT the correct temperature to use to represent someplace else.
The basic problem is quite trivial. You multiply each measured temperature sample by the total area for which that temperature is a good sample; you add those products all up, and divide by the total earth surface area, to get the global mean temperature.
If you aren’t doing that then you aren’t reading the mean earth temperature.
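In code, that scheme is just an area-weighted mean (a sketch; the station values and areas below are purely illustrative):

```python
# Area-weighted mean as described above: each reading weighted by the
# area it actually represents. In practice the weights must tile the
# earth's surface for the result to be a true global mean.
def global_mean_temp(samples):
    """samples: iterable of (temp_c, representative_area_km2) pairs."""
    weighted_sum = sum(t * a for t, a in samples)
    total_area = sum(a for _, a in samples)
    return weighted_sum / total_area

# e.g. three stations standing in for regions of very different size:
print(global_mean_temp([(25.0, 1_000.0), (10.0, 50_000.0), (-5.0, 9_000.0)]))
# -> 8.0
```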

sturat
March 1, 2010 2:08 pm

wrt:
“Menne et al used an incomplete dataset against my wishes, denying my right to publish first. At 88% the network looks a lot different.”
So, when does the world get to take a look at your conclusions? This week? Next week? Next month?
Others are “publishing” their code and results. For example:
http://rankexploits.com/musings/2010/a-simple-model-for-spatially-weighted-temp-analysis/
and
http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
REPLY: When it comes out in a journal, just like Menne et al. Assuming it makes it past the gauntlet of peer review, that should be in a few months. I can’t tell you exactly when, since I have no control over publication. An SI will be published online with all data, and anyone can then engage in any sort of analysis desired. – Anthony

sturat
March 1, 2010 2:46 pm

Fair enough. What’s your estimate of when you will be able to wrap up your analysis, complete the paper, and send it off for review?
The SI you mention, to be published online with all data: would that also include the code? (Sorry, a little pedantic here.)
REPLY: Data, spreadsheets, code, everything needed to replicate. – Anthony

Stephen Wilde
March 1, 2010 2:54 pm

George E Smith (10:29:44)
Well spotted George. And of course to this day the tropical ocean surfaces never go over 22C because that is the temperature set by the sun/ocean/atmospheric density and pressure interaction.
So how must that be maintained?
A constantly changing speed of the hydrological cycle mediated by the size, intensity and latitudinal position of all the global air circulation systems.
Is this all ever going to ‘click’ in the heads of the climate establishment?
Unless the extra CO2 ever becomes sufficient to significantly alter total atmospheric density and pressure, it will be of no significant effect. The work of Miskolczi suggests that even as more CO2 enters the atmosphere, the system reduces total water vapour to compensate and thereby retains a constant optical depth; that neutralises the effects of albedo changes, changes in the quantity of cosmic rays, and ENSO phenomena as well, so quite a few sceptical viewpoints bite the dust along with the idea of CO2 ‘forcing’.
Thus, again, the speed of the hydrological cycle is the fundamental governor, continually adjusting to move back towards an equilibrium in the troposphere despite ever-changing energy flows from the oceans below and from stratosphere to space above.

sturat
March 1, 2010 3:39 pm

Just noticed you didn’t reply with an estimate of current progress and expected paper submittal date.
Can you provide these estimates?
Thanks

Keith Minto
March 1, 2010 4:47 pm

George E. Smith (10:29:44)
Now it would appear that during the carboniferous period, and the boundaries of that era, something really powerful was stopping the earth from warming…..
The answer must have to do with the ‘carboniferous’ part: the dynamics of land- and aquatic-based flora and fauna interacting with the oceans/atmosphere to nurture biota.
We are so fortunate to have had liquid water on this planet for so long.

DeNihilist
March 1, 2010 7:29 pm

George E Smith – thank-you!

Manfred
March 1, 2010 9:42 pm

George E. Smith (12:00:31)
You are right with your points, but I think this doesn’t matter in this context.
As I understood it, Spencer’s analysis did not aim to compute the “correct” temperature trend.
He showed, however, that with an open and straightforward approach and a worst-case maximum-warming assumption (no UHI corrections at all), he computed a lower trend than CRU.
This is sufficient to falsify a hypothesis; he doesn’t have to provide the “correct” answer as well.

George E. Smith
March 2, 2010 10:54 am

“”” Manfred (21:42:42) :
George E. Smith (12:00:31)
you are right with your points, but I think this doesn’t matter in this context. “””
Well I agree with you Manfred; my purpose was to make a basic point; and not so much to comment specifically on Dr Spencer’s Essay. I will have to digest his paper much more thoroughly, before I would be able to comment usefully on it (if at all).
I’m just trying to point out that some of the holy grail tenets of standard climate science, as it still is taught in schools, simply don’t hold water, in the light of day (pun intended).
Now I don’t know beans about what causes all the ocean circulations, and the ENSO and other cycles; so I’ll gladly leave that to those who study such things. But I am quite sure, that you can’t explain the stable range of comfortable temperatures on earth, without invoking the remarkable Physical and Chemical properties of H2O in all its three phases; and I really don’t think CO2 has very much to do with anything.

George E. Smith
March 2, 2010 11:15 am

“”” Stephen Wilde (14:54:01) :
George E Smith (10:29:44)
Well spotted George. And of course to this day the tropical ocean surfaces never go over 22C because that is the temperature set by the sun/ocean/atmospheric density and pressure interaction.
So how must that be maintained ? “””
Well Stephen, you don’t really want to go stepping out there on thin ice.
22 deg C is only 71.6 deg F, and ocean surface waters easily exceed that temperature all the time. I’ve done enough fishing in tropical ocean waters to know that 22 C is not any real limit to surface temperatures.
As to how the equilibrium is maintained; cloud modulation, is my answer.
H2O is the only GHG that exists permanently in earth’s atmosphere in all three phases. As a vapor, it has both cooling and warming properties; the first by absorbing incoming solar energy in the near IR range from about 760 nm wavelength; perhaps as much as 20% of the total solar spectrum energy. That warms the atmosphere, but cools the surface, by lowering ground level insolation.
In the LWIR thermal radiation region, water vapor absorbs in many bands across a wide spectral range, becoming almost totally opaque beyond about 15-16 microns, and that too warms the atmosphere, but blocks very little solar spectrum energy.
But it is in the liquid and solid phases, where H2O forms clouds, that we get the greatest cooling influence on the surface.
When a cloud moves between the sun, and the surface, and casts a shadow, it ALWAYS cools the surface in the shadow zone; it is NEVER observed to warm the surface in the shadow zone.
On the other hand the LWIR thermal emissions from the surface, radiate in a very diffuse radiation pattern, that is at least Lambertian (cosine theta intensity), and more likely near isotropic, since the emitting surface is seldom an optically flat surface (well the ocean surface sometimes can be).
As a result, the same cloud that casts a penumbral edged shadow on the ground, can only intercept a small fraction of the diffuse LWIR emission from that surface, so with broken or scattered clouds, a whole lot of surface IR escapes interception (by the clouds).
With more CO2 or other GHGs, the equilibrium fraction of cloud cover simply increases, to maintain a robustly stable state.