Guest Post By Steve Fitzpatrick
Introduction
Projections of climate warming from global circulation models (GCM’s) are based on a high sensitivity of the Earth’s climate to radiative forcing from well mixed greenhouse gases (WMGG’s). This high sensitivity depends mainly on three assumptions:
1. Slow heat accumulation in the world’s oceans delays the appearance of the full effect of greenhouse forcing by many (e.g. >20) years.
2. Aerosols (mostly from combustion of carbon based fuels) increase the Earth’s total albedo, and have partially hidden the ‘true’ warming effect of WMGG increases. Presumably, aerosols will not increase in the future in proportion to increases in WMGG’s, so the net increase in radiative forcing will be larger for future emissions than for past emissions.
3. Radiative forcing from WMGG’s is amplified by strong positive feedbacks due to increases in atmospheric water vapor and high cirrus clouds; in the GCM’s, these positive feedbacks approximately double the expected sensitivity to radiative forcing.
However, there is doubt about each of the above three assumptions.
1. Heat accumulation in the top 700 meters of ocean, as measured by 3000+ Argo floats, stopped between 2003 and 2008 (1, 2, 3), very shortly after the average global surface temperature changed from rising, as it did through most of the 1990’s, to roughly flat after ~2001. This indicates that a) ocean heat content does not lag many years behind the surface temperature, b) global average temperature and heat accumulation in the top 700 meters of ocean are closely tied, and c) the Hansen et al (4) projection in 2005 of substantial future warming ‘already in the pipeline’ is not supported by recent ocean and surface temperature measurements. While there is no doubt a very slow accumulation of heat in the deep ocean below 700 meters, this represents only a small fraction of the accumulation expected for the top 700 meters, and should have little or no immediate (century or less) effect on surface temperatures. Short ocean heat lags are consistent with relatively low climate sensitivity, and preclude very high sensitivity.
2. Aerosol effects remain (according to the IPCC) the most poorly defined of the man-made climate forcings. There is no solid evidence of aerosol driven increases in Earth’s albedo, and whatever the effect of aerosols on albedo, there is no evidence that the effects are likely to change significantly in the future. Considering the large uncertainties in aerosol effects, it is not even clear if the net effect, including black carbon, which reduces rather than increases albedo, is significantly different from zero.
3. Amplification of radiative forcing by clouds and atmospheric humidity remains poorly defined. Climate models do not explicitly include the behavior of clouds, which occur at scales orders of magnitude smaller than the scale of the models, but instead handle clouds using ‘parameters’ that are adjusted to approximate the expected behavior of clouds. Adjustable parameters can of course also be tuned to make a model predict whatever warming is expected or desired. Measured tropospheric warming in the tropics (the infamous ‘hot spot’), which in the models is caused by increases in atmospheric water content, falls far short of the warming in this part of the atmosphere projected by most GCM’s. This casts doubt on the amplification assumed by the GCM’s due to increased water vapor.
Many people, including this author, do not believe the large temperature increases (up to 5+ C for a doubling of CO2) projected by GCM’s are credible. A new paper by Lindzen and Choi (described at WUWT on August 23, 2009) reports that the total outgoing radiation (visible plus infrared) above the tropical ocean increases when the ocean surface warms, which suggests the climate feedback (at least in these tropical ocean areas) is negative, rather than positive as the GCM’s all assume.
In spite of the many problems and doubts with GCM’s:
1) It is reasonable to expect that positive forcing, from whatever source, will increase the average temperature of the Earth’s surface.
2) Basic physics shows that increasing infrared absorbing gases in the atmosphere like CO2, methane, N2O, ozone, and chloro-fluorocarbons, inhibits the escape of infrared radiation to space, and so does provide a positive forcing.
3) There has in fact been significant global warming since the start of the industrial revolution (beginning a little before 1800), concurrent with significant increases in WMGG emissions from human activities.
There really should be an increase in average surface temperature due to forcing from increases in infrared absorbing gases. This is not to say that there are no other plausible explanations for some or even most of the increase in global temperatures over the past 100+ years. For example, Lean (5) concluded that there may have been an increase of about 2 watts per square meter in average solar intensity (arriving at the top of the Earth’s atmosphere) between the Little Ice Age and the late 20th century, which could account for a significant fraction of the observed warming. But regardless of other possible contributions, it is very difficult to dispute that greenhouse gases should lead to increased global average temperatures. What matters is not whether the Earth will warm from increases in WMGG’s, but how much it will warm and over what period. The uncertainties and dubious assumptions in the GCM’s make them not terribly helpful for reasonable projections of potential warming, even under the worst-case assumption that WMGG’s are the principal cause of warming.
Climate Sensitivity
If we knew the true climate sensitivity of the Earth (expressed as degrees increase per watt/square meter forcing) and we knew the true radiative forcing due to WMGG’s, then we could directly calculate the expected temperature rise for any assumed increases in WMGG’s. Fortunately, the radiative forcing effects for WMGG’s are pretty accurately known, and these can be used in evaluating climate sensitivity. An approximate value for climate sensitivity in the absence of any feedbacks, positive or negative, can be estimated from the change in blackbody emission temperature that is required to balance a 1 watt per square meter increase in heat input, using the Stefan-Boltzmann Law. Assuming solar intensity is 1366 watts/M^2, and assuming the Earth’s average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat. With 1 watt/M^2 more input, the required blackbody emission temperature increases to 255.069, so the expected climate sensitivity is (255.069 – 254.802) = 0.267 degree increase for one watt per square meter of added heat.
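For anyone who wants to check this arithmetic, here is a minimal Python sketch of the calculation; it assumes nothing beyond the Stefan-Boltzmann law and the values stated above (1366 watts/M^2 solar intensity, 0.3 albedo).

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def blackbody_temp(q):
    # temperature (K) a blackbody needs in order to radiate q W/m^2
    return (q / SIGMA) ** 0.25

tsi = 1366.0                      # solar intensity at top of atmosphere, W/m^2
albedo = 0.30
q0 = tsi / 4.0 * (1.0 - albedo)   # net absorbed flux, ~239 W/m^2
t0 = blackbody_temp(q0)           # ~254.8 K
t1 = blackbody_temp(q0 + 1.0)     # ~255.07 K
print(t1 - t0)                    # ~0.267 C per watt/M^2, the no-feedback sensitivity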
But solar intensity and the blackbody emission temperature of the earth both change with latitude, yielding higher emission temperature and much greater heat loss near the equator than near the poles. The infrared heat loss to space goes as the fourth power of the emission temperature, so the net climate sensitivity will depend on the T^4 weighted contributions from all areas of the Earth. Feedbacks within the climate system, both positive and negative, including different amounts and types of clouds, water vapor, changes in albedo, and potentially many others, add much uncertainty.
Measuring Earth’s Sensitivity
The only way to accurately determine the Earth’s climate sensitivity is with data.
Bill Illis produced an outstanding guest post on WUWT November 25, 2008, where he presented the results of a simple curve-fit model of the Earth’s average surface temperature based on only three parameters: 1) the Atlantic multi-decadal oscillation index (AMO), 2) values of the Nino 3.4 ENSO index, and 3) the log of the ratio of atmospheric CO2 concentration to the starting CO2 concentration. Bill showed that the best estimate linear fit of these parameters to the global mean temperature data could account for a large majority of the observed temperature variation from 1871 to 2008. He also showed that the AMO index and the Nino 3.4 index contributed little to the overall increase in temperature during that period, but did account for much of the variation around the overall temperature trend. The overall trend correlated well with the log of the CO2 ratio. In other words, the AMO and Nino 3.4 indexes could hindcast much of the observed variation around the overall trend, and that overall trend could be accurately hindcast by the log of the CO2 ratio.
There are a few implicit assumptions in Bill’s model. First, the model assumes that all historical warming can be attributed to radiative forcing. This is a worst case scenario, since other potential causes for warming are not even considered (long term solar effects, long term natural climate variability, etc.). The climate sensitivity calculated by the model would be lowered if other causes account for some of the measured warming.
Second, the model assumes the global average temperature changes linearly with radiative forcing. While this is almost certainly not correct for Earth’s climate, it is probably not a bad approximation over a relatively small range of temperatures and total forcings. That is, a change of a few watts per square meter is small compared to the average solar flux reaching the Earth, and a change of a few degrees in average temperature is small compared to Earth’s average emissive (blackbody) temperature. So while the response of the average temperature to radiative forcing is not linear, a linear representation should not be a bad approximation over relatively small changes in forcing and temperature.
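A quick numerical check supports this. The sketch below (same blackbody setup as above) compares the exact Stefan-Boltzmann response with the linearized response dT = T/(4Q) * dQ for forcings of a few watts per square meter; the two agree to better than about 1%.

SIGMA = 5.67e-8
q0 = 239.0                         # baseline absorbed flux, W/m^2
t0 = (q0 / SIGMA) ** 0.25
lam = t0 / (4.0 * q0)              # linearized sensitivity, ~0.267 C per W/m^2
for dq in (1.0, 2.0, 4.0):
    exact = ((q0 + dq) / SIGMA) ** 0.25 - t0
    print(dq, exact, lam * dq)     # exact vs. linear, agreeing to better than ~1%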
Third, the model assumes that the combined WMGG forcings can be accurately represented by a constant multiplied by the log of the ratio of CO2 to starting CO2. While this may be a reasonable approximation for some gases, like N2O and methane (at least until ~1995), it is not a good approximation for others, like chloro-fluorocarbons, which did not begin contributing significantly to radiative forcing until after 1950, and which are present in the atmosphere at such low concentration that they absorb linearly (rather than logarithmically) with concentration. In addition, chloro-fluorocarbon concentrations will decrease in the future rather than increase, since most long lived CFC’s are no longer produced (due to the Montreal Protocol), and what is already in the atmosphere is slowly degrading.
To make Bill’s model more physically accurate, I made the following changes:
1. Each of the major WMGG’s is separated and treated individually: CO2, N2O, methane, chloro-fluorocarbons, and tropospheric ozone.
2. Concentrations of each of the above gases are converted to net forcings, using the IPCC’s radiation equations for CO2, methane, N2O, and CFC’s (6), and an estimated radiative contribution from ozone increases (a sketch of this conversion follows this list).
3. The change in solar intensity with the solar cycle is included as a separate forcing, assuming that measured intensity variations for the last three solar cycles (about 1 watt per square meter variation over a base of 1365 watts per square meter) are representative of earlier solar cycles, and assuming that sunspot number can be used to estimate how solar intensity varied in the past.
4. The grand total forcing (including the solar cycle contribution), a 2-year trailing average of the AMO index, and the Nino 3.4 index are correlated against the HadCRUT3v global average temperature data.
This yields a curve fit model which can be used to estimate future warming by setting the Nino 3.4 and AMO indexes to zero (close to their historical averages) and estimating future changes in atmospheric concentrations for each of the infrared absorbing gases.
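As a sketch of step 2 above, the forcing conversion can be written out using the IPCC’s simplified expressions (as given by Myhre et al., 1998); the exact forms and base concentrations used in the model may differ slightly, and the ozone estimate is omitted here. The concentrations in the final line are illustrative, roughly 2008-era values.

import math

def overlap(m, n):
    # CH4/N2O band-overlap term from the IPCC simplified expressions
    return 0.47 * math.log(1.0 + 2.01e-5 * (m * n) ** 0.75
                           + 5.31e-15 * m * (m * n) ** 1.52)

def wmgg_forcing(co2, ch4, n2o, cfc11, cfc12,
                 co2_0=278.0, ch4_0=700.0, n2o_0=270.0):
    # co2 in PPM; ch4, n2o, cfc11 and cfc12 in PPB; returns W/m^2
    f_co2 = 5.35 * math.log(co2 / co2_0)                  # logarithmic in CO2
    f_ch4 = (0.036 * (math.sqrt(ch4) - math.sqrt(ch4_0))
             - (overlap(ch4, n2o_0) - overlap(ch4_0, n2o_0)))
    f_n2o = (0.12 * (math.sqrt(n2o) - math.sqrt(n2o_0))
             - (overlap(ch4_0, n2o) - overlap(ch4_0, n2o_0)))
    f_cfc = 0.25 * cfc11 + 0.32 * cfc12                   # linear, unlike CO2
    return f_co2 + f_ch4 + f_n2o + f_cfc

print(wmgg_forcing(386.0, 1800.0, 322.0, 0.25, 0.54))     # ~2.7 W/m^2 total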

To find the best estimate of lag in the climate (mainly from ocean heat accumulation), the model constants were calculated for different trailing averages of the total radiative forcing. The best fit to the data (highest R^2) was for a two year trailing average of the total radiative forcing, which gave a net climate sensitivity of 0.270 (+/-0.021) C per watt/M^2 (+/-2 sigma). All longer trailing average periods yielded somewhat lower R^2 values and produced somewhat higher estimates of climate sensitivity. A 5-year trailing average yields a sensitivity of 0.277 (+/- 0.021) C per watt/M^2, a 10 year trailing average yields a sensitivity of 0.289 (+/- 0.022) C per watt/M^2, and a 20 year trailing average yields a sensitivity of 0.318 (+/- 0.025) C per watt/M^2, ~18% higher than a two year trailing average. As discussed above, very long lags (e.g. 10-20+ years) appear inconsistent with recent trends in ocean heat content and average surface temperatures.
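The fitting step itself is ordinary least squares. The sketch below is my shorthand for the procedure, not the model’s actual code; real use would feed in the HadCRUT3v, AMO, Nino 3.4 and total-forcing series, and repeat the fit for lags of 2, 5, 10 and 20 years, keeping the lag with the highest R^2.

import numpy as np

def fit_sensitivity(forcing, amo, nino34, temp, lag=2):
    # trailing mean of the total forcing over `lag` years
    f = np.convolve(forcing, np.ones(lag) / lag, mode="valid")
    amo, nino34, temp = amo[lag - 1:], nino34[lag - 1:], temp[lag - 1:]
    X = np.column_stack([f, amo, nino34, np.ones(len(f))])
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    resid = temp - X @ beta
    r2 = 1.0 - resid.var() / temp.var()
    return beta[0], r2   # beta[0] is the sensitivity in C per watt/M^2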
Oscillation in the radiative forcing curve (the green curve in Figure 1) is due to solar intensity variation over the sunspot cycle. The assumed total variation in solar intensity at the top of the atmosphere is 1 watt per square meter (approximately the average variation measured over the last three solar cycles) for a change in sunspot number of 140. Assuming a minimum solar intensity of 1365 watts per square meter and Earth’s albedo at 30%, the average solar intensity over the entire Earth surface at zero sunspots is (1365/4) * 0.7 = 238.875 watts per square meter, while at a sunspot number of 140, the average intensity increases to 239.05 watts per square meter, or an increase of 0.175 watt per square meter. The expected change in radiative forcing (a “sunspot constant”) is therefore 0.175/140 = 0.00125 watt per square meter per sunspot. When different values for this constant are tried in the model, the best fit to the data (maximum R^2) is for ~0.0012 watt/M^2 per sunspot, close to the above calculated value of 0.00125 watt/M^2 per sunspot.
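The “sunspot constant” arithmetic is short enough to reproduce in a few lines:

albedo = 0.30
q_min = 1365.0 / 4.0 * (1.0 - albedo)   # 238.875 W/m^2 at zero sunspots
q_max = 1366.0 / 4.0 * (1.0 - albedo)   # 239.05 W/m^2 at sunspot number 140
print((q_max - q_min) / 140.0)          # 0.00125 W/m^2 per sunspot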


Regional Sensitivities
Amplification of sensitivity is the ratio of the actual climate sensitivity to the sensitivity expected for a blackbody emitter. The sensitivity from the model is 0.270 C per watt/M^2, while the expected blackbody sensitivity is 0.267 C per watt/M^2, so the amplification is 1.011. An amplification very close to 1 suggests that all the negative and positive feedbacks within the climate system are roughly balanced, and that the average surface temperature of the Earth increases or decreases approximately as would a blackbody emitter subjected to small variations around the average solar intensity of ~239 watts/M^2 (that is, as a blackbody would vary in temperature around ~255 K). This does not preclude a range of sensitivities within the climate system that average out to ~0.270 C per watt/M^2; sensitivity may vary based on season, latitude, local geography, albedo/land use, weather patterns, and other factors. The temperature increase due to WMGG’s may have, and indeed should have, significant regional and temporal differences, so the importance of warming driven by WMGG’s should also have significant regional and temporal differences.
Credibility of Model Projections
Some may argue that any curve fit model based on historical data is likely to fail in making accurate predictions, since the conditions that applied during the hindcast period may be significantly different from those in the future. But if the curve fit model includes all important variables, then it ought to make reasonable predictions, at least until/unless important new variables are encountered in the future. Examples of important new climate variables are a major volcanic eruption or a significant change in ocean circulation. The probability of encountering important new variables increases with the length of the forecast, of course. So while a curve-fit climate model’s predictions will have considerable uncertainty far in the future (e.g. 100 years or more), forecasts over shorter periods are likely to be more accurate.
To demonstrate this, the model constants were calculated using temperature, WMGG forcings, AMO, and Nino 3.4 data for 1871 to 1971, but then applied to all the 1871 to 2008 data (Figure 4). The model’s calculated temperatures represent a ‘forecast’ from 1972 through 2008, or 36 years. Since the model constants came only from pre-1972 data, the model has no ‘knowledge’ of the temperature history after 1971, and the 1972 to 2008 forecast is a legitimate test of the model’s performance. The model’s 1972 to 2008 forecast performance is reasonably good, with very similar deviations between the model and the historical temperature record in the hindcast and forecast periods.

The model fit to the temperature data in the forecast period is no worse than in the hindcast period. The climate sensitivity calculated using only 1871 to 1971 data is similar to that calculated using the entire data set: 0.255 C per watt/M^2 versus 0.270 C per watt/M^2. A model forecast starting in 2009 will not be perfect, but the 1972 to 2008 forecast performance suggests that it should be reasonably close to correct over the next 36+ years.
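The out-of-sample test amounts to fitting on the early data and checking residuals on the rest. A sketch (again my shorthand, with X the same hypothetical design matrix as above: trailing-average forcing, AMO, Nino 3.4 and an intercept, and years a matching array of year numbers):

import numpy as np

def holdout_test(X, temp, years, split_year=1972):
    # fit on years before split_year, then 'forecast' the remainder
    train = years < split_year
    beta, *_ = np.linalg.lstsq(X[train], temp[train], rcond=None)
    rmse_hind = np.sqrt(np.mean((temp[train] - X[train] @ beta) ** 2))
    rmse_fore = np.sqrt(np.mean((temp[~train] - X[~train] @ beta) ** 2))
    # comparable errors in the two periods indicate a fit that generalizes
    return beta, rmse_hind, rmse_fore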
Emissions Scenarios
The model projections in Figure 1 (2009 to 2060) are based on the following assumptions:
a) The year on year increase in CO2 concentration in the atmosphere rises to 2.6 PPM per year by 2015 (or about 25% higher than recent rates of increase), and then remains at 2.6 PPM per year through 2060. Atmospheric concentration reaches ~519 PPM by 2060.
b) N2O concentration increases in proportion to the increase in CO2.
c) CFC’s decrease by 0.25% per year. The actual rate of decline ought to be faster than this, but large increases in releases of short-lived refrigerants like R-134a and non-regulated fluorinated compounds may offset a large portion of the decline in regulated CFC’s.
d) The concentration of methane, which has been constant for the last ~7 years at ~1,800 parts per billion, increases by 10 PPB per year, reaching ~2,370 PPB by 2060.
e) Tropospheric ozone (which forms in part from volatile organic compounds, VOC’s) increases in proportion to increases in atmospheric CO2.
The above represent pretty much a “business as usual” scenario, with fossil fuel consumption in 2060 more than 70% higher than in 2008, and with no new controls placed on other WMGG’s. The projected temperature increase from 2008 to 2060 is 0.6834 C, or 0.131 C per decade. This assumes of course that WMGG’s are responsible for all (or nearly all) the warming since 1871; if a significant amount of the warming since 1871 had other causes, then future warming driven by WMGG’s will be less.
Separation of the different contributions to radiative forcing allows projections of future average temperatures under different scenarios for reductions in the growth of fossil fuel usage, with separate efforts to control emissions of methane, N2O, and VOC’s (leading to tropospheric ozone).

One such scenario can be called the “Efficient Controls” scenario. The year on year increase in CO2 in the atmosphere rises to 2.6 PPM by 2014, and then declines starting in 2015 by 0.5% per year (that is, a 2.6 PPM increase in 2014, a 2.587 PPM increase in 2015, a 2.574 PPM increase in 2016, etc.). Methane concentrations are maintained at current levels via controls installed on known sources, CFC concentration falls by 0.5% per year due to new restrictions on currently non-regulated compounds, and N2O and tropospheric ozone increases are proportional to the (somewhat lower) CO2 increases. These are far from small changes, but they probably could be achieved without great economic cost by shifting most electric power production to nuclear (or non-fossil alternatives where economically viable), and simultaneously taxing CO2 emissions worldwide at an initially low but gradually increasing rate to promote worldwide improvements in energy efficiency. Under these conditions, the predicted temperature anomaly in 2060 is 0.91 degree (versus 0.34 degree in 2008), or a rise of 0.109 degree per decade. Atmospheric CO2 would reach ~507 PPM by 2060, and CO2 emissions in 2060 would be about 50% above 2008 emissions. By comparison, the “business as usual” case produces a projected increase of 0.131 C per decade through 2060, and atmospheric CO2 reaches ~519 PPM by 2060. So at (relatively) low cost, warming through 2060 could be reduced by a little over 0.11 C compared to business as usual.
A “Draconian Controls” scenario, with new controls on fluorinated compounds, methane and VOC’s, and with the rate of atmospheric CO2 increase declining by 2% each year, starting in 2015, shows the expected results of a very aggressive worldwide program to control CO2 emissions. The temperature anomaly in 2060 is projected at 0.8 C, for a rate of temperature rise through 2060 of 0.088 degree per decade, or ~0.11 C lower temperature in 2060 than for the “Efficient Controls” scenario. Under this scenario, the concentration of CO2 in the atmosphere would reach ~480 PPM by 2060, but would rise only ~25 PPM more between 2060 and 2100. Total CO2 emissions in 2060 would be ~15% above 2008 emissions, but would have to decline to the 2008 level by 2100. Whether the potentially large economic costs of draconian emissions reductions are justified by a ~0.11C temperature reduction in 2060 is a political question that should be carefully weighed.
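The three CO2 paths are easy to reproduce approximately. The sketch below assumes a 2008 concentration of ~386 PPM and a recent increment of ~2.1 PPM per year (my round numbers, not values taken from the post), ramps the increment to 2.6 PPM per year, and then shrinks it by a fixed fraction each year: 0% gives “business as usual”, 0.5% “Efficient Controls”, and 2% “Draconian Controls”.

def co2_path(decline, c0=386.0, dc0=2.1, ramp_to=2.6,
             start=2009, ramp_year=2015, end=2060):
    # yearly CO2 (PPM): the annual increment ramps linearly to `ramp_to`
    # by `ramp_year`, then shrinks by the fraction `decline` each year
    c, dc, path = c0, dc0, {}
    for year in range(start, end + 1):
        if year <= ramp_year:
            dc = dc0 + (ramp_to - dc0) * (year - start) / (ramp_year - start)
        else:
            dc *= 1.0 - decline
        c += dc
        path[year] = c
    return path

for name, decline in [("business as usual", 0.0),
                      ("efficient controls", 0.005),
                      ("draconian controls", 0.02)]:
    print(name, round(co2_path(decline)[2060]))  # ~519, ~507, ~479 PPM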

Conclusions
The model shows that the climate sensitivity to radiative forcing is approximately 0.27 degree per watt/M^2, based on the assumption that radiative forcing from WMGG’s has caused all or nearly all the measured temperature increase since ~1871. This corresponds to a response of ~1 C for a doubling of CO2 (with other WMGG’s remaining constant). Much higher climate sensitivities (e.g. 0.5 to >1.0 C per watt/M^2, or 1.85 C to >3.71 C for a doubling of CO2) appear to be inconsistent with the historical record of temperature and measured increases in WMGG’s.
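Converting between the two conventions is one line, using the common simplified CO2 forcing expression (5.35 ln(C/C0), which gives ~3.7 watts/M^2 per doubling):

import math

f2x = 5.35 * math.log(2.0)    # ~3.71 W/m^2 for a doubling of CO2
print(0.27 * f2x)             # ~1.0 C per doubling, the model's estimate
print(0.50 * f2x, 1.0 * f2x)  # ~1.85 C and ~3.71 C, the 'high sensitivity' range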
Assuming no significant changes in the growth pattern of fossil fuels, and no additional controls on other WMGG’s, the average temperature in 2060 may reach ~0.68C higher than the 2008 average. Modest steps to control non-CO2 emissions and gradually reduce the rate of increase in the concentration of CO2 in the atmosphere could yield a reduction in WMGG driven warming between 2008 and 2060 of ~15% compared to no action. A rapid reduction in the rate of growth of atmospheric CO2 would be required to reduce WMGG driven warming between 2008 and 2060 by ~30% compared to no action.

Quote: “Longwave radiation, which is the stuff reflected off the surface of the Earth…” If this is so, why is LWR good enough to create natural warming from GHG atmospheric content? Also, no one is saying that oceans are boiling… what we are saying is that LWR increases due to GHG and holds more heat in, which in turn is transferring energy, raising over time the average kinetic energy, which is temperature.
I think it is important here, Pamela, that you see the distinction between temperature and heat. Temperature is the average kinetic energy of all motion and oscillations, while heat is all the internal energy due to motion and oscillations. So, over a period of 150 years of increasing GHG emissions, and about 200 years of research on weather and climate, we see a warming trend, where the correlation is so high, and the observations in physics, chemistry, environmental science, and climatology/meteorology are so clear, there is no doubt AGW is real. Then here we are discussing magnitude; the problem with Steve’s assumptions is that he neglects the physics and chemistry while ignoring the sheer number of peer reviewed papers (in comparison to Willis’ paper and papers like it) showing an ocean warming.
Also he is ignoring the XBT errors reported on as well as ARGO sensor errors recently fixed. However, even if there was a temporary cooling of the oceans, this has been predicted for over 20 years, and in fact it has occurred in the past in the 30 year trend.
Finally, before I get back to work, Steve, I am disputing the heat capacity of the oceans, but you are neglecting temperature changes caused by El Nino, La Nina, chaotic weather systems in general, and mixing changes at the surface down to a meter or so in the oceans. Heat capacity is extremely important, but we are not discussing a closed system, and just as Angstrom made severe errors in his experiments neglecting real world, dynamic, open systems, we must be careful here not to make the same error.
Pamela,
by all means check out the paper on scholar.
Edit: “Finally, before I get back to work, Steve I am ‘NOT’ disputing the heat capacity of the oceans…” (I am also in a rush and these keys stick).
Also Jacob, no small amount of incoming SWR is used by the planet’s biosphere and is not reflected back as LWR. The net budget may or may not calculate this proxy measure. Over the long term (multiple years), the net budget will show warming or cooling tied to natural processes. At issue are the well-controlled studies that all say this: a CO2 signal CANNOT be found in the Net Radiative Budget.
Jacob, please provide correlation coefficients for CO2 vs temp and CO2 vs ENSO.
Pamela,
The ocean is constantly radiating LW upward and receiving back radiation from the atmosphere (including clouds). If there is more back radiation, then the net LWR flux is reduced.
Evaporation (latent heat flux) is large but not infinite, so increased downward flux (both radiant and sensible) can matter.
Where can it matter? Everywhere.
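To put rough numbers on this point (illustrative values, not measurements): the net longwave loss from the surface is the upward emission minus the absorbed back radiation, so more back radiation directly shrinks the net loss.

SIGMA = 5.67e-8
eps = 0.985                      # sea surface emissivity, as quoted in this thread
sst = 300.0                      # assumed sea surface temperature, K
up = eps * SIGMA * sst ** 4      # ~452 W/m^2 emitted upward
for dlr in (380.0, 400.0):       # assumed downwelling longwave, W/m^2
    print(dlr, up - eps * dlr)   # net LW loss shrinks as back radiation rises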
Jacob. No. LWR should be DECREASING under your AGW scenario. LWR is measured at the outer edge of the atmosphere, AFTER it has passed by GHG land. If it is increasing (and I am thinking you typed too fast), we are cooling if the net budget is negative. Please correct me if I am wrong.
Jacob Mack:
M Ablain, A Cazenave, G Valladeau, S (2009)
Their abstract reads:
“A new error budget assessment of the global Mean Sea Level (MSL) determined by TOPEX/Poseidon and Jason-1 altimeter satellites between January 1993 and June 2008 is presented using last altimeter standards. We discuss all potential errors affecting the calculation of the global MSL rate. We also compare altimetry-based sea level with tide gauge measurements over the altimetric period. Applying a statistical approach, this allows us to provide a realistic error budget of the MSL rise measured by satellite altimetry. These new calculations highlight a reduction in the rate of sea level rise since 2005, by 2 mm/yr. This represents a 60% reduction compared to the 3.3 mm/yr sea level rise (glacial isostatic adjustment correction applied) measured between 1993 and 2005. Since November 2005, MSL is accurately measured by a single satellite, Jason-1. However the error analysis performed here indicates that the recent reduction in MSL rate is real.”
This paper has nothing to do with measured ocean heat. It may have been (this is only a guess) a response to claims that the satellite altimeter data (which agree with the Argo ocean heat data which Cazenave et al had earlier reported) may have been incorrect. The authors carefully show that the measured fall in the rate of sea level rise is real.
This article is TOTALLY irrelevant to the discussion of ocean heat, except that it reinforces the author’s earlier publication of no net increase in heat in the oceans from 2003 to 2008. I am most puzzled that you would bring it up, unless you simply did not understand what the paper says.
oms, The oceans do this even at night? That would be a good trick. The oceans can only go so far to release their heat at night till they give up all their LWR, since there isn’t much SWR at night to reflect back out into the air, is there?
Pamela,
The oceans have some radiating temperature (and emissivity around 0.985). So, unless you think the SST is close to absolute zero when the sun goes down, then yes, it keeps doing it all night long.
And no, black body radiation is not the same thing as reflection.
tallbloke (12:23:18) :
“What I have found [empirically] is that people [like you] would gladly use a dataset [even if dubious] if the data agree with their own pet theory, and tend to spread FUD on other data.”
Let’s agree to disagree rather than resort to incivilities and accusations about motivation for which you have not one jot of evidence.
Regardless of your motivation, my empirical observation still stands. You are, it seems to me, living proof of its validity. There is nothing uncivil in making an observation. And it is a normal human reaction. If you started with a dataset and it agreed nicely with your carefully reasoned theory, you would not start out by immediately distrusting the data. You would find that the agreement is strong support for both your theory and the data. Newton originally rejected his own theory [and didn’t publish it] because it disagreed with the data about the distance to the Sun. Then, when he got new measurements that made everything fit, he took that as strong support and validation. Who wouldn’t?
Imagine then that some time after that, even newer data showed that the distance that agreed with the theory was, in fact, wrong. He would, rightfully be suspicious of the revision, same as you.
The way I thought it went was that irradiance was what arrived at the top of the atmosphere, and insolation was what hit the weasel on the ground, whatever the timescale.
Irradiance is what the Sun puts out per unit of time [and area]. Insolation is what the measuring device [the ‘surface’ of the Earth in this case] receives per unit of time [and area]. It is usually not a good idea to mix the two in the same paragraph [or even paper].
Given that trade winds and thunderstorms don’t die down at sundown, how does evaporative heat loss from the oceans at night compare with LWR?
Go here to look at daytime and nighttime LWR differences.
http://www.osdpd.noaa.gov/ml/air/rad_budget.html
Sandy (20:03:17), having lived many years in the deep tropical Pacific, and spent untold hours on and in and under the ocean, I can assure you that both thunderstorms and trade winds die away during the night. Dawn in the tropical oceans is typically calm and clear.
You can see this in the excellent link given by Pamela Gray above. Take a look at paired observations of OLR from any of the satellites. During the day, OLR (outgoing longwave radiation) comes more from the cloud tops, which are cooler. At night, the clouds dissipate, and the radiation comes more from the surface, which is warmer.
Night-time evaporation is much smaller than daytime evaporation, but still goes on. How does night time evaporation compare with OLR (either day or night)? Couldn’t say, but we can assume that OLR is greater. This is because surface radiation (not that seen by the satellites, but from the surface) depends only on temperature. At typical tropical sea surface temperatures (SSTs), this is over 300 w/m2. Evaporation is much smaller than that during the night, in part because the sea is overturning and bringing cooler water to the surface. This reduces OLR, but also reduces evaporation.
w.
Just to add to what Willis Eschenbach wrote above, the latent heat flux through the sea surface has been observed in the tropics to be of order 100-150 W/m^2 (peak around noon, as one might expect).
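For scale, a sketch with assumed tropical values: gross upward emission at typical SSTs is indeed well over 300 W/m^2, which puts the observed 100-150 W/m^2 latent heat flux at roughly a quarter to a third of the gross radiative term (the net longwave loss, after subtracting back radiation, is of course much smaller).

SIGMA = 5.67e-8
eps = 0.985                             # sea surface emissivity
for sst_c in (26.0, 28.0, 30.0):        # typical tropical SSTs, deg C
    t = sst_c + 273.15
    print(sst_c, eps * SIGMA * t ** 4)  # ~447 to ~472 W/m^2 gross upward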
Steve Fitzpatrick makes an all too ubiquitous error:
Assuming solar intensity is 1366 watts/M^2, and assuming the Earth’s average albedo is ~0.3, the net solar intensity is ~239 watts/M^2, requiring a blackbody temperature of 254.802 K to balance incoming heat.
This ignores Kirchhoff’s 150 year old insight that for a gray body, absorptivity must equal emissivity. It has the earth absorbing as a gray body of absorptivity 0.7, but emitting as a black body, 1.0. This is un-physical and produces an error of 1 – absorptivity^(1/4) to the down side, the infamous missing 33 C in the case of the earth. See http://cosy.com/views/warm.htm for a full discussion and correct implementation of Stefan-Boltzmann/Kirchhoff.
I don’t know what to make of any calculations after that because they are working from a false assumption.
Willis Eschenbach (22:30:15) :
Evaporation is much smaller… during the night, in part because the sea is overturning and bringing cooler water to the surface. This reduces OLR, but also reduces evaporation.
Hi Willis, I bow to your superior knowledge of swimming in the tropical ocean regularly, you lucky man, but I’d have thought this would be the other way about.
Won’t the surface cool at night due to conduction/convection, and won’t the overturning action of the waves bring water warmed by the sun during the day to the surface as well as the cooled surface water sinking?
I ask because it seems to me the oceans lose heat to the air all the time. By radiation and latent heat of evaporation in daytime, and by radiation and conduction/convection at night time.
Thanks
Bob Armstrong (00:07:37), “ubiquitous” means appearing or present everywhere at once, so an error could be ubiquitous, but not “all too ubiquitous”. It’s like “more unique”, you can’t get there from here.
In any case, it appears you are confusing the albedo of the earth including clouds with the absorptivity of the earth’s surface. Total albedo is 0.3, which you incorrectly assume means a surface absorptivity of 0.7. It does not.
The temperature of the earth without greenhouse gases depends on what assumptions you make. Steve is calculating it in the common way, using the total albedo of the system, neglecting absorption by the air.
Note that this total albedo already includes the albedo of the surface. Since the absorptivity “e” of the surface is 1 – surface albedo, he has already included the surface absorptivity in his calculations.
Now, you can do it your way, using the emissivity. Start with the top of atmosphere insolation 1366/4 = 341 w/m2. Remove the amount absorbed by the clouds (75 w/m2) and the atmosphere (67 w/m2). This leaves 199 w/m2 that are actually striking the surface.
The average surface absorptivity of the earth is about 0.85. Using Stefan-Boltzmann including surface absorptivity, this gives us a surface temperature of 254 K … Steve’s number again. We can verify this by noting that reflection = 1 – emissivity, or about 0.15. This gives about 30 w/m2 reflected by the surface, which is in agreement with observations.
So no, Steve’s calculation is not wrong, ubiquitously or otherwise.
Best to all,
w.
PS – dear friends, remember significant digits. We can’t say the earth blackbody temperature is “254.802 K”, that’s a bridge too far.
Leif Svalgaard (19:04:25) :
Irradiance is what the Sun puts out per unit of time [and area]. Insolation is what the measuring device [the ‘surface’ of the Earth in this case] receives per unit of time [and area]. It is usually not a good idea to mix the two in the same paragraph [or even paper].
I agree with that. I have been trying to get over to you two main points which involve each of these quantities, and maybe I should have separated the points better for clarity.
Point one. Whichever of the calibrations and assessments of TSI (irradiance) you believe in, the difference in TSI levels between the first and second half of the C20th is more than enough energy to account for the empirically observed acceleration and ensuing deceleration of the thermal expansion of the oceans. Since the air doesn’t heat the ocean, it must be the sun wot done it. To understand and accept how this is necessarily true without having to adopt an unrealistically high sensitivity of the climate to TSI (irradiance) you also need to appreciate…..
*****Gap to separate irradiance from insolation******
……Point two. It appears that there is in addition some kind of terrestrial amplification of changes in TSI, possibly due in large part to long term changes in cloud type, location and cover. This affects surface received insolation.
Corollary to point two: Because of the very dynamic nature of the feedback and exchange processes going on between the biosphere, oceans, atmosphere and Earth surface, solar energy absorbed can get ‘hidden’ from the surface temperature record and redistributed in ways which make an analogy with a black body radiating spaceborne lump of coal or a snowball inadequate to the correct understanding of the Earth.
“Regardless of your motivation, my empirical observation still stands. You are, it seems to me, living proof of its validity. There is nothing uncivil in making an observation. And it is a normal human reaction. If you started with a dataset and it agreed nicely with your carefully reasoned theory, you would not start out by immediately distrusting the data. You would find that the agreement is strong support for both your theory and the data. Newton originally rejected his own theory [and didn’t publish it] because it disagreed with the data about the distance to the Sun. Then, when he got new measurements that made everything fit, he took that as strong support and validation. Who wouldn’t?
Imagine then that some time after that, even newer data showed that the distance that agreed with the theory was, in fact, wrong. He would, rightfully be suspicious of the revision, same as you.”
I think you are mischaracterising the situation in order to cast doubt on my ability to apply the scientific method, and to bolster the status of your own hypothesis, which is strongly contested by other scientists apart from those you list as converts to your cause.
My reconstruction of SST’s from sunspot numbers and LOD variation is unquantified, but scalable and proportionate. This means it can accommodate whatever quantities and magnitudes you care to throw at it. So adjust, revise and minimise variation in TSI as much as you like, my method can cope, just as long as you don’t reduce solar variation to zero.
Steve Fitzpatrick (16:39:46) :
[Tenuc (16:10:04) :
“The amount of heat energy in the oceans is vast and it will take a long time before any trend becomes apparent unless methods and accuracy of measurement improve.”]
Steve Fitzpatrick
“Argo represents an enormous improvement in measurement accuracy compared to pre-2003 data.”
No it doesn’t. Here’s a quote from the Argo website:-
‘Argo has the potential, after careful data assessment, to provide salinity / temperature / pressure profiles that approach ship-based data accuracy.’
So, at best, the accuracy of the Argo data could be approaching that of the poor quality data coming from the ship based method.
Steve Fitzpatrick
“The reported ocean heat trends in the several published papers based on Argo data are pretty clear: the best estimate trends are slightly downward or flat.”
Argument fails as data poor.
Steve Fitzpatrick
“This is independently confirmed by satellite altimetry/ocean mass measurements that show sea level rise over the 2003 to 2008 period was caused almost entirely by increases in ocean water content, not thermal expansion. Prior to 2003, there was significant heat accumulation in the ocean, especially between 1985 and 2002, but this trend stopped in 2003.”
Satellite altimetry/ocean mass measurements are estimates, with the usual problems of orbital decay and instrument drift associated with this data capture method. Also gravity anomalies are difficult to remove. Again, data accuracy in dispute.
Steve Fitzpatrick
“I do not understand why you think further improvements in measurement accuracy and/or much longer measurement times are needed to prove this rather obvious change from the earlier trend. The published uncertainty estimates for ocean heat content are really not that wide; the ocean heat content is known today with more accuracy than ever before.”
The ocean is an inhomogeneous, dynamic, chaotic system – sea surface measurements are a poor proxy for the state of ocean energy at any moment in time.
There are only about 3,000 Argo buoys (one buoy per ~30,000 square miles of ocean – about the size of the Czech Republic). Is it logical to think that with ocean currents on all scales – tidal effects – local variations in insolation due to cloud cover – variations in salinity – effects of local storms – effects of marine life…etc, that total energy estimates are accurate enough to measure a small short-term trend? I don’t think so.
To get better numbers for ocean energy trends, we need better ‘Argo type’ buoys, with each covering ~25 square miles of ocean, along with a system which can measure deep ocean currents on a continuous basis.
The data is far from clear.
tallbloke, yes, living in the tropical Pacific has been berry berry good to me.
You are correct that the ocean loses heat all the time. However, it loses more during the day than the night.
The ocean is opposite to the atmosphere, in that the atmosphere overturns thermally during the day and is thermally stratified at night. The ocean, on the other hand, is stratified during the day and overturns at night.
During the day, the uppermost layer of the ocean warms from sun and downwelling longwave radiation (DLR). The heated water rises because it is less dense, and the warmest water is at the very top. This encourages both evaporation and radiation (OLR).
During the night, you are correct that the surface continues to lose energy through evaporation and radiation. While it still is getting DLR, the lack of sunshine makes for a net heat loss, so it cools overall. As soon as it becomes cooler than the underlying layers, however, the top layer starts sinking, and cooler water is brought to the surface by thermal convection.
Hope this clarifies my often vague writing …
w.
Tenuc (16:10:04), you say:
and
Well … the people doing the research in the field like Josh Willis and Anny Cazenave would beg to differ. They analyze and write about those trends, and find them statistically significant. If you disagree that they are producing statistically significant results, please point to their errors, as a blanket denial doesn’t get you much traction on a scientific site.
The number of ARGO floats (which you decry as inadequate) is on the order of three times as large as the number of ground temperature stations. Since the ocean is a bit more than twice as large as the land, this means ocean coverage is better than ground station coverage.
In addition, the ARGO floats measure temperature in three dimensions rather than two. They measure the vertical temperature profile of the ocean, where ground stations only measure air temperature at a single point.
And while ARGO floats are not error-free (nothing is), they are not subject to the siting errors, barbecues, air-conditioner vents, and UHI errors which plague the ground station record.
As a result, at present we are getting more, and much more accurate, information about the temperature of the ocean from the ARGO system than about the temperature of the atmosphere from the ground stations.
Note also that, unlike the atmosphere, the ocean temperature can be independently verified in three ways: sea level height, length of the day, and satellites.
Sea level goes up and down with temperature. The recent flat-lining of the atmospheric temperature has been matched by the flat-lining of the oceanic temperature. This is verified by the flat-lining of the sea level height.
In addition, as water warms it expands. This makes the equator bulge outwards. Like an ice skater extending her arms, this slows the rotation of the earth. This is used as an indirect measurement of ocean temperature.
Finally, because of the homogeneous nature of water compared with land, we can measure SST by satellite. Once again, this provides us with an independent confirmation of the ARGO system.
Finally, you seem to be confounding two types of ship-based systems when you say that, at best, Argo could only approach ship-based data accuracy. The type of ship-based system the Argo website is referring to is not the “measure the sea temperature in the cooling water inlet” system used to construct global temperature estimates. This system is subject to a host of errors, such as warming due to the inlet pipe and the huge variation in the depth of inlet pipes in various ships.
They are referring to the system of scientific point-sampling of vertical temperature profiles in the ocean. These are done by lowering a thermometer (actually a thermistor) into the ocean and recording the temperature at each depth as it goes down. This system is extremely accurate, and the fact that the ARGO system can approach this accuracy is a tribute to the ARGO system.
w.
TENUC: You write all of this crap about unreliable ARGO data but are perfectly happy to accept ground based temperature data for: a continually changing number of stations, location changes, changes in UHI, thermometer quality changes, different methods of recording, etc.? Unbelievable!
Tenuc (04:36:40) :
I am surprised that you think the Argo data (and apparently satellite altimetry and/or satellite ocean mass data as well) are very uncertain. As far as I can tell, this uncertainty is based on your personal evaluation of the data, rather than on a series of published studies which show why the Argo and satellite data are uncertain. If your analysis clearly shows high uncertainty in these data, then I urge you to consider publishing, or at the very least consider offering a post to Real Climate.
However, after reading your last and several earlier comments again, I wonder if you would assign the same level of doubt to these measurements had they showed a rapid accumulation of heat in the world’s oceans rather than a slight loss of heat. The researchers and groups of researchers who have actually used the Argo and satellite data to calculate the evolution of total ocean heat content since 2003 appear to have a somewhat different view on the quality of this data, since the uncertainty estimates in their published studies have been consistently low.
Since my curve fit model predicts the possibility of radiatively forced warming over the next couple of decades, and since I believe the existing studies have accurately evaluated the reliability of the Argo data, then just for fun, I want to go way out on a limb with a bold prediction:
I predict the Argo based slight decline in ocean heat content trend through 2008 will reverse and turn positive if there is a significant increase in average surface temperature.