Modeling the Oddities

Guest Post by Willis Eschenbach

Today I came across an IPCC figure (AR4 Working Group 1 Chapter 2, PDF, p. 208) that I hadn’t noticed before. I’m interested in the forcings and responses of the climate models. This one showed the forcings, both at the surface and at the top-of-atmosphere (TOA), from the Japanese MIROC climate model hindcast of the 20th century climate.

Now, do you notice some oddities in these two figures? Here’s what caught my eye.

The first oddity I noticed was that the surface forcing from the long-lived greenhouse gases (LLGHG) was so small compared to the top-of-atmosphere LLGHG radiative forcing. At the end of the record, the TOA forcing from LLGHG was just over two watts per square metre (W m-2). The surface forcing from LLGHG, on the other hand, was only about 0.45 W m-2. I don’t understand that.

This inspired me to actually digitize and measure the surface vs TOA radiation for a few of the components. For each W m-2 of TOA radiative forcing from a given source, the corresponding surface forcing was as follows:

Aerosol Direct: up to 15 W m-2 (variable)

Land Use: 1.5 W m-2

Volcanic Eruptions: 0.76 W m-2

Solar: 0.72 W m-2

Cloud Albedo: 0.67 W m-2

LLGHG: 0.21 W m-2

With the exception of the Aerosol Direct, these relationships were stable throughout the record.
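For anyone who wants to repeat the exercise, the calculation behind these ratios is straightforward: digitize both curves, then fit the surface forcing against the TOA forcing for each component. A minimal sketch, using a made-up illustrative series rather than the actual digitized MIROC data:

```python
import numpy as np

# Made-up illustrative series (W m-2), NOT the actual digitized MIROC data.
years = np.arange(1850, 2001)
toa = np.linspace(0.0, 2.1, years.size)  # TOA LLGHG forcing ramping to ~2.1 W m-2
surface = 0.21 * toa                     # surface responding at 0.21 W m-2 per W m-2

# Least-squares slope of surface vs TOA forcing recovers the ratio.
slope = np.polyfit(toa, surface, 1)[0]
print(round(slope, 2))  # 0.21
```

The same fit, run on each digitized component in turn, yields the table above.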

I have no idea why, in their model, one W m-2 of TOA solar forcing has more than three times the effect on the surface that one W m-2 of TOA greenhouse gas forcing has. All suggestions welcome.

The next oddity was that the sum of the radiative forcings for “LLGHG+Ozone+Aerosols+LandUse” is positive, about 1.4 W m-2, while the surface forcing for the same combination is strongly negative, at about -1.4 W m-2. The difference seems to lie in the Aerosol Direct figures. They seem to be saying that the aerosols make little difference to the TOA forcings but a large difference to the surface forcings … which seems possible, but if so, why does “Land Use” not show the same discrepancy between surface and TOA forcing? Wouldn’t a change in land use change the surface forcing more than the TOA forcing? But we don’t see that in the record.

In addition, the TOA Aerosol Direct radiative forcing changes very little during the period 1950-2000, while the corresponding surface forcing changes greatly. How can one change and not the other?
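Just to make the arithmetic of that sign flip concrete, here is a toy tally with round numbers of my own invention (chosen only to roughly respect the ratios listed above, not digitized from Figure 2.23). A large negative Aerosol Direct surface term flips the sum even though the TOA terms net positive:

```python
# Illustrative round numbers (W m-2), NOT read off Figure 2.23.
toa = {"LLGHG": 2.1, "Ozone": 0.3, "Aerosol Direct": -0.8, "Land Use": -0.2}
surface = {"LLGHG": 0.45, "Ozone": 0.2, "Aerosol Direct": -1.75, "Land Use": -0.3}

print(round(sum(toa.values()), 2))      # 1.4  -> net positive at TOA
print(round(sum(surface.values()), 2))  # -1.4 -> net negative at the surface
```

With numbers like these, the aerosols are doing almost all of the work of producing the negative surface total.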

The next (although perhaps not the last) oddity was that the total surface forcing (excepting the sporadic volcanic contribution) generally decreased from 1850 to 2000, with the total forcing (including volcanic) at the end of the period being -1.3 W m-2, versus -0.6 W m-2 in 1950 … why would the total surface forcing decrease over a period during which the temperature was generally rising? I thought perhaps the sign convention for the surface forcings was the reverse of that for the TOA forcings, but a quick examination of the corresponding volcanic forcings shows that the signs are the same. So the mystery persists.

In any case, those are the strangenesses that I found. Anyone with ideas about why any of those oddities are there is welcome to present them. What am I missing here? There’s some part of this I’m not getting.

In puzzlement,

w.

PS – I’m in total confusion regarding the albedo forcings that go all the way back to 1850 … if I were a suspicious man, I might think they just picked numbers to make their output match the historical record. Do we have the slightest scrap of evidence that the albedo changed in that manner during that time? Because I know of none.

PPS – Does anyone know of an online source for the surface and TOA forcing data in those figures?

Ziiex Zeburz
June 5, 2011 1:46 am

With all due respect Sir, if you are confused what chances have we got ?
Perhaps it’s a Japanese thing ?

Les Johnson
June 5, 2011 1:58 am

Willis: I think you have that backwards. Or rather, the authors do. The top panel is TOA. The bottom panel, with the higher forcing, is the surface. According to the caption under the graph.
Of course, that means the legends or the captions are wrong. The graph legends say one thing, the caption under the graph another.
Either way, the authors cocked it up.

Allan M
June 5, 2011 2:04 am

I might think they just picked numbers to make their output match the historical record.
Well, it wouldn’t be the first time.

John Marshall
June 5, 2011 2:04 am

If solar forcings are not following model forecasts then the models are wrong. What is the surprise?
LLGHG forcings lower than solar? I suggest you look again at this poor theory of GHGs.
I prefer the cosmic ray theory, which now seems to be on the way to proof. All forcings can be explained using this. No need of GHGs at all!

KnR
June 5, 2011 2:14 am

In situations like this the answer is simple: because the models said so.

dr.bill
June 5, 2011 2:18 am

Willis,
The caption below the graphs doesn’t agree with the labels used on the graphs themselves. In the caption, it says that the top panel is the radiative component. Perhaps the caption is correct and the labels aren’t.
/dr.bill

Paul Maynard
June 5, 2011 2:29 am

Albedo
I have what might be a dumb question on albedo at the Arctic. A standard MMGW feedback argument is that with warming, the ice melts, causing a change in albedo and so more warming, as less incoming HF radiation is reflected back to space.
Yet I remember from my O level physics that we regard the sun’s rays as travelling in parallel lines. If that is true, the surface at the Arctic is nearly parallel to the rays, and thus they pass through the atmosphere with very limited opportunity to melt the ice.
Regards
Paul

Les Johnson
June 5, 2011 2:53 am

Willis: In my mind, the TOA should be less than surface. If the TOA is less, then that means heat is accumulating (less LWR loss to space). If TOA is higher, that should mean that there is a net cooling.
But I agree that something is wonky besides the captions. The thick black line on both charts should cancel out, as they are about the same value, with one positive and the other negative. Are they saying there was no warming?
At least they show cloud albedo as an increasing negative forcing. That is a first for these models, no?

Richard S Courtney
June 5, 2011 3:00 am

Willis:
You say;
“I have no idea why in their model e.g. one W m-2 of TOA solar forcing has more than three times the effect on the surface as one watt of TOA greenhouse gas forcing. All suggestions welcome.”
Choosing arbitrary values of ‘TOA solar forcing’ and ‘TOA greenhouse gas forcing’ to obtain a curve fit would be consistent with the known practice of choosing arbitrary values of ‘climate sensitivity’ and aerosol negative forcing to obtain a curve fit.
The models use wide ranges of ‘climate sensitivity’ and aerosol negative forcing to obtain reasonable hindcasting of global temperature. The difference between AOGCMs in the values used for these parameters is a factor of ~2.5.
So, it seems reasonable to suggest (n.b. I merely “suggest” because I have no evidence) that
“solar forcing has more than three times the effect on the surface as one watt of TOA greenhouse gas forcing”
in the models is an effect of curve fitting to obtain an acceptable hindcast.
Richard

June 5, 2011 3:02 am

“The first oddity I noticed was that the surface forcing from the long-lived greenhouse gases (LLGHG) was so small compared to the top-of-atmosphere LLGHG radiative forcing.”
In the lower atmosphere there is already a great deal of CO2 and, above all, H2O.
If the CO2 concentration increases, its effect is felt higher up in the troposphere.
IR radiation from the surface is not seen from space (IR windows excluded, of course).

P. Solar
June 5, 2011 3:16 am

It’s difficult to make any sense out of this incoherent mess with labels inverted but there are a couple of things to note.
The LLGHG surface forcing is probably being amplified by the hypothetical positive feedback “somewhere” in the atmosphere. Remember, the basis of all these models requires multiplying “what the science says” the CO2 forcing is by some voodoo science to produce the required TOA imbalance.
It appears that what you are noting is the “climate sensitivity” of this model.
Measuring this response of the model and pretending it is now “data” then allows you to multiply how much global temps will rise by the same number and scare the shit out of the populace.
In answer to your other question: yes, they do actually make most of the forcings up as well, since for most of them no data exist. I saw recently a post that looked into this (sorry, can’t remember a link). Most of them seem to be based on handwavy estimates that things must go up with increasing industrialisation, so they make them up to all look about the same, and it fits the plan, so, fine.
The one where there is some data, black carbon, actually bore no resemblance to the real data and had the same hand-made, man-made century-long trend.

June 5, 2011 3:16 am

wrt albedo change
Goldewijk, K. (2001), Estimating Global Land Use Change Over the Past 300 Years: the HYDE Database, Global Biogeochem. Cycles, 15(02), 417-433.
http://www.mnp.nl/hyde

P. Solar
June 5, 2011 3:23 am

Important error in your first line : the fourth assessment report is called AR4 *not* FAR which refers to First Assessment Report.
I was wondering why you bothered with something that old until I checked what you linked it to.

June 5, 2011 3:39 am

Willis Eschenbach says:
June 5, 2011 at 2:48 am

It’s just that, for the reason you cited as well as other reasons, the change in the actual albedo is not as large as you might initially think.

There is also the inconvenient fact that as the ice melts, the sea itself, which is pretty much always warmer than the ice, is no longer insulated and can radiate heat to be lost to space. Indeed, that could even make a low sea ice coverage (esp the Arctic), a negative feedback.

Julian in Wales
June 5, 2011 3:56 am

Have you considered contacting the authors and asking them to explain?

Ian W
June 5, 2011 4:50 am

“PS – I’m in total confusion regarding the albedo forcings that go all the way back to 1850 … if I were a suspicious man, I might think they just picked numbers to make their output match the historical record. Do we have the slightest scrap of evidence that the albedo changed in that manner during that time? Because I know of none.”
Please Willis such crude language!
The models are parameterized. Each forcing is a guessed value and its effect is a guessed value. The model is then run varying the guessed values until the output appears to match the real world. Any real world data that still does not match the model guessed value is then ‘adjusted’ and the original raw data discarded.

RobJM
June 5, 2011 4:50 am

Considering cloud cover has decreased by 4% during the satellite period, resulting in a large 0.9 W/m2 positive forcing, it seems strange to me that the model shows cloud albedo having a very constant decline over the century, resulting in a negative 0.6 W/m2 forcing.
Talk about making things up!

son of mulder
June 5, 2011 4:53 am

The models can’t predict forward, so how can they hindcast? If you force-fit the hindcast to the historical actuals based on a preconception of how the model should work, I’d suspect some of the input parameters/forcings would need to be pretty weird. But hey, it’s the “right answer” that counts, not reality, science or chaotic behaviour; just the message. Lesson is: don’t look at the data, it’s not designed for that.

Jack Green
June 5, 2011 5:02 am

These modelers are trying to simulate something that cannot be simulated for lack of data. There simply is not enough quality data to do it. That being said, we learn a lot about the system by trying. The modelers have been caught making up data and starting with the answer. Of course the tired old saying applies here: garbage in, garbage out. Forcings = fudge factors. Who decides which fudge forcing to use? That’s the question.

Richard S Courtney
June 5, 2011 5:16 am

Julian in Wales:
At June 5, 2011 at 3:56 am you ask the reasonable question;
“Have you considered contacting the authors and asking them to explain?”
But there is a problem with that.
IPCC AR4 Chapter 2 which Willis is questioning says:
“As an example, the temporal evolution of the global and annual mean, instantaneous, all-sky RF and surface forcing due to the principal agents simulated by the Model for Interdisciplinary Research on Climate (MIROC) + Spectral Radiation-Transport Model for Aerosol Species (SPRINTARS) GCM (Nozawa et al., 2005; Takemura et al., 2005) is illustrated in Figure 2.23.”
At the end of that Chapter, the references indicate that Figure 2.23 is illustrating information obtained from:
Nozawa, T., T. Nagashima, H. Shiogama, and S.A. Crooks, 2005: Detecting natural influence on surface air temperature change in the early twentieth century. Geophys. Res. Lett., 32, L20719, doi:10.1029/2005GL023540.
Then, looking for that paper one soon discovers it is behind a $25 pay wall.
It is not known whether the unknown authors of IPCC AR4 Chapter 2 generated the graphs of their Figure 2.23 themselves, as their statement (above) suggests, or copied them from the reference.
So, without spending $25, whom does one ask? And how long will it take to get answers?
Richard

Michael Jankowski
June 5, 2011 5:32 am

I would say there’s some substantial “tuning” involved here. The caption to the figures says “as simulated.” The text that refers to this set of figures is at http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-9-5.html and says:
…Radiative forcing time series for the natural (solar, volcanic aerosol) forcings are reasonably well known for the past 25 years; estimates further back are prone to uncertainties (Section 2.7). Determining the time series for aerosol and ozone RF is far more difficult because of uncertainties in the knowledge of past emissions and chemical-microphysical modelling. Several time series for these and other RFs have been constructed (e.g., Myhre et al., 2001; Ramaswamy et al., 2001; Hansen et al., 2002). General Circulation Models develop their own time evolution of many forcings based on the temporal history of the relevant concentrations. As an example, the temporal evolution of the global and annual mean, instantaneous, all-sky RF and surface forcing due to the principal agents simulated by the Model for Interdisciplinary Research on Climate (MIROC) + Spectral Radiation-Transport Model for Aerosol Species (SPRINTARS) GCM (Nozawa et al., 2005; Takemura et al., 2005) is illustrated in Figure 2.23. Although there are differences between models with regards to the temporal reconstructions and thus present-day forcing estimates, they typically have a qualitatively similar temporal evolution since they often base the temporal histories on similar emissions data…

Steve McIntyre
June 5, 2011 5:49 am

The first oddity I noticed was that the surface forcing from the long-lived greenhouse gases (LLGHG) was so small compared to the top-of-atmosphere LLGHG radiative forcing. At the end of the record, the TOA forcing from LLGHG was just over two watts per square metre (W m-2). The surface forcing from LLGHG, on the other hand, was only about 0.45 W m-2. I don’t understand that.

Look at some of the original Ramanathan articles on this.

Artr
June 5, 2011 5:56 am

A question on the albedo of water. I thought that when the angle of incidence was very low the albedo of water increased tremendously. In the high Arctic and Antarctic the sun is always low to the horizon, so wouldn’t the change from ice to open water have little effect?
Just thinking.
ArtR

NicL_UK
June 5, 2011 6:01 am

Willis
I also had been examining this graph and had at first been confused by the labels!
Chapter 2.2. of IPCC AR4 Working Group 1 report says:
“This chapter also uses the term ‘surface forcing’ to refer to the instantaneous perturbation of the surface radiative balance by a forcing agent. Surface forcing has quite different properties than RF and should not be used to compare forcing agents.” (RF being radiative forcing)
PS May I ask what graph digitisation software you used? The one I tried was more bother than it was worth and I ended up digitising parts of the RF graph manually!

golf charley
June 5, 2011 6:16 am

Re post at Judith Curry’s site?

Pamela Gray
June 5, 2011 6:52 am

The largest forcing would, in my opinion, be SST combined with equatorial wind direction. In a word, El Nino. Which part of the model is a measure of the El Nino forcing on land temperature trend? El Nino (and the lack thereof) plays havoc with mathematical constructions of all these other parameters, and has by far, the greater energy potential.
For that matter, any other warm or cool ocean basin oscillation would also force land temperature changes. And we all know that long wave radiation from GHG under ideal conditions (calm surface) cannot affect SST’s to any degree worth putting into a model (the very weak signal from the skin only warmed surface could be safely ignored). And longwave heating of land air goes away at night (nocturnal radiative cooling) so we can ignore that too.
So we are talking about whether the sea surface is peeling back its layer of naturally Sun-warmed surface water, or is silently soaking up the shortwave radiation deep into its upper layer, to be carried along the major currents, setting up on-shore weather systems that warm or cool the land.
To me, the secret force is in which way the wind blows over the sea.

Coldish
June 5, 2011 7:06 am

It seems clear to me from the IPCC text accompanying Fig 2.23 that the labels on the figure are correct, so the caption is wrong.
The co-ordinating lead authors (i.e. they chose the content) for that chapter of AR4 (Ch 2, ‘Changes in Atmospheric Constituents and in Radiative Forcing’) were P. Forster (University of Leeds, UK) and V. Ramaswamy (NOAA, USA).

Dave Springer
June 5, 2011 7:08 am

Paul Maynard says:
June 5, 2011 at 2:29 am

Albedo
I have what might be a dumb question on albedo at the Arctic. A standard MMGW feedback argument is that with warming, the ice melts causing a change in albedo and so more warming as less incoming HF radiation is reflected back space.
Yet I remember from my O level physics that we regard the suns ray as travelling in parallel lines. If that is true, the surface at the Arctic is virtually parallel to the rays and thus they pass through the atmosphere with very limited opportunity to melt the ice.

Albedo at the poles makes little difference, as there’s very little insolation in the first place. In the second place, the ocean has an albedo near zero when the insolation is perpendicular to the surface, but the albedo increases drastically as the rays move closer to parallel.
What makes a big difference is the variable sea ice extent in the Arctic. Ice insulates the surface of the ocean. Where there is no ice cover the ocean can vent heat to space rapidly, but when there is ice covering the surface it cannot. Thus Arctic sea ice works opposite to the way ice works at lower latitudes. Less ice results in global cooling because the Arctic “radiator” (it functions much the same as a radiator in an automobile, with warm water from the tropics releasing heat when it circulates up to the poles via the oceanic conveyor belt) becomes more efficient. At lower latitudes, where the sun’s rays are not nearly so oblique, the albedo difference between ice and no ice is the dominant influence.
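The “radiator” argument can be roughed out with the Stefan-Boltzmann law. A back-of-envelope sketch, assuming emissivity ~1 and temperatures of my own choosing (open water near the freezing point of seawater vs. a much colder winter ice surface); this ignores conduction through the ice and the atmosphere above, so it only bounds the effect:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m2 K4)

# Illustrative temperatures, not measurements: open water near the freezing
# point of seawater (~271 K) vs a cold winter ice surface (~243 K).
open_water = SIGMA * 271.0 ** 4
ice_surface = SIGMA * 243.0 ** 4

print(round(open_water))   # 306 W m-2 emitted by open water
print(round(ice_surface))  # 198 W m-2 emitted by the ice surface
```

On these crude assumptions the open water radiates roughly 100 W m-2 more than the ice surface, which is the sense in which less ice means a more efficient radiator.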

Dave Springer
June 5, 2011 7:14 am

Jer0me says:
June 5, 2011 at 3:39 am
“There is also the inconvenient fact that as the ice melts, the sea itself, which is pretty much always warmer than the ice, is no longer insulated and can radiate heat to be lost to space. Indeed, that could even make a low sea ice coverage (esp the Arctic), a negative feedback.”
Correct. Glad to see others here can work through it.

Don K
June 5, 2011 7:31 am

Jer0me says:
June 5, 2011 at 3:39 am
“There is also the inconvenient fact that as the ice melts, the sea itself, which is pretty much always warmer than the ice, is no longer insulated and can radiate heat to be lost to space. Indeed, that could even make a low sea ice coverage (esp the Arctic), a negative feedback.”
Another factor is that low angle radiation — and most solar radiation in the Arctic is going to be low angle — seems to be mostly reflected by water, not absorbed. To test this, the next time you come to a puddle/pool/lake, focus on a spot on the water more than a few body lengths in front of you. What you will almost certainly see is a reflection of what is on the other side of the water. Observe how close you have to get before objects/markings on the bottom start to be visible through the reflected light. Of course if the water is rough, the angles will be more favorable for absorption part of the time. But I still think that the amount of solar absorption by Arctic sea water may be being overestimated.
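The grazing-angle point can be quantified with the Fresnel equations for unpolarized light, assuming a refractive index of ~1.33 for water (a smooth-surface idealization; waves and diffuse sky light change the picture):

```python
import math

def water_reflectance(theta_deg, n=1.33):
    """Unpolarized Fresnel reflectance of a smooth air-water surface."""
    ti = math.radians(theta_deg)      # angle of incidence from the vertical
    tt = math.asin(math.sin(ti) / n)  # angle of refraction (Snell's law)
    rs = ((math.cos(ti) - n * math.cos(tt)) / (math.cos(ti) + n * math.cos(tt))) ** 2
    rp = ((math.cos(tt) - n * math.cos(ti)) / (math.cos(tt) + n * math.cos(ti))) ** 2
    return (rs + rp) / 2

print(round(water_reflectance(0) * 100))   # 2  -> ~2% reflected, sun overhead
print(round(water_reflectance(80) * 100))  # 35 -> ~35% reflected at 10 deg elevation
```

So a calm sea under a 10-degree sun reflects a substantial fraction of the beam, consistent with the puddle test above.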

keith at hastings uk
June 5, 2011 7:39 am

You couldn’t make it up! Oh, well, uhm, hmmm, I suppose they just did, actually. Clang.

Theo Goodwin
June 5, 2011 7:48 am

Willis writes:
“PS – I’m in total confusion regarding the albedo forcings that go all the way back to 1850 … if I were a suspicious man, I might think they just picked numbers to make their output match the historical record. Do we have the slightest scrap of evidence that the albedo changed in that manner during that time? Because I know of none.”
Willis, I am surprised at you! Have you not learned that Earth existed in a Golden Age until the 1960s when hippies realized that capitalists had undertaken a plan to destroy it. This plan was exposed on the first Earth Day in 1970 and hippies such as James Hansen have made heroic efforts since then to end capitalism. So, Earth’s albedo was constant and perfect until 1970. /sarc
What the authors have proved is that they now own a supercomputer, or a timeshare, and they have figured out how to get a standard climate model to solve. Why they would publish this fact is a mystery.

LazyTeenager
June 5, 2011 8:01 am

JerOme reckons
__________
itself, which is pretty much always warmer than the ice,
__________
And what do you think the temperature of the water under the ice is, Jerome?
I bet you, based on some well-known physics, that it’s the same temperature as the ice.

Michael D Smith
June 5, 2011 8:03 am

I hit a wall trying to find anything to match forcings on aerosols. I think there might be evidence for volcanoes, but many of these values have “hockey stick” written all over them, especially aerosols. If you look at the various charts, it seems that some are just fudge factors used to make the total of all the others plus CO2 forcing come out to the “correct” value. Unfortunately, even the “correct” value of forcing is not cooperating, so I suspect aerosols will take the blame (China’s coal plants are spewing too much SO2 and making it cooler, and once they install scrubbers we’ll all roast to death in a fiery Armageddon created directly from the sins of our carbon footprint; send money; blah, blah).
If you look at the surface forcing, it goes with the negative of population (approx), but actual forcing should have been vastly worse in the 1960’s and 1970’s, should have leveled off or decreased (when we started scrubbing and reducing automotive smog, which indeed made a huge difference), then started increasing again with industrialization of China and India. Instead it is smooth, which I doubt.
I don’t see any evidence for highly negative aerosol forcing at all, it gets washed out too fast in the atmosphere, though persistently high emissions would support a somewhat higher level. Rather, it is merely a convenient scapegoat propping up the theory of high CO2 forcing and high sensitivity. In other words, if you reduce CO2 forcing to a reasonable level (that has some correspondence to reality), you don’t NEED to have an offsetting aerosol fudge factor to make the observed temperatures fit a model.
I’ve had a hard time finding sources, but I see from the report where I can look next. If you find any good data, please post a link.
Oops, now I see why there is no data. From page 208:
“Determining the time series for aerosol and ozone RF is far more difficult because of uncertainties in the knowledge of past emissions and chemical-microphysical modelling. Several time series for these and other RFs have been constructed (e.g., Myhre et al., 2001; Ramaswamy et al., 2001; Hansen et al., 2002). General Circulation Models develop their own time evolution of many forcings based on the temporal history of the relevant concentrations. As an example, the temporal evolution of the global and annual mean, instantaneous, all-sky RF and surface forcing due to the principal agents simulated by the Model for Interdisciplinary Research on Climate (MIROC) + Spectral Radiation-Transport Model for Aerosol Species (SPRINTARS) GCM (Nozawa et al., 2005; Takemura et al., 2005) is illustrated in Figure 2.23. Although there are differences between models with regards to the temporal reconstructions and thus present-day forcing estimates, they typically have a qualitatively similar temporal evolution since they often base the temporal histories on similar emissions data.”
Translation: It was model generated. Excellent! Sure saves a lot of time and instrumentation that way. Certainly it’s a far more robust way to get what you want…
I’d love to see the article “First Light on the Aerosol Hockeystick”… Once again, great stuff, Willis… Thanks.
http://wattsupwiththat.com/2011/05/09/first-light-on-the-ozone-hockeystick/
(Michael D Smith says:
May 9, 2011 at 5:50 pm
Just wait until you get to aerosols…)

Kelvin Vaughan
June 5, 2011 8:20 am

I’m not sure why but looking at the graphs makes me feel rather cold.

June 5, 2011 8:35 am

I noticed that the simulation stopped in 2000.
I wonder how it would match the lack of warming from 1998 to the present. With the high simulated GHG warming, how does it account for that?
The simulation must fail miserably from 1998 to the present.

ferd berple
June 5, 2011 8:44 am

“The model is then run varying the guessed values until the output appears to match the real world.”
Almost, but not true in fact. The models are run and the parameters varied until the models match what the experimenters believe to be the real world. Thus, any results that do not match what the experimenters believe to be true will be rejected, even if in fact they are true physically. As well, any results that match what the experimenters believe to be true will be incorporated into the models, even if in fact they are physically impossible.
This has long been recognized in medicine and other sciences and has resulted in double blind testing to eliminate contamination of the results by experimenter bias. Climate Science has not done this and as a result the models cannot be assumed to be modelling the real world. It is much more likely that the models are in fact simply modelling the experimenter’s belief about climate.
This same process is also true of the IPCC report. The report is not assembled using experimental controls. Rather, it is assembled to match what the authors and governments believe to be true. Data that does not match their beliefs is rejected, and data that matches those beliefs is included, even if the source can clearly be shown to be no more than propaganda, with no basis in fact.
Thus the increasing reliance of the IPCC on non-peer-reviewed material, and the recent policy announcement by the IPCC that they would not flag non-peer-reviewed material in the report because it would be “too much work.” This policy directly contradicts the instructions given to the IPCC in the last review. As such, a more likely explanation is that the IPCC will not flag non-peer-reviewed material because they know it would weaken the case for AGW in the IPCC’s next report.

Alcheson
June 5, 2011 9:40 am

Concerning oddities, the CSU sea level page writes this:
” How often are the global mean sea level estimates updated?
We update the sea level data approximately bimonthly (every two months). The altimeter data are released by NASA/CNES as a 10-day group of files corresponding to the satellite track repeat cycle (10 days). There is also a two-month delay between the time the data are collected on the satellite to their final product generation (known as a final geophysical data record (GDR)). ”
Why is it their sea levels haven’t been updated since January?

wayne
June 5, 2011 9:48 am

Paul Maynard says:
June 5, 2011 at 2:29 am
Albedo
I have what might be a dumb question on albedo at the Arctic. A standard MMGW feedback argument is that with warming, the ice melts causing a change in albedo and so more warming as less incoming HF radiation is reflected back space.
Yet I remember from my O level physics that we regard the suns ray as travelling in parallel lines. If that is true, the surface at the Arctic is virtually parallel to the rays and thus they pass through the atmosphere with very limited opportunity to melt the ice.
—-
Paul, your statement would agree with what NASA found via MODIS (I think): that albedo does not change meaningfully whether there is ice or not in the Arctic. NASA was surprised, according to the story. There is a NASA article on that, but I have no idea where. I read it around last summer.

conor
June 5, 2011 9:57 am

First, the paper “H. Shiogama, and S.A. Crooks, 2005: Detecting natural influence on surface air temperature change in the early twentieth century.” is available for download via GoogleScholar, but that doesn’t help much, because the paper just reported that they made a model, and doesn’t contain these time-evolution RF graphs…
…which are really interesting! When I first read AR4 I wondered why people weren’t making a bigger deal out of these time series. Also, take a look at spatial patterns of surface and TOA forcing! Weird…they say “Note that the spatial pattern of the forcing is not indicative of the climate response pattern.” presumably because surface temperatures are dependent on weather, not just surface heating; “It should be noted that a perturbation to the surface energy budget involves sensible and latent heat fluxes besides solar and longwave irradiance; therefore, it can quantitatively be very different from the RF, which is calculated at the tropopause, and thus is not representative of the energy balance perturbation to the surface-troposphere (climate) system. While the surface forcing adds to the overall description of the total perturbation brought about by an agent, the RF and surface forcing should not be directly compared nor should the surface forcing be considered in isolation for evaluating the climate response (see, e.g., the caveats expressed in Manabe and Wetherald, 1967; Ramanathan, 1981).”
Differences between the surface and TOA forcing are relevant to the predicted “tropospheric hotspot”. If the heat isn’t leaving the TOA and isn’t transmitted to the surface, it must be absorbed in the troposphere. Where’s the heat?
The AR4 is careful to surround these figures in caveats, but they seem to want to have it two ways: “…the location of maximum RF is rarely coincident with the location of maximum [climate, temperature] response (Boer and Yu, 2003b)…. Identification of different patterns of response is particularly important for attributing past climate change to particular mechanisms…”

Mark Hladik
June 5, 2011 9:57 am

Apologies in advance, my time to scan the comments was limited, so if this has been brought up before, then please excuse the redundancy:
I noticed right away this factor of “Cloud Albedo”, and there is one curve for LT (lower Troposphere) and one for TOA (Top of the Atmosphere).
I must be confused on what the ‘top of the atmosphere’ is; I was thinking that as an arbitrary figure, we might consider something between 100 and 200 km above the surface for the ‘top of the atmosphere’, since there is no “definable”, “demarcatible”, “fixed-in-time-and-space” ‘top of the atmosphere’.
So, whatever the ‘top of the atmosphere’ actually is, I was under the impression that there were no clouds there at all, since clouds are a lower-troposphere and lower-stratosphere phenomenon (the latter only when towering CBs punch through the tropopause).
Call me confused, and/or misinformed!
Mark H.

Jeff Mitchell
June 5, 2011 10:26 am

Willis wrote:
“PS – I’m in total confusion regarding the albedo forcings that go all the way back to 1850 … if I were a suspicious man, I might think they just picked numbers to make their output match the historical record. Do we have the slightest scrap of evidence that the albedo changed in that manner during that time? Because I know of none.”
I’d side with the theory they made the numbers up. From 1850? Give me a break. I have an atlas published in 1880 that has the middle of Africa labeled “unknown interior”. Most of the surface of the planet wasn’t monitored the way it can be today. Before 1957 there were no satellites to measure cloud cover or ice extent. The reason you know of no historical record is most likely because there is none. Whoever provided the data needs to show where it came from.
Last, there is no “if” about it. You are a suspicious man. 🙂 With lots of good reasons to be. It’s the scientific way.

John M
June 5, 2011 11:03 am

Mark H.
Google is your friend.

top of atmosphere:
a given altitude where air becomes so thin that atmospheric pressure or mass becomes negligible. TOA is mainly used to help mathematically quantify Earth science parameters because it serves as an upper limit on where physical and chemical interactions may occur with molecules in the atmosphere. The actual altitude used for calculations varies depending on what parameter or specification is being analyzed. For example, in radiation budget, TOA is considered 20 km because above that altitude the optical mass of the atmosphere is negligible. For spacecraft re-entry, TOA is rather arbitrarily defined as 400,000 ft (about 120 km). This is where the drag of the atmosphere starts to become really noticeable. In meteorology, a pressure of 0.1 mb is used to define this location. The actual altitude where this pressure occurs varies depending on solar activity and other factors

http://mynasadata.larc.nasa.gov/glossary.php?&letter=T

Alexander Harvey
June 5, 2011 11:05 am

Willis:
The difference between the two boundary forcings (surface and TOA) should be the volumetric forcing between the boundaries.
In the case of tropospheric aerosols, I believe the direct effect is via absorption and the indirect effect via reflection. So the first should have little TOA effect.
Try subtracting surface from TOA forcings and see if the difference makes sense as an atmospheric heat uptake rate. I haven’t tried, so I can’t say but I think it should.
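Willis’s digitized per-watt ratios from the head of the post make this subtraction easy to sketch (a hedged illustration: the numbers are the post’s digitized values, the variable names are mine, and Aerosol Direct is left out because the post says its ratio is variable):

```python
# For each stable forcing agent, the post gives the surface forcing per
# 1 W/m^2 of TOA forcing; TOA minus surface is then the implied
# "volumetric" (atmospheric) forcing per TOA watt.
surface_per_toa_watt = {
    "Land Use": 1.5,
    "Volcanic": 0.76,
    "Solar": 0.72,
    "Cloud Albedo": 0.67,
    "LLGHG": 0.21,
}

for agent, surf in surface_per_toa_watt.items():
    atm = 1.0 - surf  # implied atmospheric heat uptake per TOA watt
    print(f"{agent:12s}: surface {surf:+.2f}, atmosphere {atm:+.2f}")
```

On these numbers the LLGHG difference (+0.79 W/m^2 in the atmosphere per TOA watt) would indeed be a large atmospheric uptake, while Land Use comes out negative (the surface gains more than arrives at TOA).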
Alex

Alexander Harvey
June 5, 2011 11:11 am

Update
I think I should have said partially by absorption. That is the general case, I think it is almost totally by absorption for brown cloud.
Alex

Matt G
June 5, 2011 11:54 am

This is from skepticalscience (which really should be called non-skepticalscience):
http://www.skepticalscience.com/earth-albedo-effect.htm
The skeptic argument…
It’s albedo
“Earth’s Albedo has risen in the past few years, and by doing reconstructions of the past albedo, it appears that there was a significant reduction in Earth’s albedo leading up to a lull in 1997. The most interesting thing here is that the albedo forcings, in watts/sq meter seem to be fairly large. Larger than that of all manmade greenhouse gases combined.” (Anthony Watts)
What the science says… (according to skepticalscience)
The long term trend from albedo is that of cooling. In recent years, satellite measurements of albedo show little to no trend.
Oh dear, the reason global temperatures aren’t rising is exactly that there has been very little trend. “The long term trend from albedo is that of cooling”? Nonsense if they mean since the 1980s; fair enough if they mean since about 2001.
http://img854.imageshack.us/img854/5246/globaltempvglobalcloudb.png
But wait: global temperatures have been cooling since 2001, just as the albedo trend shows, and likewise the albedo should have shown a warming trend from 1983 up until that point.
http://www.woodfortrees.org/plot/uah/from:2001/normalise/plot/rss/from:2001/normalise/plot/gistemp/from:2001/normalise/plot/hadcrut3gl/from:2001/normalise
The only reason there has not been more cooling is that there have been more El Niños so far during this period, including one of the strongest on record recently, and this could change soon.
Cloud albedo forcing seems to be about 0.9 W/m2 just since the 1980s (satellite data), and the model shown in this topic looks like it automatically presumes that increasing temperature = decreasing cloud albedo. I can’t find any data sources going back further than the satellite era.

maksimovich
June 5, 2011 1:51 pm

Look at some of the original Ramanathan articles on this.
Try Ramanathan 1974
http://i255.photobucket.com/albums/hh133/mataraka/ramanathanco2x.jpg?t=1242018280
They use a 1d RCM

June 5, 2011 2:15 pm

dr.bill says:
June 5, 2011 at 2:18 am
The caption below the graphs doesn’t agree with the labels used on the graphs themselves. In the caption, it says that the top panel is the radiative component. Perhaps the caption is correct and the labels aren’t.

It might be like the case of the absent-minded math professor. He says a, writes b, thinks it’s c while d is the correct one.
Otherwise resistance is futile, you will be assimilated. All your base are belong to us. You have no chance to survive make your time. Somebody set us up the bomb. For great justice.

don penman
June 5, 2011 2:24 pm

What does “a low level of scientific understanding” mean? It is a phrase often used in this paper. I think that the tilt of the Earth’s axis with respect to its orbit around the Sun can’t be ignored, at least at mid latitudes, where we also see an accumulation of snow every winter. Summer is usually warmer than winter at mid latitudes because the Sun is higher in the sky at noon in summer than in winter.

John Bromhead
June 5, 2011 3:41 pm

“I’m not sure why but looking at the graphs makes me feel rather cold”, said Kevin Vaughn.
Me too. I think it’s all those icicles hanging off the volcanoes.

Colin Davidson
June 5, 2011 6:11 pm

I have a poor understanding of the climate system.
I use the figures in Kiehl & Tranberth 1997 (see http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-1-1-figure-1.html ).
What would a hotter but equilibrium world be like? I assume that the conductive term (wrongly labelled in K&T as “thermals”) will be the same, since the temperature difference between the surface and the air in contact with it should be unchanged; there is no reason to assume otherwise.
The evaporative term increases by an unknown amount. It could be anything, but most agree it lies between about 2% per DegC (the models go for this) and 7% per DegC. The radiative term increases in accordance with Stefan-Boltzmann.
Putting all that together I get a surface sensitivity to a change in Surface Forcing of between 0.09 and 0.15DegC/W/m^2.
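Colin’s range can be reproduced with a back-of-envelope script (a sketch under his stated assumptions: the K&T97 fluxes, conduction held fixed, and evaporation growing at 2–7% per DegC):

```python
# Surface sensitivity = 1 / (dF/dT), where dF/dT combines the radiative
# response 4*sigma*T^3 and the evaporative response (2-7% per K of the
# ~78 W/m^2 latent heat flux); conduction is held constant.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
T_SURF = 288.0    # mean surface temperature, K
EVAP = 78.0       # Kiehl & Trenberth 1997 latent heat flux, W/m^2

d_rad = 4 * SIGMA * T_SURF ** 3          # ~5.4 W/m^2 per K
for growth in (0.02, 0.07):
    sens = 1.0 / (d_rad + EVAP * growth)
    print(f"evaporation +{growth:.0%}/K -> {sens:.3f} DegC per W/m^2")
```

This lands on roughly 0.09–0.14 DegC per W/m^2, essentially the range quoted.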
From the IPCC diagrams cited by Willis, the relationship between GHG-caused Radiative Forcing and GHG-caused Surface Forcing is about 4:1. This means that for a change in Radiative Forcing of 4 W/m^2, the modellers expect a change in Surface Forcing of about 1 W/m^2.
Putting all that together, I would expect a change in Surface Temperature of between 0.095 and 0.15 DegC for a doubling of CO2.
I simply cannot get the necessary increase in Back Radiation of between 16 and 26 W/m^2 required to bring the surface temperature up by 3DegC.
As I say, I don’t understand the climate system.

Bill Illis
June 5, 2011 6:50 pm

The biggest difference is TOA versus the surface.
GHGs provide a large positive forcing at the surface but once you get to the TOA, there is less positive forcing there.
It is important to understand that there is not a standard definition of TOA and different groups/charts use different atmospheric levels for the TOA (sounds wrong but that is true). But you could think of it as GHGs provide substantial warming at the surface but they actually provide cooling in the stratosphere (close to the TOA) since they are holding back radiation that would normally warm the stratosphere.
At different levels, there is different impact and different forcings from the various different forcing types. Volcanoes are a good example. They have both strongly positive and strongly negative forcings depending on the atmospheric level in question.
Everything should be shown for the surface so that the wiggle room is reduced.
For example, the 3.7 watts/m2 increased forcing from doubled GHGs is for the 255K temperature level (which is sometimes described as TOA and sometimes not, and the chart shown above is certainly not at the 255K level – it is higher up it seems).
In addition, there is really no doubled GHG forcing number for the surface alone. It should actually be higher than +3.7 if it is to deliver +1.2C at the surface, but I have never seen an estimate of this.

Charlie Foxtrot
June 5, 2011 11:36 pm

Ref. LazyTeenager says:
June 5, 2011 at 8:01 am
The water below the ice is not the same temperature as the surface of the ice. Remember, sea water can’t be below the freezing point of salt water, whereas the ice surface might be -40 deg. F (or C).
The liquid ocean water, no colder than 28.4 F (-2 C), can radiate large amounts of heat to space, whereas ice and snow cannot, for a couple of reasons: ice and snow have low heat content and are pretty good insulators, preventing the warmer ocean water beneath from losing much heat by radiation.
During winters when the ice cover is lower than normal, the far north experiences warmer weather because the heat from the ocean warms the air above it.

June 6, 2011 12:02 am

I found a good article on surface albedo
It includes a table with water albedo as follows:
Small Zenith Angle: Albedo = 0.03 – 0.10
Large Zenith Angle: Albedo = 0.10 – 1.00
http://www.eoearth.org/article/Albedo?topic=54300
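The zenith-angle dependence in that table is mostly Fresnel reflection; here is a minimal sketch for a flat water surface (refractive index n ≈ 1.33 assumed; waves and turbidity are ignored, so real ocean albedo differs):

```python
import math

def fresnel_unpolarized(theta_deg, n=1.33):
    """Unpolarized Fresnel reflectance, air to water, at incidence theta."""
    ti = math.radians(theta_deg)
    tt = math.asin(math.sin(ti) / n)  # refraction angle from Snell's law
    rs = (math.sin(ti - tt) / math.sin(ti + tt)) ** 2  # s-polarization
    rp = (math.tan(ti - tt) / math.tan(ti + tt)) ** 2  # p-polarization
    return 0.5 * (rs + rp)

for zenith in (10, 40, 70, 85):
    print(f"zenith {zenith:2d} deg: reflectance {fresnel_unpolarized(zenith):.3f}")
```

Reflectance stays at a few percent out to mid zenith angles and climbs steeply toward grazing incidence, consistent with the 0.03–0.10 vs 0.10–1.00 ranges in the table.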

June 6, 2011 1:03 am

I think you need to scrounge through Chapter 8 (Climate Models and Their Evaluation) but more likely Chapter 9 (Understanding and Attributing Climate Change) which goes into some detail on each area including why aerosols are negative etc.
That said, I notice that the bottom graph has GHG’s at 2.1 W/m2. That’s about right for TOA: if you accept CO2 doubling = 3.7 W/m2, then a 40% rise in CO2 should give close to ln(1.4)/ln(2), or about 49%, of 3.7, i.e. roughly 1.8 W/m2. Looks to me like the bottom chart is TOA.
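The arithmetic behind that estimate is the standard logarithmic fit (the 5.35 coefficient is the Myhre et al. 1998 approximation, assumed here):

```python
import math

def co2_forcing(ratio):
    """Approximate CO2 radiative forcing (W/m^2) for concentration ratio C/C0."""
    return 5.35 * math.log(ratio)

print(f"doubling: {co2_forcing(2.0):.2f} W/m^2")  # ~3.71
print(f"+40%    : {co2_forcing(1.4):.2f} W/m^2")  # ~1.80
```

So a 40% rise comes to about ln(1.4)/ln(2) ≈ 49% of the doubling figure.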
Aerosols are negative because they float in the atmosphere and intercept downward SW (9.2.2.2) and I think you’ll find an explanation of land use in the same section.
The TOA numbers should be higher than the surface numbers. The TOA number is calculated as if that is where downward LW originates from. Of course that isn’t true. Downward LW from GHG’s absorbing and re-radiating can originate from GHG molecules at ANY altitude, and they can emit in ANY direction. The net total of “downward” LW due to GHG’s exceeds the net total of “upward” LW due to GHG’s by a theoretical CO2 doubling = 3.7 W/m2. If ALL the CO2 and other GHG’s existed at the TOA and had that effect, then that’s the number you would get at TOA. But they DON’T exist there; they are mixed throughout the atmosphere, so 3.7 (theoretical) or 2.1 (measured/calculated by whatever means) is useful for modeling purposes, but it is otherwise an imaginary construct (IMHO). Chapter 2 as well as other references (sorry, it’s almost 1:00 AM, or I’d go hunt for them) make it clear that the TOA number (RF = Radiative Forcing) and the surface response are two different things.
Sensitivity is calculated at 1 degree = 3.7 W/m2 = CO2 doubling, but that is at the “effective black body temperature of earth”, which is about -20 C. You will have to search AR3 to find that reference though; they just skip over it in AR4 as far as I can tell. Since the average surface temp is +15 C, and flux varies as F = σT^4 (Stefan-Boltzmann), the SURFACE temp sensitivity is only about 0.6 C. Nice of them to sort of skip by that explanation, is it not? Further, they seem to say (I haven’t found anywhere that they say it explicitly, but it is implied) that since much of the LW from GHG’s gets absorbed BEFORE it gets to the surface, the surface forcing should be less than the TOA forcing. Hence 0.6 degrees at the surface, adjusted for however much they think doesn’t make it all the way down, should be even less.
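The blackbody arithmetic in that paragraph can be sketched as follows (an illustration of the T^3 scaling only, not of any full model):

```python
# For F = sigma*T^4, a small flux change dF maps to dT = dF / (4*sigma*T^3),
# so the same 3.7 W/m^2 is worth less warming at a warmer emitting level.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def planck_dT(T, dF=3.7):
    """No-feedback temperature change for flux perturbation dF at temperature T."""
    return dF / (4 * SIGMA * T ** 3)

print(f"at 255 K (effective emission temp): {planck_dT(255.0):.2f} K")
print(f"at 288 K (surface):                 {planck_dT(288.0):.2f} K")
```

That gives about 1.0 K at 255 K and about 0.7 K at the surface, in the neighbourhood of the 0.6 C quoted.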

June 6, 2011 1:55 am

If you google NASA GISS FD data series (sorry, in a rush) you will get their model of the global averages for TOA, mid and Surface radiation fluxes for Short Wave and Long Wave going back to about 1983. You can also look at Palle’s work on albedo and three papers in Science from 2005 (Pinker, Wild and Wielicki) – all of this is referenced and discussed in ‘Chill: a reassessment of global warming theory’. The actual peer-reviewed science lit tells us that atmospheric transparency increased after 1980 and cloud cover diminished by 4% (until 2001), thus increasing the flux at the surface by several watts – I reckoned about 4x the RF from carbon dioxide, and thus posed the question to the scientific community: how come they believe ‘most’ of the warming is GHGs? The increased flux of SW warms the ocean far more powerfully than the downwelling LW.
Furthermore, the global ‘dimming’ from 1950-1980 due to sulphur emissions is shown by these papers to be a localised phenomenon – and the dimming was global and clearly caused by clouds and natural aerosols – from 1980, the greater transparency was noted in areas not subject to human pollution – and again due to clearing of natural aerosols and reduced cloud.
The model assumptions were built in before this science was published and have yet to be corrected. The IPCC figures for negative forcing due to sulphur emissions are clearly wrong.
There was a phase shift in 2001 – when cloud cover came back by 2%, and the albedo measurements (using other sources such as ‘earthshine’ from lunar measurements) confirm this shift – since when the ocean heat content rise has levelled off.
I wish I had time to pursue these issues… I did write a book on it, but the silence has been deafening… these issues are absolutely central to the whole question of the reliability of models and their relation to real-world data.

Colin Davidson
June 6, 2011 3:23 am

This is in response to Davidmhoffer’s post 0103,6Jun11 above.
” The TOA number is calculated as if that is where downward LW originates from. Of course that isn’t true. Downward LW from GHG’s absorbing and re-radiating can originate from GHG molecules at ANY altitude, and they can emitt in ANY direction. ”
I agree. At wavenumber 670 for example, over half the surface emissions are absorbed by the atmosphere within 1m. (see John Nicol’s calculation at Figure 6, http://www.middlebury.net/nicol-08.doc ) The implication is that emissions by the atmosphere to the surface at that frequency are mostly coming from below 1m. The figures for wavenumbers 650 and 700 are 25m and 50m respectively. Across the main CO2 band, nearly all surface photons are extinguished within 200m, and reciprocally, nearly all the back radiation reaching the surface is coming from 200m or below. A doubling of CO2 roughly halves the heights.
“Sensitivity is calculated at 1 degree = 3.7 w/m2 = CO2 doubling, but that is at the “effective black body temperature of earth” which is about -20 C. You will have to search AR3 to find that reference though, they just skip over it in AR4 as far as I can tell. Since average surface temp is +15 C, and temp varies by w/m2 = CT^4 (Stefan-Boltzman) the SURFACE temp sensitivity is only about 0.6 C. ”
And so it would be for a dry surface, which is not what we live on. Earth is a water planet. Three fifths of the flux from the surface absorbed into the atmosphere is evaporated water, one fifth is Radiation, and one fifth Conduction (Figures from Kiehl & Trenberth 1997, http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-1-1-figure-1.html ). The sensitivity at the surface is heavily damped by the increase in evaporation with temperature, a consideration which does not apply to the atmosphere.
The water aspects of our planet – relative humidity, evaporation rate and clouds are big unknowns with large effects on temperature. What measurements exist suggest that the modellers assumptions on all 3 are significantly wrong.

Alexander K
June 6, 2011 3:29 am

Willis, you summed this paper up in an earlier post, “Models All the Way Down”.
Construct the model, then look for the data to provide the parameters that might work.

David L. Hagen
June 6, 2011 5:15 am

Willis
On your query, you might find interesting the detailed quantitative calculations by Ference Miskolczi of the global atmospheric absorbance from all the greenhouse gases as a function of altitude from surface to the top of atmosphere for the last 60 years.
The Stable Stationary Value of the Earth’s Global Average Atmospheric Planck-Weighed Greenhouse-Gas Optical Thickness, Energy & Environment, Special Issue: Paradigms in Climate Change Vol 21 No. 4 2010, August pp 243-262.
See the latest results at:
Poster presentation at the European Geosciences Union General Assembly, Vienna, 7 April 2011

Brian H
June 6, 2011 5:16 am

Somehow, the modellers have convinced themselves (and many others) that complexified elaborated stupidity isn’t stupid.
How stupid can you get? They seem determined to find out, by direct experimentation, no less!

June 6, 2011 5:57 am

From AR4 Chapter 9, Executive Summary
“The net aerosol forcing over the 20th century from inverse estimates based on the observed warming likely ranges between –1.7 and –0.1 W m–2. The consistency of this result with forward estimates of total aerosol forcing (Chapter 2) strengthens confidence in estimates of total aerosol forcing, despite remaining uncertainties.”
One has to admire the unmitigated gall of announcing that their estimate ranges somewhere between almost zero and the same approximate amount (but negative) attributed to GHG’s, followed by claiming that the consistency of the result strengthens confidence. The traditional definition of chutzpah (Yes your honor, I murdered my parents, but I’m asking for leniency because I am an orphan) doesn’t even come close.

June 6, 2011 6:10 am

Of course this gem from Chapter 8, may be even more astounding. They claim to be modeling the climate within hundredths of degrees, and tenths of watts/m2, but look at their own evaluation of their own models:
“Calculation of the global mean RMS error, based on the monthly mean fields and area-weighted over all grid cells, indicates that the individual model errors are in the range 15 to 22 W m–2, whereas the error in the multi-model mean climatology is only 13.1 W m–2. Why the multi-model mean field turns out to be closer to the observed than the fields in any of the individual models is the subject of ongoing research;”
SAY WHAT? No single model got closer than 15 to 22 w/m2? But averaged together they were only out by 13 w/m2? So they were ALL wrong by a whopping amount, but averaged together they were less wrong? How about calculating the error in the error range? 13 +/- what? 8? 10? 6? Oh wait…that’s the average of 19 of the 23 models in AR4. What happened to the other 4? Why were they excluded? I don’t know, I didn’t bother to dig into it. The results claimed are so ridiculous that even if the other 4 models were excluded to make the results look better….they’d just be extra ridiculous by having them included.

Richard S Courtney
June 6, 2011 7:16 am

davidmhoffer:
Concerning your comments on the AR4, aerosols and “chutzpah”, I think you would be interested in this review comment I provided to the first draft of the AR4.
“Page 2-4 Chapter 2 Line 2 of the draft says nitrous oxide is the “fourth most important greenhouse gas” and Page 2-3 Chapter 2 Lines 50 and 51 (wrongly) say methane is “the second largest RF contributor” (assuming that the effect of water vapour is ignored as is the convention in this Chapter except for Section 3.2.8.). But the draft does not state the third largest contributor.
Before Page 2-4 Chapter 2 Line 2, the draft needs to be amended to include the RF of particles of sulphate aerosols combined with soot that is the second largest RF contributor.
1. CO2 has RF of 1.63 Wm^-2,
2. particles of sulphate aerosols combined with soot have RF of 0.55 Wm^-2
(ref. Jacobson MZ, Nature, vol. 409, 695-697 (2001))
3. methane has RF of 0.48 Wm^-2.
4. and nitrous oxide has RF of 0.16 Wm^-2.
The authors of this chapter seem to be ignorant of the warming effect of sulphate aerosols combined with soot particles. But their correct statement that nitrous oxide is the “fourth most important greenhouse gas” implies that they are choosing to deliberately ignore the warming effect of sulphate aerosols combined with soot particles.”
I add that the anthropogenic sulphate and the soot particles are mostly simultaneously emitted from the same sources.
Richard

June 6, 2011 7:34 am

Richard S Courtney;
Illuminating. I’m curious about the sulphate particles emitted from the same source as soot? I’d guess that diesel fuel consumption is/was a major contributor? Has that changed since the regulations forcing the use of low sulphur diesel? What other sources? I’d guess coal next, but again, scrubbers on coal stacks have reduced both would they not?
Re: Fourth most important vs fourth largest contributor
I read some material some time ago (I think it was AR4, but it has been a while) that presented the various GHG’s calculated as both “current” forcing and (I forget the term they used) “long term” forcing. “Warming Potential” perhaps? In any event, they did some fancy math to show that methane broke down over time into other molecules, so that over 100 years the total warming was the combination of both the initial molecules and the final molecules, and then came up with a number many times that of CO2, making it “more important”. There were other gases as well, but I don’t recall which ones. In any event, is that the distinction in this case between “most important” and “largest contributor”?

Richard S Courtney
June 6, 2011 7:59 am

davidmhoffer:
Thank you for your post to me at June 6, 2011 at 7:34 am.
We seem to be going off topic so feel free to continue our discussion by emailing me.
For now, I respond to your questions with brief answers.
‘Black carbon’ or ‘soot’ is unburnt fuel. It mostly derives from coal-fired power stations, and the increase in emissions from China and India makes your references to emission regulations in the West largely irrelevant: the West has been ‘cleaning up its act’ but China and India have not.
I cannot say what distinctions the IPCC AR4 authors made between “most important greenhouse gas” and “largest RF potential”. They used both terms interchangeably in all three drafts so I assumed their text intended them to be the same thing. Of course, that assumption may have been wrong.
Richard

MarkW
June 6, 2011 1:38 pm

According to one regular poster, it’s a form of numerology to just hunt for an equation that will fit your available data.
What does one call it when one has an assumed equation, and then fiddles with the data in order to make the output of the equation fit the assumed pattern?

MarkW
June 6, 2011 1:44 pm

LazyTeenager says:
June 5, 2011 at 8:01 am
—-
The ice, being a pretty good insulator, will always be somewhere between the temperature of the air and the temperature of the water. When Arctic air gets down to -50F, what temperature do you think the ice and the water get to?

BenfromMO
June 6, 2011 2:07 pm

“Considering Cloud cover has decreased by 4% during the satellite period resulting in a large 0.9w/m2 positive forcing it seems strange to me that the model shows the cloud albedo having a very constant decline over the century resulting in a negative 0.6w/m2”

Does anyone happen to have the references for this?
I saw one reference that showed a lowering forcing from albedo in the last 20 years, but it was just whether it was increasing or decreasing and to tell the truth this would fit that graph – (kind of). But that in itself would be huge and would add to my source material greatly.
To explain what this means: if you combine this with the forcing from LLGHG, you get
1.4 W/m2 (LLGHG by itself) – 0.9 W/m2 = 0.5 W/m2,
which is very small in comparison to what is shown here. And if you subtract out the assumed negative there, you get -0.1 W/m2 for long-lived GHG.
Of course, this would seem to show that it’s possible that the CO2 effect is completely overrun by negative feedbacks that actually make CO2 cool the planet (from cloud impacts alone).
Of course, this does not include the rest of the stuff such as aerosols, etc., but the point still remains that this is a very large issue, to say the least. And I tend to agree with the assessment that there is no way we know what the clouds were like in, say, 1850. That initial point is, simply put, a guess that may not even be educated in the least. The conclusion obviously is that no one yet knows what clouds do, and without that information the models are still worthless; no matter how fast the computers we use, they will remain worthless until we figure out the mechanisms of cloud formation.
But then again, we sceptics have been saying this for years: the cloud cover issues have not been properly taken into account in the models, and therefore fine-tuning the CO2 effects is nearly impossible without knowing what drives cloud formation in the first place.
I still have no idea myself what the actual effect of additional CO2 is. But I am willing to bet it’s a very small part of the energy equation compared to, say, solar or cloud cover. And I am also not willing to bet the economy on terrible ideas for energy. Small steps are fine, such as better efficiency, but huge steps such as wind boondoggles… screw that.

Frank
June 6, 2011 2:47 pm

Willis: Try this 1981 Ramanathan paper. The role of ocean-atmosphere interactions in the C02 climate problem. V Ramanathan – J. Atmos. Sci, 918 (1981) http://www.cfa.harvard.edu/~wsoon/ChristopherMonckton08-d/Ramanathan81.pdf
See Figure 2 for the change in radiative forcing for 2X CO2 with height. The overlap between the absorption of water vapor and carbon dioxide becomes more important lower in the atmosphere. At wavelengths where water vapor absorbs 9 out of 10 photons emitted by the surface, doubling CO2 produces only a small change in outgoing radiation.
If you double the concentration of an absorber like CO2, how much can that reduce transmission? If 98% of the photons are already being absorbed at a particular wavelength, doubling the absorber takes absorption to 99.96%: transmission falls from 2% to 0.04%, a small absolute change. If 1% of the photons are being absorbed, doubling the absorber will increase this to about 2%. Neither of these changes produces a big decrease in transmitted energy. If 50% of the photons are being absorbed at a wavelength, doubling the absorber will increase absorption to 75% and reduce transmission by 25 percentage points. The biggest change in radiative forcing upon doubling CO2 therefore occurs at wavelengths where about half of the photons are able to escape to space. See scienceofdoom.com/2011/03/12/understanding-atmospheric-radiation-and-the-“greenhouse”-effect-–-part-nine/ especially comment 1.
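Frank’s point can be checked in the simple Beer-Lambert picture (a sketch; real band absorption involves line shapes and overlap that the single-exponential model ignores):

```python
# If a layer transmits fraction t = exp(-tau), doubling the absorber
# transmits t**2, so the lost transmission is t - t**2: zero at t = 0
# and t = 1, and largest at t = 0.5.
transmissions = [0.01, 0.02, 0.25, 0.50, 0.75, 0.99]
losses = [(t, t - t * t) for t in transmissions]

for t, loss in losses:
    print(f"transmit {t:.2f} -> {t * t:.4f} after doubling (change {loss:.4f})")

best = max(losses, key=lambda pair: pair[1])
print(f"largest change at initial transmission {best[0]:.2f}")
```

The maximum of t - t^2 sits at t = 0.5, which is exactly the “about half the photons escape” condition.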
Also see: judithcurry.com/2010/12/11/co2-no-feedback-sensitivity/

June 6, 2011 4:12 pm

LazyTeenager says June 5, 2011 at 8:01 am
JerOme reckons
__________
itself, which is pretty much always warmer than the ice,
__________
and what do you think the temperature of the water under the ice is Jerome?
I bet you, based on some well known physics, that it’s the same temperature as the ice.”
Is that the well known physics that ice exists because it is colder than water?
Obviously if the water were as cold as the ice it would be ice, the ocean would be frozen to the bottom.
Obviously open water is warmer than ice, otherwise it would be ice (on the surface).
Lazy indeed! (Recommended study: high school physics.)

Colin Davidson
June 6, 2011 5:11 pm

There seems to be much confusion over the term “Radiative Forcing” (RF). It is defined at http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-2.html . It has nothing to do with absorption of photons from the surface. It also has nothing to do with the Top Of The Atmosphere (TOA). To paraphrase, it is the imbalance at the TROPOPAUSE resulting from changes in emissions from the atmosphere to space.
I have a lot of problems with this. Firstly, the Tropopause is a very variable beastie. In Polar latitudes its altitude is about 8.5km. At 40Deg Latitude it is around 11km. In the 30N to 30S region it is about 17km. The significance of this is that in Polar latitudes over 30% of the atmosphere is above the Tropopause, and over 20% of the atmosphere is above the Tropopause outside the 30N to 30S region, within which 10% of the atmosphere lies above the Tropopause.
A photon emitted by a CO2 molecule has to evade the overlying CO2 molecules to successfully escape to Space. We know that at the Surface, 50% of photons will be absorbed within 1m, 25m and 50m for wavenumbers 670, 650 and 700 respectively. I’m not sure where that translates to in the upper atmosphere, but I would think it to be well above the Tropopause. And a doubling of the gas concentration simply moves the emission height further out. In the stratosphere, where higher = warmer.
I think all this means:
1. I am uncertain that the change in stratospheric temperatures has been correctly calculated. It would be nice to take a dekko at the back of the envelope.
2. The variability in height of the Tropopause means that the “Radiative Forcing” is highly latitude dependent. It will be much, much greater in the equatorial band, and negligible, if not negative, in the high latitudes. It would be nice to see the calculations.
3. The temperature of the Surface does not depend on “Radiative Forcing” at all. It depends on “Surface Forcing”. The inattention paid to this quantity is puzzling. No-one cares what the temperature of the tropopause is. Everyone is talking about Surface Temperature – so why aren’t we discussing that quantity, concentrating on the changes AT THE SURFACE?

Brian H
June 6, 2011 9:45 pm

Colin Davidson says:
June 6, 2011 at 3:23 am

At wavenumber 670 for example, over half the surface emissions are absorbed by the atmosphere within 1m.

Can’t put my finger on the ref, but have just seen a density calc that says the average distance between CO2 molecules at sea level is 3.9m, and at half that pressure is 4.9m.
So absorption within 1m seems “infinitely improbable.”

Colin Davidson
June 7, 2011 3:31 am

Brian H said:
“Can’t put my finger on the ref, but have just seen a density calc that says the average distance between CO2 molecules at sea level is 3.9m, and at half that pressure is 4.9m.”
CALCULATION OF DISTANCE BETWEEN CO2 MOLECULES AT THE SURFACE
Volume of CO2 molecules in 1m^3 of Air: ~400mL (400ppmV)
Volume of CO2 at STP containing ~6 x 10^23 molecules: 22400mL
Thus number of CO2 molecules in 1m^3 of air at STP = ~ 10^22
Average distance between CO2 molecules = ~0.05um
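Colin’s figure checks out; here is a quick sketch of the same arithmetic via number density (STP ideal-gas values assumed):

```python
# Number density of CO2 at ~400 ppmv and STP, and the implied mean
# intermolecular spacing n**(-1/3).
N_A = 6.022e23       # Avogadro's number, molecules per mole
V_MOLAR = 22.4e-3    # molar volume of an ideal gas at STP, m^3
CO2_PPMV = 400e-6

n_air = N_A / V_MOLAR        # ~2.7e25 air molecules per m^3
n_co2 = n_air * CO2_PPMV     # ~1.1e22 CO2 molecules per m^3
spacing = n_co2 ** (-1.0 / 3.0)

print(f"spacing ~ {spacing * 1e6:.3f} micrometres")
```

About 0.045 µm, seven to eight orders of magnitude smaller than the 3.9 m figure quoted above.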

Colin Davidson
June 8, 2011 9:49 pm

“In addition, it ignores the radiation from above the tropopause.”
I have been unhappy for some time. I can’t square the tabled absorption figures for CO2 with the claim that the majority of emissions from CO2 to space are coming from below the Tropopause. Take the least active of the 3 wavenumbers I have cited, wavenumber 700. At the surface, 50% of photons are gone within 50m (the table says 2atmcm, which equates to about 50m), ie 0.5% of the atmosphere. At 17km, the equivalent amount of atmosphere is around 500m – a photon emitted to space from 17km has only a 50% chance of surviving the first 500m of its travel. And there’s all this overlying gas (another 9ish% of the atmosphere) also radiating.
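The surface-to-17-km scaling used here can be sketched with the ideal gas law (the pressure and temperature near the tropical tropopause are assumed values, and pressure broadening of the lines is ignored):

```python
# If the 50%-absorption path length scales inversely with air density,
# the 50 m surface figure (wavenumber 700) stretches by the density ratio.
L_SURFACE = 50.0          # m, 50%-absorption path at the surface
P0, T0 = 101325.0, 288.0  # surface pressure (Pa) and temperature (K)
P17, T17 = 8850.0, 217.0  # rough values near 17 km (assumed)
R_AIR = 287.0             # specific gas constant of dry air, J/(kg K)

rho0 = P0 / (R_AIR * T0)
rho17 = P17 / (R_AIR * T17)
L_17km = L_SURFACE * rho0 / rho17

print(f"density ratio ~{rho0 / rho17:.1f} -> path ~{L_17km:.0f} m at 17 km")
```

That comes out near 430 m, consistent with the “around 500 m” used in the comment.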
I think that even at the equator, the emissions in the main CO2 bands (and we are talking only of about 18W/m^2 emitted by CO2 to Space) must mostly be from the Stratosphere. And if that is the case the main effect will be cooling, not warming…
And I agree – by using the qualifier “after allowing for stratospheric temperatures to readjust to radiative equilibrium”, there seems to be no necessity to quantify what happens in the Stratosphere.
