Pielke Sr. on Zeke's zinger

Guest post by Dr. Roger Pielke Senior

Missing The Major Point Of “What Is Climate Sensitivity”

There is a post by Zeke on the Blackboard titled Agreeing [see also the post on Climate Etc, Agreeing(?)].

Zeke starts the post with the text

“My personal pet peeve in the climate debate is how much time is wasted on arguments that are largely spurious, while more substantive and interesting subjects receive short shrift.”

I agree with this view, but conclude that Zeke is missing a fundamental issue.

Zeke writes

“Climate sensitivity is somewhere between 1.5 C and 4.5 C for a doubling of carbon dioxide, due to feedbacks (primarily water vapor) in the climate system…”

The use of the terminology “climate sensitivity” indicates an importance of the climate system to this temperature range that does not exist. The range of temperatures of “1.5 C and 4.5 C for a doubling of carbon dioxide” refers to a global annual average surface temperature anomaly that is not even directly measurable, and its interpretation is even unclear, as we discussed in the paper

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.

This view of a surface temperature anomaly expressed by “climate sensitivity” is grossly misleading the public and policymakers as to which climate metrics actually matter to society and the environment. A global annual average surface temperature anomaly is almost irrelevant for any climatic feature of importance.

Even with respect to the subset of climate effects that is referred to as global warming, the appropriate climate metric is heat changes as measured in Joules. The global annual average surface temperature anomaly is only useful to the extent that it correlates with the global annual average climate system heat anomaly [most of which occurs within the upper oceans]. Such heating, if it occurs, is important as it is one component (the “steric component”) of sea level rise and fall.
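As a rough illustration of the Joules metric, here is a back-of-envelope sketch converting an upper-ocean temperature anomaly into a heat anomaly; the ocean area, layer depth, and seawater properties are assumed round values, not measurements:

```python
# Back-of-envelope: upper-ocean temperature anomaly -> heat anomaly in Joules.
# All constants are assumed round values for illustration only.
RHO_SEAWATER = 1025.0   # kg/m^3, density of seawater
CP_SEAWATER = 3990.0    # J/(kg K), specific heat of seawater
OCEAN_AREA = 3.6e14     # m^2, roughly 70% of Earth's surface
LAYER_DEPTH = 700.0     # m, the upper-ocean layer discussed above

def heat_anomaly_joules(delta_t):
    """Heat anomaly (J) for a layer-average temperature anomaly delta_t (C)."""
    mass = RHO_SEAWATER * OCEAN_AREA * LAYER_DEPTH
    return mass * CP_SEAWATER * delta_t

print(f"{heat_anomaly_joules(0.1):.1e} J")  # ~1.0e23 J for a +0.1 C anomaly
```

This is why a layer-average temperature change of even a tenth of a degree in the upper ocean corresponds to an enormous heat change in Joules.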

For other societally and environmentally important climate effects, it is the regional atmospheric and ocean circulation patterns that matter. An accurate use of the terminology “climate sensitivity” would refer to the extent to which these circulation patterns are altered by human and natural climate forcings and feedbacks. As discussed in the excellent post on Judy Curry’s weblog

Spatio-temporal chaos

finding this sensitivity is a daunting challenge.

I have proposed definitions which could be used to advance the discussion of what we “agree on” in my post

The Terms “Global Warming” And “Climate Change” – What Do They Mean?

As I wrote there

“Global Warming is an increase in the heat (in Joules) contained within the climate system. The majority of this accumulation of heat occurs in the upper 700m of the oceans.

Global Cooling is a decrease in the heat (in Joules) contained within the climate system. The majority of this loss of heat occurs in the upper 700m of the oceans.

Global warming and cooling occur within each year as shown, for example, in Figure 4 in

Ellis et al. 1978: The annual variation in the global heat balance of the Earth. J. Geophys. Res., 83, 1958-1962.

Multi-decadal global warming or cooling involves a long-term imbalance between the global warming and cooling that occurs each year.

Climate Change involves any alteration in the climate system (schematically illustrated in a figure from NRC, 2005) which persists for an (arbitrarily defined) long enough time period.

Shorter term climate change is referred to as climate variability. An example of a climate change is if a 20-year average growing season of 100 days was reduced by 10 days over the following 20 years. Climate change includes changes in the statistics of weather (e.g. extreme events such as droughts, land falling hurricanes, etc), but also includes changes in other climate system components (e.g. alterations in the pH of the oceans, changes in the spatial distribution of malaria carrying mosquitos, etc).

The recognition that climate involves much more than global warming and cooling is a very important issue. We can have climate change (as defined in this weblog post) without any long-term global warming or cooling.  Such climate change can occur both due to natural and human causes.”

It is within this framework of definitions that Zeke and Judy should solicit feedback in response to their recent posts. I recommend a definition of “climate sensitivity” as

Climate Sensitivity is the response of the statistics of weather (e.g. extreme events such as droughts, land falling hurricanes, etc), and other climate system components (e.g. alterations in the pH of the oceans, changes in the spatial distribution of malaria carrying mosquitos, etc) to a climate forcing (e.g. added CO2, land use change, solar output changes, etc).  This more accurate definition of climate sensitivity is what should be discussed rather than the dubious use of a global annual average surface temperature anomaly for this purpose.

Albert Frankenstein

Even more basically, it starts with the assumption that feedback is in fact positive for CO2. What if it isn’t?

vukcevic

In the not too distant future, graphics of the above type may have the geomagnetic field added, or at least I hope….
http://www.vukcevic.talktalk.net/NFC1.htm

AnonyMoose

Maybe someone is saying climate sensitivity when temperature sensitivity is intended?

Dave D

What would be the collective climate science response to the statement, “It could be that atmospheric sensitivity to CO2 is essentially zero?”
I believe that scientists measured and listed the following heat absorption figures: 1) Oxygen = X, Nitrogen = X (yes, they were very close to each other); CO2 = Y > X; a mixture of 20% O2 and 80% N2 was then measured and yielded a value VERY close to Y.
I did NOT bookmark this article, but I believe it was published in Nature nearly a year ago. Has it since been debunked? Has it been buried? Is it totally unfounded? I would think this type of study would be central to resolving whether any greenhouse gas theory around CO2 even applies. Science has the tools to quantify and repeat or discredit this type of thing – is anyone aware of any results?
Experts, more learned men/women than me and Faux-Mountebanks, all are welcome to reply! Dave

commieBob

yabut if you put more joules into the ocean, it gets warmer. In any event, how are you going to measure those joules if not with a thermometer? (commieBob now ducks and hides under his desk) 😉

Rob R

This is a much more rational way of looking at the issue. Thanks Roger.

George E. Smith

Well I have been trying to discover for years just who or what first described Climate Sensitivity in terms of the relation between the global mean surface Temperature and the logarithm of the atmospheric CO2 abundance.
I was under the impression that this was a creation of the late Dr Stephen Schneider at Stanford.
I take a dim view of people who say that some relationship is “logarithmic”, unless they can demonstrate either by measurement or theory that such a relationship is maintained over at least several orders of magnitude (three at minimum) of the variable whose logarithm tracks the other variable.
My point is simply that just because a relationship is non-linear does not automatically mean it is logarithmic. When somebody says it IS logarithmic, that to me means that a CO2 increase from 280 ppm to 560 ppm produces the same Temperature increase as going from 1 ppm to 2 ppm. That IS a logarithmic relationship.
Mathematically curve-fitting experimental data to some function is a well practised art. In the Mauna Loa era, we have somewhat less than 1/3 of one doubling of the ML CO2 number, from 325 ppm to today’s 390 ppm.
That data can easily be fitted to an ordinary power series over some restricted range, to as accurate a fit as, say, a logarithmic equation.
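A minimal numerical sketch of that fitting point, assuming the 325-390 ppm interval quoted above: over so narrow a range, a plain quadratic reproduces a logarithmic curve to within a tiny fraction of the curve's total span, so the data alone cannot single out the logarithm.

```python
import numpy as np

# Over the narrow Mauna Loa interval (325-390 ppm), a logarithmic curve and a
# simple quadratic are practically indistinguishable.
c = np.linspace(325.0, 390.0, 200)      # CO2 concentration, ppm
log_curve = np.log(c / 325.0)           # logarithmic response; spans ~0.18

quad_fit = np.polyval(np.polyfit(c, log_curve, 2), c)  # quadratic approximation

print(f"max |log - quadratic| = {np.max(np.abs(log_curve - quad_fit)):.1e}")
# on the order of 1e-4 at most, versus a full-curve span of ~0.18
```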
Usually one looks at logarithmic relationships when some theoretical basis for believing it is logarithmic exists, such as radioactive decay, for example, or the Current-Voltage relationship of a Semiconductor diode junction at constant Temperature.
GHG absorptance of LWIR radiation has its roots in Beer’s Law relating the absorptance of chemical solutions to the molar concentration.
But one must not confuse optical absorptance with energy transmission.
The energy transmitted by an optically absorbing medium can be orders of magnitude higher than (1-a), where (a) is the optical absorptance for some spectral region, and can have a Beer’s Law relationship.
That transmitted energy will likely be in a quite different spectrum, from that represented by the Beer’s law absorption.
This situation exists in the CO2 GHG absorption case. CO2 has well understood infra-red absorption spectra; but it is also well known that the absorbing CO2 molecule seldom gets a chance to re-emit the photon that sent it to an excited state. The energy will become thermalized through molecular collisions that occur thousands of times faster than spontaneous emission decay of the resonance absorption modes, and that energy simply raises the Temperature of the main atmospheric gases. They in turn radiate a purely thermal (BB-like) continuum spectrum, that just happens to overlap the same range as the absorption band (of lines) for CO2.
That radiation is emitted isotropically from every level in the atmosphere, from the very surface to the end of the atmosphere, and only in the highest, coldest, rarest atmospheric regions does the CO2 get a chance to spontaneously emit its own characteristic band of frequencies. And at every level in the main body of the atmosphere, about half the emitted radiation is emitted upwards towards space, and half downwards to the surface. It matters not whether the surrounding layers are transmissive or absorptive. The transmitted portions of the spectrum still travel about half in each direction, and the absorbed portions simply lead to a subsequent emission of some thermal spectrum, that likewise splits into two hemispheres.

bob

Temperature is an intensive variable, which means it is possible to measure the temperature of the earth’s systems that vary with the seasons, the sun’s cycles, orbital parameters, etc. These would be the atmosphere, the oceans, the ice caps and the solid earth’s surface to a depth that changes with the seasons.
Possible yet difficult, so using the anomaly to measure the changes rather than the absolute is acceptable scientifically.
To replace that metric with a conglomeration of statistical measures, for which it would probably be impossible to agree on which metrics to monitor, seems without merit, considering that finding long-term data trends for some of the suggested metrics (for example the geographic range of malaria carrying mosquitos), and agreeing on their validity, would be difficult.
What would the units be?
mosquito bites * hurricanes * droughts per ppm CO2 * rainforest acres cut down * solar variability
I mean we can track the changes in those variables, but to define climate sensitivity in terms of them is difficult.
If I am misunderstanding the point here then you can have my most sincere
Roseanna Roseannadanna
“never mind”

Carrick

Roger Pielke:

The range of temperatures of “1.5 C and 4.5 C for a doubling of carbon dioxide” refers to a global annual average surface temperature anomaly that is not even directly measurable, and its interpretation is even unclear, as we discussed in the paper

Why do you think the fact that it is an indirect measurement is an issue? There are plenty of quantities that get measured indirectly in meteorology, climate and other sciences. (More get measured indirectly than not, I’d think.)
The question of interpretation (and utility which is really what you are driving at) depends on what you are planning on using the number for. We live in the surface layer, most of the land biosphere does too, so knowing how the temperature responds to CO2 does act as a proxy for the impact of CO2 emissions on humans (changes in growing seasons, shifts in dry belts etc).
Are there other numbers? Sure, and any policy maker needs to read further than just global mean temperature change.
You are right to say that there is perhaps too much fixation on CO2 sensitivity, but your more general definition of climate sensitivity could be quantified by including a list of global values, which would have to include global mean temperature, as well as mean precipitation, mean ocean pH and so forth.

Max Albert

One question: Wouldn’t it be rather difficult to empirically demonstrate a cause-and-effect relationship between the broad definition of “weather statistics and other climate components” and “a climate forcing,” particularly on a global scale? I mean, we’ve yet to do that even when it comes to a single component, say heat content vs CO2.
You’re talking a lot of time and scientific shoe leather, which is fine by me, but I wouldn’t expect your definition to be embraced by the warmist camp anytime soon. They’re in too big a hurry.

R. Gates

I agree that global average annual temperature is not an incredibly useful metric of “climate” sensitivity, but only of temperature sensitivity. But if one understands the implications of that temperature sensitivity, one can see the implications for the longer-term (but not daily) statistics of weather. For example, if we know that at 2 degrees C of global warming, we will get 7 to 10 C above the Arctic Circle, then that also tells you to expect sea ice and permafrost to decline. This decline in sea ice has implications for atmospheric circulation patterns, which of course affect longer-term weather patterns. What it comes down to is that experts can understand the potential implications of global temperature changes, and thus sensitivity to GH gas increases measured in this manner can be very useful in making future potential scenarios based on that sensitivity. Those scenarios could be completely wrong of course, but they require some measurement of sensitivity to output anything at all.

Roger Andrews

I agree that OHC is the metric we should be using to measure the AMOUNT of global warming, assuming we can ever come up with a reliable way of measuring it. But the physical impacts of added heat (I hasten to add, if any) are felt at the land surface, which is where we humans and all the animals, trees, plants and crops we depend on live. So we can’t dismiss surface temperature as “almost irrelevant”. It’s the basic metric that measures the IMPACT of global warming, which is not the same thing as the amount. (And it’s also the more important. No one would care about global warming if it wasn’t going to do anything.)

Paddy

Bishop Hill has a relevant post regarding climate forcings that quotes another blogger’s question to Freeman Dyson and his answer about important climate issues:
“1. What questions should intelligent lay-people be asking about climate, and to whom should these questions be directed?”
“Answer to your question”
“1. Both for humans and for wildlife, water is more important than temperature. Intelligent lay-people should be asking questions about water. What are the trends over the last hundred years for total rainfall, total water vapor in the atmosphere, total volume of ice on the continents, volume of fresh water in aquifers, distribution of droughts and floods, and last but not least, sea-level? Why did the Sahara change from green six thousand years ago to desert today? What are the implications for public health and natural ecology of the changes in flow and abundance of water?”
http://bishophill.squarespace.com/blog/2011/2/28/some-correspondence-with-freeman-dyson.html

higley7

I keep coming back to the fact that the IPCC altered a thermodynamic constant for CO2, bumping a 0.12 degree effect with doubling to 1.2 to 1.5 deg by multiplying the constant 12-fold, all the while marveling at how consistent this constant had been while actually changing it at the same time; quite backhanded. Then they used water vapor as a positive forcing factor to arrive at much higher values for warming with doubling, specifically pretending that atmospheric convection did not exist as part of a global heat engine, a powerful negative feedback machine.
Why is so little made of this duplicity and fraud?

Ken Harvey

“Climate Sensitivity is the response of the statistics of weather (e.g. extreme events such as droughts, land falling hurricanes, etc), and other climate system components (e.g. alterations in the pH of the oceans, changes in the spatial distribution of malaria carrying mosquitos, etc)”
Hmmm. Changes in the spatial distribution of malaria may conceivably occur as a result of climate sensitivity, but the emphasis must be on the “may”. Certainly malaria has moved well southwards in Southern Africa since the mid ‘seventies, but without any detectable change in climate. The southern drift that has occurred can all be attributed to anthropological causes. The population has grown tremendously over that time period, resulting in considerably greater movement of people and consequent greater carrying of the insect. Anti-malaria preparations that worked unfailingly in the mid ‘seventies are no longer nearly as effective. Governmental chemical actions to fight the mosquito spread have in some parts (Zimbabwe in particular) either become less effective or broken down altogether.
I feel that the scientific community should avoid this type of bland statement, if for no other reason than that there will be no lack of Governments who will happily claim compensation for the damage done to them by the West’s nasty CO2.

There is enough disagreement about the temperature variation, and getting such obscure things as “land falling hurricanes” involved will only escalate the difficulty in defining what is going on.
Temperature should not be the problem that it is, but the way it is used is a problem. Variations and trends in the different regions would be a better, though more complicated, approach. Most of the warming has been in the Arctic region. That is not a global effect, but the polar regions are important to climate.
Not sure what is the best way to define sensitivity, but less subjective is better than more subjective, and weather events are very, very subjective.

Espen

Great post by Dr. Pielke!
Carrick: Why do you think the fact that it is an indirect measurement is an issue?
“Global mean temperature” is not only indirect, it’s also wrong. Even if we covered the earth with a grid of more than a billion temperature sensors of previously unmatched quality, the arithmetic mean of those readings would be a bad measure of “global warming”, since +10 C in the arctic corresponds to much less incremental atmospheric heat content than +10 C in the tropics.

Zeke

Dr. Pielke,
I agree with you that the surface temperature response to doubling CO2 (e.g. standard climate sensitivity) is not a full description of the effects of a climate forcing, but rather a limited subset of those effects. However, discussing how the full effects (which are characterized by even more uncertainty than sensitivity itself) should be communicated to the public and policy makers is a separate issue from what our best understanding of the science says about climate sensitivity as generally defined. It hardly amounts to a wittily alliterative “zinger” on my part that I referred to the current notion of climate sensitivity, rather than your more encompassing definition. Even were we to broaden the definition, we would still need a term for the surface temperature response to forcings, as it is an important piece of the puzzle.

Lew Skannen

Judy Curry’s weblog article Spatio-temporal chaos is exactly what I have been looking for, for years!
I am glad that someone like Judy Curry has expressed so well the unavoidable facts about chaotic systems and how they relate to climate.

Paul Coppin

I have to take exception to the semi-definition of Climate Change. There is the persistent predication that the Climate is static, and as a consequence, any alteration of this stasis constitutes “Climate Change”. The climate is not static; climate is a complex of events that isn’t even in equilibrium, let alone static. Acknowledging and accepting “climate change” as a definable, measurable parameter is a mug’s game, and we’ve already been beaten to death with it. There is NO universally agreeable definition for climate change that has any useful scientific value whatsoever.

rpielke

Thanks for the feedback that has been sent. I have just two further comments here. First, while ocean heat content requires the measurement of temperature, it also includes mass, so that the proper unit of global warming (Joules) can be calculated. This is not true of the two-dimensional analyses of surface temperatures which are used to calculate the global average surface temperature anomaly. There is no mass associated with this determination of heat.
Secondly, even with respect to surface temperature, it is a flawed metric. For example, at what height is this temperature measured (e.g. the surface, 2m, etc.)? This matters in terms of the anomaly values that are computed, as we have shown, for example, in our recent paper
Steeneveld, G.J., A.A.M. Holtslag, R.T. McNider, and R.A Pielke Sr, 2011: Screen level temperature increase due to higher atmospheric carbon dioxide in calm and windy nights revisited. J. Geophys. Res., 116, D02122, doi:10.1029/2010JD014612.
http://pielkeclimatesci.files.wordpress.com/2011/02/r-342.pdf
There is the issue that anomalies in absolute humidity alter the anomalies in temperature even if the total heat of a volume of air does not change, as we showed in our paper
Pielke Sr., R.A., C. Davey, and J. Morgan, 2004: Assessing “global warming” with surface heat content. Eos, 85, No. 21, 210-211.
http://pielkeclimatesci.wordpress.com/files/2009/10/r-290.pdf
These are just two of a number of issues with this data set. We can add all of the seminal work that Joe D’Aleo, Anthony Watts and surface project volunteers have done on the quality of the siting, as well as the failure of the land part of the surface temperature record to clearly define how they massage the data in order to create what they call a “homogenized” analysis.
We will have more to report on the surface temperature data set soon. My recommendation, however, is to scrap the use of surface temperatures to indirectly diagnose global warming and replace them with upper ocean heat content change, which is a direct measure of climate system heat changes.

mkelly

Until the temperatures get above the Holocene Optimum everything we are talking about is within normal variation with or without CO2.

Orson

Zeke’s problem is formulating “climate change sensitivity” in terms laid out by the IPCC authorities.
In an important sense, Zeke is parsing a different problem from the narrower one Pielke, Sr., attacks above. Zeke and Lucia’s friends attempt to declare and decide on the range of probable and improbable problems pertinent to climate change science. Zeke hopes to move beyond debate to zero in on genuine scientific problems; this makes agreement (or “consensus”) a priority before progress can be made.
Yet all of this is highly speculative, dependent upon the context of one’s selective knowledge of the literature, and beside the point of genuine scientific criticism. These are “how many angels can dance on the head of a pin” theological speculations, dressed up as science. Furthermore, Zeke shows us that Lindzen’s lessons about the corrupt fatuities of climate science have not yet been learned by his cohorts. (SEE “Climate Science – is it currently designed to answer questions,” Nov 2008) Instead, they must be papered over and denied as genuine problems.
This is not science, but politics IPCC-style, plain and simple.

etudiant

It is striking that the level of water vapor in the upper atmosphere (300 mb level) has dropped steadily and substantially (20+%) in the past 60 years: http://www.climate4you.com/ (see the greenhouse gases section), yet there has been no effort afaik to evaluate either the causes or the consequences of this change.
At a minimum, it seems inconsistent with the idea of increasing CO2 raising the atmospheric water vapor level.
More generally, if this kind of change in the Earth’s primary greenhouse gas has so little measurable effect, why would a comparable change in a vastly less abundant trace gas make a difference?

Eric (skeptic)

R Gates: you said “I agree that global average annual temperature is not an incredibly useful metric…” You made up a new term there. Perhaps you meant global average temperature anomaly?
I would ask for support for “For example, if we know that at 2 degrees C of global warming, we will get 7 to 10 C above the Arctic Circle,…” but your answer is probably climate models. Are those the same models that predicted that the AO would increase? The ones that were trumpeted as validated when the AO was increasing in the 90’s? And now that the AO is decreasing, those models are quietly tossed aside for new speculations that ice loss causes negative (decreased) AO? Bottom line: your claim is an unknown. The decline in sea ice was a well known prediction in the 90’s, yet those models predicted the opposite of what is now happening.

Stephen Richards

Zeke has an agenda to drive. He believes that by making those statements they must be true. Unfortunately, he and his fellow AGWs have provided no empirical evidence for their ‘belief’ and probably never will. I just wish, like Willis, that these people would provide the science with their absolute statements so that we can all become believers like them.

don penman

I don’t think CO2 has had that much effect on temperature in the last hundred years.
wattsupwiththat.com/2011/01/31/some-people-claim-there’s-a-human-to-blame-gw-tiger

vukcevic

To the reader from Huntsville: this is the one you may wish to consider:
http://www.vukcevic.talktalk.net/LFC2.htm

James Sexton

Dave D says:
February 28, 2011 at 11:04 am
What would be the collective climate science response to the statement, “It could be that atmospheric sensitivity to CO2 is essentially zero?”
I believe that scientists measured and listed the following heat absorption figures: 1) Oxygen = X, Nitrogen = X (yes, they were very close to each other); CO2 = Y > X; a mixture of 20% O2 and 80% N2 was then measured and yielded a value VERY close to Y.
======================================================
I’m not familiar with the way this is expressed (there may be a problem with the labeling), but if one were to look at the IR absorption spectrum, http://joannenova.com.au/globalwarming/graphs/CO2_infrared_sorption_Tom.jpg
then one would see the overlap of nitrous oxide and H2O vs. CO2. Note how little unique absorption CO2 provides.

steven mosher

I think many people, including Pielke, misunderstand the purpose of Zeke’s questions.
Before you start a discussion about what you disagree about, it’s important to find out what you agree about.
It would be a good thing if Pielke and others explained what they agreed about.
This is not about taking a poll and deciding the truth. It’s about seeing what agreement there is and what disagreement.

geo

It seems to me there is some psychological need in the human system generally for “one number-itis” to track and explain important issues. And, further, as soon as there is even moderate complexity in the system being tracked by “one number-itis”, all manner of bad things begin happening, both from a communications perspective and from the perspective of even thinking about the issue within a general framework.
Climate sensitivity expressed in global temperature anomaly seems just another example of this.
At some level the urge to simplify is understandable. Over-simplifying is no friend to anyone however. 4 or 5 metrics ought to be few enough to still have an understandable framework to explain to laymen without introducing the really quite significant communication issues (and fights about same) that having one introduces.

George E. Smith

Related to this issue that Roger raises, is a featured article in the Jan 2011 issue of Physics Today, which is a free Journal of the American Institute of Physics. It’s a quite extensive article authored by Peter Humbug; who lurks around c-R. It’s a very readable article and one I think everyone should find and read. It has been mentioned here at WUWT before.
In the very first sentence of this paper, we learn that the Temperature of the earth would be nearly 800,000 K after a billion years of absorbing sunlight with no means of escape. Well I dare say that if the earth is actually 4.5 billion years old, then its Temperature would likely be closer to 3 million Kelvins than 800,000.
Of course it wouldn’t have any of its present atmosphere, and no oceans of course (of H2O). At that 800,000 Kelvins, the mean molecular velocity would likely be greater than the escape velocity, so all of those molecules would be gone; and since mass goes down faster than surface area, as the surface layers of the earth ablated, the incoming solar energy rate, would not decline as fast as the earth mass does, so the Temperature would likely continue to increase indefinitely, until the whole earth had evaporated and blown away.
But of course that model of earth (Peter is a model maker evidently) assumes it has no way of getting rid of any energy.
That’s a strange assumption for a model; because later in the paper, he plots a graph, that purports to be an actual theoretical calculation for the Radiance for LWIR emissions from the earth. Well he calls it “flux”, and he has it overlaid with actual measured “flux” from the AIRS Satellite Instrument.
According to his graph, the earth actually CAN radiate a nearly infinite amount of energy. His actual plotted data, both measured and calculated, have a peak value of close to 0.12 Watts per (square metre-Steradian) at around 550 cm^-1 wavenumber and again at around 750 cm^-1 on the other side of the CO2 hole. The Radiance drops to about 0.07 on the same scale at the Ozone hole at 1050 cm^-1, and then at 1250 cm^-1, it drops to about 0.02 or less for the water hole.
He has similar graphs for both Mars, and Venus; since his theory is applicable to any planet. Of course the detail numbers are different for different planets.
So how is it that such a graph predicts a near infinite amount of radiated energy?
Well let me repeat, the Units of his “flux” are WATTS PER SQUARE METRE PER STERADIAN; or if you prefer W/(m^2.sr). You get the idea; those are units of Radiance; the radiant equivalent to “candle power”.
What they specifically ARE NOT is units of SPECTRAL RADIANCE. So that presumably is the calculated and measured radiance at each and every possible frequency (wave number) within the range of roughly 100 to 2000 cm^-1 as plotted, and there is an infinite number of such frequencies (the thermal spectrum is a continuum), so the total radiated energy is infinite.
The author also states that planets with Temperatures between about 50 K and 1000 K emit IR spectra in the range of 5-50 microns in the far infrared; which he calls the Thermal IR (fine with me).
This thermal IR spectrum is presumably somewhat black body like; modified by some appropriate (spectral) emissivity.
This is a rather weird piece of information, since we know that for planet earth, which roughly averages 288 K and emits 390 W/m^2 total radiant emittance, the corresponding BB spectrum would contain 98% of its energy between the wavelengths of 5.0 microns and 80 microns, with the spectral peak at about 10 microns.
Wow! That is an even longer spectral range than the author claims for the whole planetary Temperature range from 50 K to 1000 K.
The way I figure it (I do not have a Teracomputer), the 1000 K spectrum, would have its peak emittance at 2.88 microns, and 98% of the energy would be between 1.44 microns, and 23 microns; whereas the 50 K planet, would have 98% of its energy between about 29 microns, and 460 microns, with a peak at 58 microns.
Apparently my stick in the sand is not quite as accurate as his Teracomputer.
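As a quick cross-check of just the spectral-peak figures above (the 98% energy bands would need a numerical Planck integration, not attempted here), Wien's displacement law gives:

```python
# Wien's displacement law: peak wavelength (microns) = b / T, with b ~ 2898 micron*K
WIEN_B = 2897.8  # micron * K

for temp_k in (288, 1000, 50):
    print(f"{temp_k} K -> peak at {WIEN_B / temp_k:.2f} microns")
# 288 K -> 10.06, 1000 K -> 2.90, 50 K -> 57.96, matching the figures quoted
```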
Well I could go on, and mention a few other glaring questionable statements; but I would not want anyone to be put off from reading the entire paper.
The most important sentence in the whole paper is the second to last sentence which says:-
“The precise magnitude of the resulting warming depends on the fairly well known amount of amplification by water-vapor feedbacks and the less known amount of cloud feedback.”
The final sentence says:- “There are indeed uncertainties in the magnitude and impact of anthropogenic global warming, but the basic radiative physics of the anthropogenic greenhouse effect is unassailable.”
I would suggest to the author that those two statements are quite incompatible. The radiative Physics cannot be unassailable so long as the cloud feedback is unknown; and it certainly is unknown. Even the most battle hardened AGW warrior can’t tell you how the clouds work to control the Temperature of the earth.
It’s a shame that such a respected Teramodeller can write a paper like this, and even more surprising that Physics Today chose to publish it.
But do read it anyway; he does address some of the myths, such as the CO2 bands being saturated, but I think his explanation leaves a lot to be desired. There’s that “shoulders” of the CO2 absorption band explanation of what adding more CO2 does. Nothing about the same amount of absorption occurring in a thinner atmosphere layer, if the CO2 abundance increases.

nc

So how many 100’s of billions have been spent, and the arguments still cover the basic principles? Amazing.

Bruce of Newcastle

Apologies to Zeke but 2XCO2 has been empirically measured by Dr Spencer and others and found to be about 0.4-0.6 C.
I’ve cross checked this a couple of ways, one using SST’s and another by difference after solar effects are controlled for, and a value around 0.6 C seems to be the number. The feedback must therefore be negative not positive. None of this is modelling, it is a straight analysis of recorded and easily available data.
It may be that the problem is climate scientists seeming to ignore the effects of the sun, even though these are easy to see if you plot HadCRUT vs SCL (or etc) yourself.

steven mosher

err bruce, ya can’t get a sensitivity from observations that way.
Start with Schwartz and avoid his mistakes.

James Sexton:
see the stratosphere.

Here’s a nice little presentation that will give you some ideas, using ocean heat content:
http://www.newton.ac.uk/programmes/CLP/seminars/120812001.html

Carrick

Espen:

“Global mean temperature” is not only indirect, it’s also wrong. Even if we covered the earth with a grid of more than a billion temperature sensors of previously unmatched quality, the arithmetic mean of those readings would be a bad measure of “global warming”, since +10 C in the arctic corresponds to much less incremental atmospheric heat content than +10 C in the tropics.

Sorry, but this description of how global mean temperature is calculated is wrong, and I’m pretty sure Roger would not endorse this explanation. Rather than launch into a tirade or a sermon about how to calculate it correctly, I encourage you to read some of the posts of Zeke on Lucia’s blog.
The short answer to why you don’t need one billion sensors comes from the sampling theory: This theory is the basis for digitization of audio and video signals. If there were an error with this theory, we would viscerally see it and hear it. If you ask how many samples you need to fully capture the Earth’s temperature field, based on a correlation length of 500-km (this is a smaller number than is quoted by Hansen’s group for the correlation length), that works out to around 2000 instruments world wide. By comparison, there are nearly 10,000 fixed instruments located on about 30% of the surface of the Earth from land-based systems alone. The oceans are less well covered, but the correlation lengths go way up.
If there are problems, they are from other sources (at least post 1950).
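For what it's worth, the arithmetic behind both station counts in this exchange can be sketched in a few lines, assuming one instrument per correlation-length-sized cell over the Earth's surface (the 25 km case anticipates the ~800,000 figure argued further down the thread):

```python
import math

EARTH_AREA_KM2 = 4 * math.pi * 6371.0**2   # ~5.1e8 km^2

def stations_needed(correlation_length_km):
    """One instrument per square cell whose side equals the correlation length."""
    return EARTH_AREA_KM2 / correlation_length_km**2

print(round(stations_needed(500)))  # ~2040: the ~2000 instruments cited here
print(round(stations_needed(25)))   # ~816000: the ~800,000-station counterclaim
```

The entire disagreement thus reduces to which correlation length is correct, not to the counting arithmetic itself.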
Roger Pielke Sr:

Secondly, even with respect to surface temperature, it is a flawed metric. For example, what height is this temperature measured at (e.g. the surface, 2m etc). This matters in terms of the anomaly values that are computed as we have shown, for example, in our recent paper …

I will raise to you the same challenge that I have to others. If it matters in a substantial fashion, this error should rear its ugly head in the comparison of surface based measurements to satellite based ones.
It does not.
(There is a difference in the long-term trends among the series, mostly between surface versus satellite, but I believe this is explained by the difference in where the satellites are sampling the atmosphere, at roughly 8000 m up, compared to 1-2 m for surface measurements. Even if we posit that the difference is due to an error in accuracy of the surface measurements, that still just amounts to 10% of the total observed trend in temperature.)

Al Tekhasski

Zeke wrote:
“… surface temperature response to doubling CO2 (e.g. standard climate sensitivity) is not a full description of the effects of a climate forcing, but rather a limited subset of those effects. However, discussing how the full effects (which are characterized by even more uncertainty than sensitivity itself) should be communicated to the public and policy makers is a separate issue from what our best understanding of the science says about climate sensitivity as generally defined.”
This is what really pisses me off. Maybe before trying to “communicate” some “generally defined” concepts to “the public”, one needs to communicate to himself that a change in globally-averaged surface temperature IS NOT A PROXY FOR CHANGES IN HEAT CONTENT of the climate system. It was shown mathematically on the JCurry blog that a three-dimensional FIELD of temperature cannot be meaningfully characterized by a single number (“global temperature”) when the field is not spherically symmetrical. It is quite obvious, and can be shown with simple examples, that the global average can go up while the system is globally cooling, and the global average can go down even if the system is gaining heat. More importantly, the global average temperature can go up or down even if the heat content does not change at all. Please communicate this to yourself first.
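A toy two-box example of that last claim (the numbers are invented purely for illustration): with unequal heat capacities, the area-average temperature can rise while the total heat content falls.

```python
# Two equal-area boxes with very different heat capacities (arbitrary units).
c_tropics, c_arctic = 10.0, 1.0       # heat capacities
dt_tropics, dt_arctic = -1.0, 4.0     # temperature changes, C

mean_temp_change = (dt_tropics + dt_arctic) / 2.0            # equal-area average
heat_content_change = c_tropics * dt_tropics + c_arctic * dt_arctic

print(mean_temp_change)      # +1.5: the "global average temperature" rises
print(heat_content_change)   # -6.0: the system actually loses heat
```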

Roy Clark

The concept of CO2 induced ‘climate sensitivity’ is based on a fundamental misunderstanding of climate energy transfer. Over the last 50 years or so, the atmospheric CO2 concentration has increased by about 70 ppm to 380 ppm. This has produced an increase in the downward long wave IR (LWIR) atmospheric flux at the Earth’s surface of about 1.2 W.m-2. Over the last 200 years these numbers go up to 100 ppm and 1.7 W.m-2.
There is no such thing as a ‘global average temperature’. Each point on the Earth’s surface interacts with the local energy flux to produce a local surface temperature. This changes continuously on both a daily and a seasonal time frame. The solar flux at the surface can easily vary from zero to 1000 W.m-2. The night time LWIR emission can vary between zero and 100 W.m-2, depending on cloud cover and humidity.
Under full summer sun conditions the ‘dry’ land cooling balance is about 200 W.m-2 in excess LWIR emission and 800 W.m-2 in moist convection at a surface temperature that can easily can reach 50 C.
In terms of a daily energy budget, the solar flux can reach up to ~25 MJ.m-2.day-1. This is balanced by about ~15 MJ.m-2.day-1 in moist convection and ~10 MJ.m-2.day-1 in LWIR emission. The heat capacity of the ground is ~1.5 MJ.m-3 and ~3 MJ of the total daily flux is stored and released by the daily surface heating. At 1.7 W.m-2, the total daily LWIR flux increase from CO2 is ~0.15 MJ.m-2.day-1. This corresponds to about 2.5 minutes of full summer sun, or the evaporation of a film of water 65 microns thick over an area of 1 m^2. It is simply impossible to detect the surface temperature change from an increase of 100 ppm in atmospheric CO2 concentration.
Over the oceans it is even easier. The LWIR emission from CO2 is absorbed within 100 microns of the ocean surface and is coupled directly into the surface evaporation. An increase of 65 microns per day in water evaporation is just too small to detect in the wind and surface temperature driven variations in ocean surface evaporation. The long term average uncertainty in ocean evaporation measurements is 2.7 cm per year. It is impossible for a 100 ppm increase in CO2 concentration to cause any change in ocean temperatures.
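A quick arithmetic check of the 0.15 MJ and 65 micron figures above, assuming a latent heat of vaporization of about 2.26 MJ/kg and a water density of 1000 kg/m^3:

```python
flux = 1.7                        # W/m^2, the claimed 200-year LWIR increase
daily_energy = flux * 86400.0     # J/m^2/day
print(daily_energy / 1e6)         # ~0.147 MJ/m^2/day, i.e. ~0.15 as stated

LATENT_HEAT = 2.26e6              # J/kg, latent heat of vaporization (assumed)
water_mass = daily_energy / LATENT_HEAT      # kg evaporated per m^2 per day
film_microns = water_mass / 1000.0 * 1e6     # film thickness at 1000 kg/m^3
print(film_microns)               # ~65 microns, matching the figure quoted
```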
The other problem is that the surface temperature was not usually measured before satellite observations, and the meteorological surface air temperature (MSAT) measured in an enclosure at eye level above the ground has been substituted for the surface temperature. The observed ‘climate sensitivity’ over the last 50 years or so is just the changes in ocean surface temperature with urban heat island effects added, along with downright fraudulent manipulation of the ‘climate averages’. Ocean surface temperatures have been decreasing for at least the last 10 years and will probably continue to do so for another 20.
The climate prediction models have been ‘fixed’ to predict global warming by using a set of fraudulent ‘radiative forcing constants’. The hockey stick has been used as ‘calibration’ to predict more hockey stick. This is just climate astrology.
The only climate sensitivity is from the sun. A sunspot index of 100 corresponds to an increase in the solar ‘constant’ of about 1 W.m-2. The change in ellipticity of the Earth’s orbit over a Milankovitch Ice Age cycle is about -1 W.m-2. Over 10,000 years this is sufficient to bring the Earth in or out of an Ice Age, just through the change in ocean heat content. A Maunder Minimum requires about 50 to 100 years with few sunspots. We are going to find out about real climate sensitivity if the sunspot index stays low.

Al Tekhasski

Carrick wrote:
“The short answer to why you don’t need one billion sensors comes from the sampling theory: This theory is the basis for digitization of audio and video signals. If there were an error with this theory, we would viscerally see it and hear it. If you ask how many samples you need to fully capture the Earth’s temperature field, based on a correlation length of 500-km (this is a smaller number than is quoted by Hansen’s group for the correlation length), that works out to around 2000 instruments world wide.”
Totally wrong. We could “viscerally see and hear” disconnects in video and audio signals, but it doesn’t affect much of the information we receive. We might see some “snow” or “popping”, but old VHS tapes or vinyl records could still be quite enjoyable. More, while the information content is still fine, estimates of some average of the signal could be through the roof, a technique that was used in early copy protection of VHS tapes. Even more, modern communication channels (such as USB) explicitly define “isochronous streams” where it is not necessary to retry blocks of signal if the checksum does not match, and the data block is simply ignored. These are the streams dedicated to video and audio information. So you could not be more wrong.
Regarding temperatures, you do need much higher density of sampling. As I reported elsewhere, there are many pairs of stations that are just 50-60km apart while exhibiting OPPOSITE century-long temperature trends. It means that you/we have no clue what is actually happening to climate-scale trends between stations that are 300-500 miles apart. It means that the field is undersampled. How badly? Opposite warming-cooling trends on a distance of 50km means that you need AT LEAST 25km REGULAR grid, which translates into about 800,000 stations world-wide.
So, your assertion that “it works out” to 2000 stations is unconditionally, blatantly wrong. Please get yourself familiar with Nyquist-Shannon-Kotelnikov-Whittaker sampling theorem, and study the effect called “aliasing” when sampling density does not conform to the sampling theorem.

rpielke

Carrick – An anomaly of +10 C at cold temperatures has a smaller effect on outgoing radiation than an anomaly of +10 C at warm temperatures. This is due to the Stefan-Boltzmann relationship: outgoing emission is proportional to the 4th power of temperature.
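A quick numerical illustration of the T^4 point (a sketch; the 250 K and 300 K base temperatures are merely illustrative of cold versus warm regions):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def extra_emission(t_base, anomaly=10.0):
    """Increase in blackbody emission for a +10 C anomaly at a given base T (K)."""
    return SIGMA * ((t_base + anomaly)**4 - t_base**4)

print(f"{extra_emission(250.0):.1f} W/m^2")  # cold base: ~37.6 W/m^2
print(f"{extra_emission(300.0):.1f} W/m^2")  # warm base: ~64.4 W/m^2
```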
With respect to the relationship between surface and lower tropospheric temperatures, and their divergence from one another, please see our paper
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841. http://pielkeclimatesci.wordpress.com/files/2009/11/r-345.pdf
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to: “An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841”, J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655.
http://pielkeclimatesci.wordpress.com/files/2010/03/r-345a.pdf

Carrick

Alex, you and I aren’t going to see eye to eye on this, if for no other reason than that you refuse to use an analytic approach to buttress your arguments, and you prefer to substitute ugly personal attacks for reasoned argument. Sorry, but I just find that very boring.
My comments about the number of locations needed, based on a correlation length of 500 km and a (band-limited) sampling period of 1 month/sample, follow straight from the sampling theorem. So either there’s something wrong with the sampling theorem, or there’s something wrong with the statement that the correlation length for a 1-month band-limited signal is 500 km.
Those are the only two possibilities.
Your discussion about old VHS systems, etc. is so incoherent or OT that I have no idea what you are even trying to drive at, other than some petty desire to show me “totally wrong”.
Address the science, or expect to be ignored.

Bruce of Newcastle

steven mosher says:
February 28, 2011 at 2:44 pm
Steven – I appreciate your work very much, but sorry, I respectfully disagree. Are you rejecting the correlation between SCL and temperature? Looks quite clear to me statistically and visually. It’s there in the CET too, clearer post-Laki. We can argue about the magnitude and causation, but unless someone finds a new dataset that correlation isn’t going away.

johanna

Steven Mosher said:
It would be a good thing if Pielke and others explained what they agreed about.
This is not about taking a poll and deciding the truth. It’s about seeing what agreement there is and what disagreement.
—————————————————————
It is perfectly reasonable to ask Dr Pielke what his views on a particular topic are, and decide whether or not you agree with them, and debate them. But, there are no ‘others’, at least in a real scientific discussion. Convenient as it would be for those who do not buy the IPCC view of the world to form a nice homogeneous bloc, it is not the case. Given the vast range of disciplines and often uncharted territory involved, there would be something very wrong if it was the case.

Carrick

Roger Pielke, thanks for the reply.
First, to follow up on my previous comment: I was unintentionally a bit misleading in stating the difference as 10% (sorry, that was my age-degenerated recollection at work).
Using a robust estimator (minimum of absolute deviations) for the 1980-2010 period inclusive, I get 1.62°C/century for GISTEMP and 1.57°C/century for HADCRUT3vGL for the surface measurements, and 1.32°C/century for UAH and 1.67°C/century for RSS for satellite. I think a more accurate statement would be that the difference in trend could be as large as 25% (though RSS largely agrees with the surface temperature set).
How much of the difference between the sets is due to boundary layer physics versus surface/elevated measurements is of real interest to me. I have seen your papers and have found them very interesting in this context (especially since I occasionally am involved in the collection of boundary layer data myself). I’ll sit down and give it another read tonight.
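For readers who want to reproduce this kind of number, here is a minimal sketch of a least-absolute-deviations trend fit; nothing here assumes the particular series used above, and the synthetic data are invented purely for the demo:

```python
import numpy as np
from scipy.optimize import minimize

def lad_trend(years, temps):
    """Least-absolute-deviations (L1) linear trend, in degrees per year."""
    def cost(p):
        slope, intercept = p
        return np.sum(np.abs(temps - (slope * years + intercept)))
    p0 = np.polyfit(years, temps, 1)          # least-squares starting guess
    return minimize(cost, p0, method="Nelder-Mead").x[0]

# Synthetic demo: a 1.5 C/century trend plus noise and one gross outlier.
rng = np.random.default_rng(0)
yrs = np.arange(1980.0, 2011.0)
temps = 0.015 * (yrs - 1980.0) + rng.normal(0.0, 0.1, yrs.size)
temps[5] += 3.0                               # outlier the L1 fit should resist
print(f"{lad_trend(yrs, temps) * 100:.2f} C/century")
```

The L1 cost is what makes the estimator robust: a single wild data point shifts the fitted slope far less than it would under ordinary least squares.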

mpaul

steven mosher says:
February 28, 2011 at 1:47 pm
“It would be a good thing if Pielke and others explained what they agreed about.”
Mosh, how about this:
(1) It’s gotten warmer since the last mini ice age
(2) Dumping large amounts of CO2 and other by-products of fossil fuel combustion into the atmosphere is hazardous to human life and to the common welfare
(3) We should be aggressively pursuing alternative sources of cheap clean energy as a national priority
(4) Climate science has been corrupted by politics, and some prominent climate scientists are willing to falsify results and cherry-pick data in order to achieve power, fame and career advancement.
(5) Prominent climate scientists have systematically overstated confidence in results and the degree of scientific consensus
(6) The climate science community is unwilling and unable to enforce a meaningful code of professional conduct.
(7) Global mean surface temperature is meaningless and attempts at measuring it are illusory.
(8) Modeling of the climate with any degree of predictive power (beyond a few weeks) is well beyond the current state of technology
(9) Current estimates of climate sensitivity are unsupported by direct evidence
(10) We have not been able to characterize natural variability in the climate and as such we are unable to determine the significance of the anecdotal warming

Philip Mulholland

vukcevic says:
February 28, 2011 at 10:51 am

In the not too distant future, graphics of the above type may have the geomagnetic field added, or at least I hope….

I agree, let’s hope it also includes:-
Chemical weathering of igneous rocks, in particular those containing plagioclase feldspars and the pedological processes that create mature regoliths which release calcium and magnesium cations into the hydrosphere.
The organic and inorganic sequestering of oxidised sulphur and carbon, occurring as water soluble anions, leading to an increase in the lithic reservoir of sulphate and carbonate rocks.
Adjustments in the areal extent of the hydrosphere to take account of tectonic and isostatic changes in relief and coastal processes of sedimentary erosion and accretion.
Changes in ocean basin morphology associated with continental drift, deep ocean crustal sinking, the growth of sedimentary fans, the development of submarine ridges by tectonic accretion, volcanic extrusion of submarine lavas and the impact of all of these on deep water mobility.
Changes in the organisation of deep marine currents in response to alterations in the rate of formation of polar cold dense bottom water that is intimately associated with the growth and destruction of continental icecaps. The competing formation of warm dense tropical bottom water associated with the growth and destruction of carbonate ramps and reefs in epeiric seas that are sensitive to gross sea level changes, particularly those sea level falls associated with the growth of said continental icecaps.
And so on, and so on…
One of the Apollo astronauts, on his return to earth, was asked what is the biggest difference between the earth and the moon? He said “on earth everything moves”.