Guest post by Dr. Roger Pielke Senior
Missing The Major Point Of “What Is Climate Sensitivity”
There is a post by Zeke on Blackboard titled Agreeing [See also the post on Climate Etc Agreeing(?)].
Zeke starts the post with the text
“My personal pet peeve in the climate debate is how much time is wasted on arguments that are largely spurious, while more substantive and interesting subjects receive short shrift.”
I agree with this view, but conclude that Zeke is missing a fundamental issue.
Zeke writes
“Climate sensitivity is somewhere between 1.5 C and 4.5 C for a doubling of carbon dioxide, due to feedbacks (primarily water vapor) in the climate system…”
The use of the terminology “climate sensitivity” attributes an importance to this temperature range within the climate system that does not exist. The range of “1.5 C and 4.5 C for a doubling of carbon dioxide” refers to a global annual average surface temperature anomaly that is not even directly measurable, and even its interpretation is unclear, as we discussed in the paper: Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.
This view of a surface temperature anomaly expressed by “climate sensitivity” is grossly misleading the public and policymakers as to what are the actual climate metrics that matter to society and the environment. A global annual average surface temperature anomaly is almost irrelevant for any climatic feature of importance.
Even with respect to the subset of climate effects that is referred to as global warming, the appropriate climate metric is heat changes as measured in Joules (e.g. see). The global annual average surface temperature anomaly is only useful to the extent it correlates with the global annual average climate system heat anomaly [most of which occurs within the upper oceans]. Such heating, if it occurs, is important as it is one component (the “steric component”) of sea level rise and fall.
For other societally and environmentally important climate effects, it is the regional atmospheric and ocean circulation patterns that matter. An accurate use of the terminology “climate sensitivity” would refer to the extent that these circulation patterns are altered due to human and natural climate forcings and feedbacks. As discussed in the excellent post on Judy Curry’s weblog, finding this sensitivity is a daunting challenge.
I have proposed definitions which could be used to advance the discussion of what we “agree on”, in my post
The Terms “Global Warming” And “Climate Change” – What Do They Mean?
As I wrote there
Global Warming is an increase in the heat (in Joules) contained within the climate system. The majority of this accumulation of heat occurs in the upper 700m of the oceans.
Global Cooling is a decrease in the heat (in Joules) contained within the climate system. The majority of this loss of heat occurs in the upper 700m of the oceans.
Global warming and cooling occur within each year as shown, for example, in Figure 4 in
Ellis et al. 1978: The annual variation in the global heat balance of the Earth. J. Geophys. Res., 83, 1958-1962.
Multi-decadal global warming or cooling involves a long-term imbalance between the global warming and cooling that occurs each year.
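The distinction between a temperature anomaly and heat in Joules can be made concrete with a back-of-envelope conversion. This is a rough sketch only: the ocean area, layer depth, density, and specific heat below are round-number assumptions, not measured figures.

```python
# Back-of-envelope: convert a uniform upper-ocean temperature anomaly
# into heat content (Joules). All inputs are round-number assumptions.
OCEAN_AREA_M2 = 3.6e14   # roughly 71% of Earth's surface
LAYER_DEPTH_M = 700.0    # the upper-ocean layer cited above
RHO_SEAWATER = 1025.0    # density, kg/m^3
CP_SEAWATER = 3990.0     # specific heat, J/(kg K)

def heat_anomaly_joules(delta_t_kelvin):
    """Heat gained by the upper 700 m for a uniform temperature change."""
    mass_kg = OCEAN_AREA_M2 * LAYER_DEPTH_M * RHO_SEAWATER
    return mass_kg * CP_SEAWATER * delta_t_kelvin

# A uniform +0.1 K warming of the layer stores on the order of 1e23 J.
print(f"{heat_anomaly_joules(0.1):.2e} J")
```

The point of the exercise: a tenth-of-a-degree anomaly in the upper ocean corresponds to roughly 10^23 J, which is why heat content, not the surface anomaly, is the physically meaningful metric.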
Climate Change involves any alteration in the climate system, which is schematically illustrated in the figure below (from NRC, 2005), that persists for an (arbitrarily defined) long enough time period.
Shorter term climate change is referred to as climate variability. An example of a climate change is if a growing season 20 year average of 100 days was reduced by 10 days in the following 20 years. Climate change includes changes in the statistics of weather (e.g. extreme events such as droughts, land falling hurricanes, etc), but also includes changes in other climate system components (e.g. alterations in the pH of the oceans, changes in the spatial distribution of malaria carrying mosquitos, etc).
The recognition that climate involves much more than global warming and cooling is a very important issue. We can have climate change (as defined in this weblog post) without any long-term global warming or cooling. Such climate change can occur both due to natural and human causes.”
It is within this framework of definitions that Zeke and Judy should solicit feedback in response to their recent posts. I recommend a definition of “climate sensitivity” as
Climate Sensitivity is the response of the statistics of weather (e.g. extreme events such as droughts, land falling hurricanes, etc), and other climate system components (e.g. alterations in the pH of the oceans, changes in the spatial distribution of malaria carrying mosquitos, etc) to a climate forcing (e.g. added CO2, land use change, solar output changes, etc). This more accurate definition of climate sensitivity is what should be discussed rather than the dubious use of a global annual average surface temperature anomaly for this purpose.

Zeke has an agenda to drive. He believes that by making those statements they must be true. Unfortunately, he and his fellow AGWs have provided no empirical evidence for their ‘belief’ and probably never will. I just wish, like Willis, that these people would provide the science with their absolute statements so that we can all become believers like them.
I don’t think CO2 has had that much effect on temperature in the last hundred years.
wattsupwiththat.com/2011/01/31/some-people-claim-there’s-a-human-to-blame-gw-tiger
To the reader from Huntsville: this is the one you may wish to consider:
http://www.vukcevic.talktalk.net/LFC2.htm
Dave D says:
February 28, 2011 at 11:04 am
What would be the collective climate science response to the statement, “It could be that atmospheric sensitivity to CO2 is essentially zero”?
I believe that scientists measured and listed the following heat absorption figures: 1) Oxygen = X, Nitrogen = X (yes, they were very close to each other); CO2 = Y > X; a mixture of 20% O2 and 80% N2 was then measured and yielded a value VERY close to Y.
======================================================
I’m not familiar with the way this is expressed, and there may be a problem with the labeling, but if one were to look at the IR absorption spectrum, http://joannenova.com.au/globalwarming/graphs/CO2_infrared_sorption_Tom.jpg
then one would see the overlap of nitrous oxide and H2O vs. CO2. Note how little unique absorption CO2 provides.
I think many people, including Pielke, misunderstand the purpose of Zeke’s questions.
Before you start a discussion about what you disagree about, it’s important to find out what you agree about.
It would be a good thing if Pielke and others explained what they agreed about.
This is not about taking a poll and deciding the truth. It’s about seeing what agreement there is and what disagreement.
It seems to me there is some psychological need in the human system generally for “one number-itis” to track and explain important issues. And, further, that as soon as there is even moderate complexity in the system being tracked by “one number-itis”, all manner of bad things begin happening, both from a communications perspective and from the perspective of even thinking about the issue within a coherent framework.
Climate sensitivity expressed as a global temperature anomaly seems just another example of this.
At some level the urge to simplify is understandable. Over-simplifying is no friend to anyone, however. Four or five metrics ought to be few enough to still have an understandable framework to explain to laymen, without introducing the really quite significant communication issues (and fights about same) that having just one introduces.
Related to this issue that Roger raises, is a featured article in the Jan 2011 issue of Physics Today, which is a free Journal of the American Institute of Physics. It’s a quite extensive article authored by Peter Humbug; who lurks around c-R. It’s a very readable article and one I think everyone should find and read. It has been mentioned here at WUWT before.
In the very first sentence of this paper, we learn that the Temperature of the earth would be nearly 800,000 K after a billion years of absorbing sunlight with no means of escape. Well I dare say that if the earth is actually 4.5 billion years old, then its Temperature would likely be closer to 3 million Kelvins than 800,000.
Of course it wouldn’t have any of its present atmosphere, and no oceans of course (of H2O). At that 800,000 Kelvins, the mean molecular velocity would likely be greater than the escape velocity, so all of those molecules would be gone; and since mass goes down faster than surface area, as the surface layers of the earth ablated, the incoming solar energy rate, would not decline as fast as the earth mass does, so the Temperature would likely continue to increase indefinitely, until the whole earth had evaporated and blown away.
But of course that model of earth (Peter is a model maker evidently) assumes it has no way of getting rid of any energy.
That’s a strange assumption for a model; because later in the paper, he plots a graph, that purports to be an actual theoretical calculation for the Radiance for LWIR emissions from the earth. Well he calls it “flux”, and he has it overlaid with actual measured “flux” from the AIRS Satellite Instrument.
According to his graph, the earth actually CAN radiate a nearly infinite amount of energy. His actual plotted data, both measured and calculated, have a peak value of close to 0.12 Watts per (square metre-Steradian) at around 550 cm^-1 wavenumber and again at around 750 cm^-1 on the other side of the CO2 hole. The Radiance drops to about 0.07 on the same scale at the Ozone hole at 1050 cm^-1, and then at 1250 cm^-1, it drops to about 0.02 or less for the water hole.
He has similar graphs for both Mars, and Venus; since his theory is applicable to any planet. Of course the detail numbers are different for different planets.
So how is it that such a graph predicts a near infinite amount of radiated energy?
Well let me repeat, the Units of his “flux” are WATTS PER SQUARE METRE PER STERADIAN; or if you prefer W/(m^2.sr). You get the idea; those are units of Radiance; the radiant equivalent to “candle power”.
What they specifically ARE NOT is units of SPECTRAL RADIANCE. So that presumably is the calculated and measured radiance at each and every possible frequency (wave number) within the range of roughly 100 to 2000 cm^-1 as plotted, and there is an infinite number of such frequencies (the thermal spectrum is a continuum), so the total radiated energy is infinite.
The author also states that planets with Temperatures between about 50 K and 1000 K emit IR spectra in the range of 5-50 microns in the far infrared; which he calls the Thermal IR (fine with me).
This thermal IR spectrum is presumably somewhat black body like; modified by some appropriate (spectral) emissivity.
This is a rather weird piece of information, since we know that for planet earth, which roughly averages 288 K and emits 390 W/m^2 total radiant emittance, the corresponding BB spectrum would contain 98% of its energy between the wavelengths of 5.0 microns and 80 microns, with the spectral peak at about 10 microns.
Wow! That’s an even longer spectral range than the author claims for the whole planetary Temperature range from 50 K to 1000 K.
The way I figure it (I do not have a Teracomputer), the 1000 K spectrum, would have its peak emittance at 2.88 microns, and 98% of the energy would be between 1.44 microns, and 23 microns; whereas the 50 K planet, would have 98% of its energy between about 29 microns, and 460 microns, with a peak at 58 microns.
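The peak wavelengths quoted here follow from Wien’s displacement law, peak wavelength (microns) ≈ 2898 / T (kelvins); a quick check, no Teracomputer required (the 2898 micron-kelvin constant is the standard value, and the results land close to the 2.88 and 58 micron figures above):

```python
# Wien's displacement law: peak emission wavelength of a blackbody.
WIEN_CONSTANT_UM_K = 2898.0  # displacement constant, micron-kelvins

def peak_wavelength_um(temp_k):
    """Wavelength in microns at which blackbody emission peaks."""
    return WIEN_CONSTANT_UM_K / temp_k

for t_k in (288, 1000, 50):
    print(t_k, "K ->", round(peak_wavelength_um(t_k), 2), "microns")
```

This gives roughly 10 microns for a 288 K earth, 2.9 microns at 1000 K, and 58 microns at 50 K, matching the stick-in-the-sand figures.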
Apparently my stick in the sand is not quite as accurate as his Teracomputer.
Well I could go on, and mention a few other glaring questionable statements; but I would not want anyone to be put off from reading the entire paper.
The most important sentence in the whole paper is the second to last sentence which says:-
“The precise magnitude of the resulting warming depends on the fairly well known amount of amplification by water-vapor feedbacks and the less known amount of cloud feedback.”
The final sentence says:- “There are indeed uncertainties in the magnitude and impact of anthropogenic global warming, but the basic radiative physics of the anthropogenic greenhouse effect is unassailable.”
I would suggest to the author that those two statements are quite incompatible. The radiative Physics cannot be unassailable so long as the cloud feedback is unknown; and it certainly is unknown. Even the most battle hardened AGW warrior can’t tell you how the clouds work to control the Temperature of the earth.
It’s a shame, that such a respected Teramodeller, can write a paper like this, and even more surprising that Physics Today chose to publish it.
But do read it anyway; he does address some of the myths; such as the CO2 bands being saturated; but I think his explanation leaves a lot to be desired. There’s that “shoulders” of the CO2 absorption band explanation of what adding more CO2 does. Nothing about the same amount of absorption occurring in a thinner atmosphere layer, if the CO2 abundance increases.
So how many 100's of billions have been spent, and the arguments still cover the basic principles. Amazing.
Apologies to Zeke, but 2XCO2 has been empirically measured by Dr Spencer and others and found to be about 0.4-0.6 C.
I’ve cross checked this a couple of ways, one using SST’s and another by difference after solar effects are controlled for, and a value around 0.6 C seems to be the number. The feedback must therefore be negative not positive. None of this is modelling, it is a straight analysis of recorded and easily available data.
It may be that the problem is climate scientists seeming to ignore the effects of the sun, even though these are easy to see if you plot HadCRUT vs SCL (etc.) yourself.
Err, Bruce, you can’t get a sensitivity from observations that way.
Start with Schwartz and avoid his mistakes.
James Sexton: see the stratosphere.
Here’s a nice little presentation that will give you some ideas, using ocean heat content:
http://www.newton.ac.uk/programmes/CLP/seminars/120812001.html
Espen:
Sorry but this description of how global mean temperature is calculated is wrong, and I’m pretty sure Roger would not endorse this explanation. Rather than launch into a tirade or a sermon about how to calculate it correctly, I encourage you to read some of the posts of Zeke on Lucia’s blog.
The short answer to why you don’t need one billion sensors comes from the sampling theory: This theory is the basis for digitization of audio and video signals. If there were an error with this theory, we would viscerally see it and hear it. If you ask how many samples you need to fully capture the Earth’s temperature field, based on a correlation length of 500-km (this is a smaller number than is quoted by Hansen’s group for the correlation length), that works out to around 2000 instruments world wide. By comparison, there are nearly 10,000 fixed instruments located on about 30% of the surface of the Earth from land-based systems alone. The oceans are less well covered, but the correlation lengths go way up.
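The “around 2000 instruments” figure can be reproduced by dividing the Earth’s surface area by the area one instrument covers at a 500-km correlation length. This is a rough sketch of the counting argument only, not a formal application of the sampling theorem:

```python
import math

# Crude version of the station-count argument: one instrument per
# correlation cell (500 km x 500 km) over the Earth's whole surface.
EARTH_RADIUS_KM = 6371.0

def stations_needed(correlation_length_km):
    """Earth's surface area divided by one correlation cell."""
    surface_km2 = 4.0 * math.pi * EARTH_RADIUS_KM ** 2
    return surface_km2 / correlation_length_km ** 2

print(round(stations_needed(500)))  # about 2040 world-wide
```

With a 500-km cell this comes out just over 2000, consistent with the number quoted above.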
If there are problems, they are from other sources (at least post 1950).
Roger Pielke Sr:
I will raise to you the same challenge that I have to others. If it matters in a substantial fashion, this error should rear its ugly head in the comparison of surface based measurements to satellite based ones.
It does not.
(There is a difference in the long-term trends among the series, mostly between surface versus satellite, but I believe this is explained by the difference in where the satellites sample the atmosphere, at roughly 8000 m up, compared to 1-2 m for surface measurements. Even if we posit that the difference is due to an error in accuracy of the surface measurements, that still amounts to just 10% of the total observed trend in temperature.)
Zeke wrote:
“… surface temperature response to doubling CO2 (e.g. standard climate sensitivity) is not a full description of the effects of a climate forcing, but rather a limited subset of those effects. However, discussing how the full effects (which are characterized by even more uncertainty than sensitivity itself) should be communicated to the public and policy makers is a separate issue from what our best understanding of the science says about climate sensitivity as generally defined.”
This is what really pisses me off. Maybe before trying to “communicate” some “generally defined” concepts to “the public”, one needs to communicate to himself that a change in globally-averaged surface temperature IS NOT A PROXY FOR CHANGES IN HEAT CONTENT of the climate system. It was shown mathematically on JCurry’s blog that a three-dimensional FIELD of temperature cannot be meaningfully characterized by a single number (“global temperature”) when the field is not spherically symmetrical. It is quite obvious, and can be shown with simple examples, that the global average can go up while the system is globally cooling, and the global average can go down even if the system is gaining heat. More importantly, the global average temperature can go up or down even if the heat content does not change at all. Please communicate this to yourself first.
The concept of CO2 induced ‘climate sensitivity’ is based on a fundamental misunderstanding of climate energy transfer. Over the last 50 years or so, the atmospheric CO2 concentration has increased by about 70 ppm to 380 ppm. This has produced an increase in the downward long wave IR (LWIR) atmospheric flux at the Earth’s surface of about 1.2 W.m-2. Over the last 200 years these numbers go up to 100 ppm and 1.7 W.m-2.
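The flux figures quoted here are roughly consistent with the commonly used logarithmic forcing approximation, delta-F = 5.35 ln(C/C0) W.m-2. A quick check under the comment’s concentration assumptions (the 310 and 280 ppm baselines are inferred from the 70 ppm and 100 ppm increases stated above):

```python
import math

def co2_forcing_wm2(c_ppm, c0_ppm):
    """Simplified logarithmic CO2 forcing: delta-F = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing_wm2(380, 310), 2))  # ~1.1 W/m^2 over ~50 years
print(round(co2_forcing_wm2(380, 280), 2))  # ~1.6 W/m^2 over ~200 years
```

The formula gives about 1.1 and 1.6 W.m-2, close to the 1.2 and 1.7 W.m-2 quoted in the comment.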
There is no such thing as a ‘global average temperature’. Each point on the Earth’s surface interacts with the local energy flux to produce a local surface temperature. This changes continuously on both a daily and a seasonal time frame. The solar flux at the surface can easily vary from zero to 1000 W.m-2. The night time LWIR emission can vary between zero and 100 W.m-2, depending on cloud cover and humidity.
Under full summer sun conditions the ‘dry’ land cooling balance is about 200 W.m-2 in excess LWIR emission and 800 W.m-2 in moist convection, at a surface temperature that can easily reach 50 C.
In terms of a daily energy budget, the solar flux can reach up to ~25 MJ.m-2.day-1. This is balanced by about ~15 MJ.m-2.day-1 in moist convection and ~10 MJ.m-2.day-1 in LWIR emission. The heat capacity of the ground is ~1.5 MJ.m-3 and ~3 MJ of the total daily flux is stored and released by the daily surface heating. At 1.7 W.m-2, the total daily LWIR flux increase from CO2 is ~0.15 MJ.m-2.day-1. This corresponds to about 2.5 minutes of full summer sun or the evaporation of a film of water 65 micron thick over an area of 1 m^2. It is simply impossible to detect the surface temperature change from an increase of 100 ppm in atmospheric CO2 concentration.
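The “2.5 minutes of full summer sun” and “65 micron of water” equivalences are straightforward unit conversions, which can be checked directly. The full-sun flux, latent heat, and water density below are standard round values, assumed for the check; the results come out close to the quoted figures:

```python
# Check the unit conversions in the paragraph above.
SECONDS_PER_DAY = 86400.0
FULL_SUN_WM2 = 1000.0           # assumed full summer sun flux
LATENT_HEAT_J_PER_KG = 2.45e6   # latent heat of vaporization of water
WATER_DENSITY_KG_M3 = 1000.0

extra_flux_wm2 = 1.7  # the quoted LWIR increase from CO2
daily_extra_j = extra_flux_wm2 * SECONDS_PER_DAY

daily_extra_mj = daily_extra_j / 1e6               # MJ per m^2 per day
sun_minutes = daily_extra_j / FULL_SUN_WM2 / 60.0  # minutes of full sun
evap_microns = daily_extra_j / LATENT_HEAT_J_PER_KG / WATER_DENSITY_KG_M3 * 1e6

print(round(daily_extra_mj, 2), round(sun_minutes, 1), round(evap_microns))
```

This gives roughly 0.15 MJ.m-2.day-1, about 2.4 minutes of full sun, and about a 60 micron film of evaporated water, in line with the comment’s arithmetic.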
Over the oceans it is even easier. The LWIR emission from CO2 is absorbed within 100 micron of the ocean surface and is coupled directly into the surface evaporation. An increase of 65 micron per day in water evaporation is just too small to detect in the wind and surface temperature driven variations in ocean surface evaporation. The long term average uncertainty in ocean evaporation measurements is 2.7 cm per year. It is impossible for a 100ppm increase in CO2 concentration to cause any change in ocean temperatures.
The other problem is that the surface temperature was not usually measured before satellite observations, and the meteorological surface air temperature (MSAT), measured in an enclosure at eye level above the ground, has been substituted for the surface temperature. The observed ‘climate sensitivity’ over the last 50 years or so is just the changes in ocean surface temperature with urban heat island effects added, along with downright fraudulent manipulation of the ‘climate averages’. Ocean surface temperatures have been decreasing for at least the last 10 years and will probably continue to do so for another 20.
The climate prediction models have been ‘fixed’ to predict global warming by using a set of fraudulent ‘radiative forcing constants’. The hockey stick has been used as ‘calibration’ to predict more hockey stick. This is just climate astrology.
The only climate sensitivity is from the sun. A sunspot index of 100 corresponds to an increase in the solar ‘constant’ of about 1 W.m-2. The change in ellipticity of the Earth’s orbit over a Milankovitch Ice Age cycle is about -1 W.m-2. Over 10,000 years this is sufficient to bring the Earth into or out of an Ice Age, just through the change in ocean heat content. A Maunder Minimum requires about 50 to 100 years with few sunspots. We are going to find out about real climate sensitivity if the sunspot index stays low.
Carrick wrote:
“The short answer to why you don’t need one billion sensors comes from the sampling theory: This theory is the basis for digitization of audio and video signals. If there were an error with this theory, we would viscerally see it and hear it. If you ask how many samples you need to fully capture the Earth’s temperature field, based on a correlation length of 500-km (this is a smaller number than is quoted by Hansen’s group for the correlation length), that works out to around 2000 instruments world wide.”
Totally wrong. We could “viscerally see and hear” disconnects in video and audio signals, but it doesn’t affect much of the information we receive. We might see some “snow” or “popping”, yet old VHS tapes or vinyls could still be quite enjoyable. More, while the information content is still fine, estimates of some average of the signal could be through the roof, a technique that was used in early copy protection of VHS tapes. Even more, modern communication channels (such as USB) explicitly define “isochronous streams”, where it is not necessary to retry blocks of signal if the checksum does not match; the data block is simply ignored. These are the streams dedicated to video and audio information. So you could not be more wrong.
Regarding temperatures, you do need much higher density of sampling. As I reported elsewhere, there are many pairs of stations that are just 50-60km apart while exhibiting OPPOSITE century-long temperature trends. It means that you/we have no clue what is actually happening to climate-scale trends between stations that are 300-500 miles apart. It means that the field is undersampled. How badly? Opposite warming-cooling trends on a distance of 50km means that you need AT LEAST 25km REGULAR grid, which translates into about 800,000 stations world-wide.
So, your assertion that “it works out” to 2000 stations is unconditionally, blatantly wrong. Please get yourself familiar with Nyquist-Shannon-Kotelnikov-Whittaker sampling theorem, and study the effect called “aliasing” when sampling density does not conform to the sampling theorem.
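The 800,000 figure follows from the same surface-area argument with a 25-km regular grid (a rough sketch of the counting, using a spherical Earth):

```python
import math

EARTH_RADIUS_KM = 6371.0

def grid_cells(spacing_km):
    """Cells in a regular grid of the given spacing over Earth's surface."""
    surface_km2 = 4.0 * math.pi * EARTH_RADIUS_KM ** 2
    return surface_km2 / spacing_km ** 2

print(round(grid_cells(25)))  # roughly 816,000 cells for a 25 km grid
```

A 25-km grid works out to roughly 816,000 cells, consistent with the “about 800,000 stations” claim.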
Carrick – An anomaly of +10C at cold temperatures has a smaller effect on outgoing radiation than an anomaly of +10C at warm temperatures. This is due to the Stefan-Boltzman relationship that this outgoing emission is proportional to the 4th power of temperature.
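This T^4 dependence is easy to quantify by evaluating the Stefan-Boltzmann emission at both temperatures (a sketch, assuming unit emissivity; the 250 K and 300 K reference temperatures are illustrative choices):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def emission_change(temp_k, delta_t):
    """Change in blackbody emission for an anomaly delta_t at temp_k."""
    return SIGMA * (temp_k + delta_t) ** 4 - SIGMA * temp_k ** 4

cold = emission_change(250.0, 10.0)  # +10 C anomaly on a cold 250 K surface
warm = emission_change(300.0, 10.0)  # +10 C anomaly on a warm 300 K surface
print(round(cold, 1), round(warm, 1))
```

The same +10 C anomaly changes outgoing emission by roughly 38 W.m-2 at 250 K but about 64 W.m-2 at 300 K, i.e. the warm-region response is about 1.7 times larger, which is the point being made.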
With respect to the relationship between surface and lower tropospheric temperatures, and their divergence from one another, please see our paper
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841. http://pielkeclimatesci.wordpress.com/files/2009/11/r-345.pdf
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to: “An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841”, J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655.
http://pielkeclimatesci.wordpress.com/files/2010/03/r-345a.pdf
Alex, you and I aren’t going to see eye to eye on this, if for no other reason than you refuse to use an analytic approach to buttress your arguments, and you prefer to substitute ugly personal attacks for reasoned argument. Sorry but I just find that very boring.
My comments about the number of locations needed, based on a correlation length of 500-km and a (band-limited) sampling period of 1 month/sample, follow straight from the sampling theorem. So either there’s something wrong with the sampling theorem, or there’s something wrong with the statement that the correlation length for a 1-month band-limited signal is 500-km.
Those are the only two possibilities.
Your discussion about old VHS systems, etc. is so incoherent or OT that I have no idea what you are even trying to drive at, other than some petty desire to show me “totally wrong”.
Address the science, or expect to be ignored.
steven mosher says:
February 28, 2011 at 2:44 pm
Steven – I appreciate your work very much, but sorry, I respectfully disagree. Are you rejecting the correlation between SCL and temperature? Looks quite clear to me statistically and visually. It’s there in the CET too, clearer post-Laki. We can argue about the magnitude and causation, but unless someone finds a new dataset that correlation isn’t going away.
Steven Mosher said:
It would be a good thing if Pielke and others explained what they agreed about.
This is not about talking a poll and deciding the truth. It’s about seeing what agreement there is and what disagreement.
—————————————————————
It is perfectly reasonable to ask Dr Pielke what his views on a particular topic are, and decide whether or not you agree with them, and debate them. But, there are no ‘others’, at least in a real scientific discussion. Convenient as it would be for those who do not buy the IPCC view of the world to form a nice homogeneous bloc, it is not the case. Given the vast range of disciplines and often uncharted territory involved, there would be something very wrong if it was the case.
Roger Pielke, thanks for the reply.
First, to follow up on my previous comment: I was unintentionally a bit misleading in stating the difference as 10% (sorry, that was my age-degenerated recollection at work).
Using a robust estimator (minimum of absolute deviations) for the 1980-2010 period inclusive, I get 1.62°C/century for GISTEMP and 1.57°C/century for HADCRUT3vGL among the surface measurements, and 1.32°C/century for UAH and 1.67°C/century for RSS among the satellite measurements. I think a more accurate statement would be that the difference in trend could be as large as 25% (though RSS largely agrees with the surface temperature sets).
How much of the difference between the sets is due to boundary layer physics versus surface/elevated measurements is of real interest to me. I have seen your papers and have found them very interesting in this context (especially since I occasionally am involved in the collection of boundary layer data myself). I’ll sit down and give it another read tonight.
steven mosher says:
February 28, 2011 at 1:47 pm
“It would be a good thing, if Peilke and others explained what they agreed about.”
Mosh, how about this:
(1) It’s gotten warmer since the last mini ice age
(2) Dumping large amounts of CO2 and other by-products of fossil fuel combustion into the atmosphere is hazardous to human life and to the common welfare
(3) We should be aggressively pursuing alternative sources of cheap clean energy as a national priority
(4) Climate science has been corrupted by politics, and some prominent climate scientists are willing to falsify results and cherry pick data in order to achieve power, fame and career advancement.
(5) Prominent climate scientists have systematically overstated confidence in results and the degree of scientific consensus
(6) The climate science community is unwilling and unable to enforce a meaningful code of professional conduct.
(7) Global mean surface temperature is meaningless and attempts at measuring it are illusory.
(8) Modeling of the climate with any degree of predictive power (beyond a few weeks) is well beyond the current state of technology
(9) Current estimates of climate sensitivity are unsupported by direct evidence
(10) We have not been able to characterize natural variability in the climate and as such we are unable to determine the significance of the anecdotal warming
vukcevic says:
February 28, 2011 at 10:51 am
I agree, let’s hope it also includes:-
Chemical weathering of igneous rocks, in particular those containing plagioclase feldspars and the pedological processes that create mature regoliths which release calcium and magnesium cations into the hydrosphere.
The organic and inorganic sequestering of oxidised sulphur and carbon, occurring as water soluble anions, leading to an increase in the lithic reservoir of sulphate and carbonate rocks.
Adjustments in the areal extent of the hydrosphere to take account of tectonic and isostatic changes in relief and coastal processes of sedimentary erosion and accretion.
Changes in ocean basin morphology associated with continental drift, deep ocean crustal sinking, the growth of sedimentary fans, the development of submarine ridges by tectonic accretion, volcanic extrusion of submarine lavas and the impact of all of these on deep water mobility.
Changes in the organisation of deep marine currents in response to alterations in the rate of formation of polar cold dense bottom water that is intimately associated with the growth and destruction of continental icecaps. The competing formation of warm dense tropical bottom water associated with the growth and destruction of carbonate ramps and reefs in epeiric seas that are sensitive to gross sea level changes, particularly those sea level falls associated with the growth of said continental icecaps.
And so on, and so on…
One of the Apollo astronauts, on his return to earth, was asked what is the biggest difference between the earth and the moon? He said “on earth everything moves”.