Guest post by Dr. Roger Pielke Senior
Missing The Major Point Of “What Is Climate Sensitivity”
There is a post by Zeke on Blackboard titled Agreeing [See also the post on Climate Etc Agreeing(?)].
Zeke starts the post with the text
“My personal pet peeve in the climate debate is how much time is wasted on arguments that are largely spurious, while more substantive and interesting subjects receive short shrift.”
I agree with this view, but conclude that Zeke is missing a fundamental issue.
Zeke writes
“Climate sensitivity is somewhere between 1.5 C and 4.5 C for a doubling of carbon dioxide, due to feedbacks (primarily water vapor) in the climate system…”
The use of the terminology “climate sensitivity” attributes an importance to this temperature range that does not exist. The range of “1.5 C and 4.5 C for a doubling of carbon dioxide” refers to a global annual average surface temperature anomaly that is not even directly measurable, and whose interpretation is unclear, as we discussed in the paper Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.
This use of a surface temperature anomaly as “climate sensitivity” grossly misleads the public and policymakers as to which climate metrics actually matter to society and the environment. A global annual average surface temperature anomaly is almost irrelevant to any climatic feature of importance.
Even with respect to the subset of climate effects referred to as global warming, the appropriate climate metric is heat change, as measured in Joules. The global annual average surface temperature anomaly is useful only to the extent that it correlates with the global annual average climate system heat anomaly [most of which occurs within the upper oceans]. Such heating, if it occurs, is important as one component (the “steric component”) of sea level rise and fall.
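To see why heat in Joules and a surface temperature anomaly are such different quantities, consider a back-of-the-envelope conversion of a small warming of the upper 700 m of ocean into Joules. The numbers below (ocean area, seawater density and specific heat) are round illustrative values I have supplied, not measurements from the post:

```python
# Back-of-the-envelope: heat (Joules) implied by a small uniform warming
# of the upper 700 m of the ocean.  All constants are rough, illustrative
# assumptions, not observed values.
OCEAN_AREA_M2 = 3.6e14   # ~ global ocean surface area
LAYER_DEPTH_M = 700.0    # upper-ocean layer discussed in the post
RHO_SEAWATER = 1025.0    # kg/m^3, typical seawater density
CP_SEAWATER = 3990.0     # J/(kg K), typical seawater specific heat

def layer_heat_anomaly_joules(delta_t_kelvin):
    """Heat change (J) for a uniform temperature change of the layer."""
    mass_kg = OCEAN_AREA_M2 * LAYER_DEPTH_M * RHO_SEAWATER
    return mass_kg * CP_SEAWATER * delta_t_kelvin

# A mere 0.1 K warming of this layer corresponds to roughly 1e23 J.
print(layer_heat_anomaly_joules(0.1))
```

The point of the sketch is only that an enormous heat accumulation corresponds to a tiny layer-average temperature change, which is why the Joule accounting, not the anomaly, is the physically meaningful bookkeeping.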
For other societally and environmentally important climate effects, it is the regional atmospheric and ocean circulation patterns that matter. An accurate use of the terminology “climate sensitivity” would refer to the extent to which these circulation patterns are altered by human and natural climate forcings and feedbacks. As discussed in the excellent post on Judy Curry’s weblog, finding this sensitivity is a daunting challenge.
I have proposed definitions which could be used to advance the discussion of what we “agree on”, in my post
The Terms “Global Warming” And “Climate Change” – What Do They Mean?
As I wrote there
Global Warming is an increase in the heat (in Joules) contained within the climate system. The majority of this accumulation of heat occurs in the upper 700m of the oceans.
Global Cooling is a decrease in the heat (in Joules) contained within the climate system. The majority of this loss of heat occurs in the upper 700m of the oceans.
Global warming and cooling occur within each year as shown, for example, in Figure 4 in
Ellis et al. 1978: The annual variation in the global heat balance of the Earth. J. Geophys. Res., 83, 1958-1962.
Multi-decadal global warming or cooling involves a long-term imbalance between the global warming and cooling that occurs each year.
Climate Change involves any alteration in the climate system that persists for an (arbitrarily defined) long enough time period, as schematically illustrated in the figure below (from NRC, 2005).
Shorter-term climate change is referred to as climate variability. An example of a climate change is a 20-year average growing season of 100 days being reduced by 10 days over the following 20 years. Climate change includes changes in the statistics of weather (e.g. extreme events such as droughts, landfalling hurricanes, etc), but also includes changes in other climate system components (e.g. alterations in the pH of the oceans, changes in the spatial distribution of malaria-carrying mosquitoes, etc).
The recognition that climate involves much more than global warming and cooling is a very important issue. We can have climate change (as defined in this weblog post) without any long-term global warming or cooling. Such climate change can occur due to both natural and human causes.
It is within this framework of definitions that Zeke and Judy should solicit feedback in response to their recent posts. I recommend a definition of “climate sensitivity” as
Climate Sensitivity is the response of the statistics of weather (e.g. extreme events such as droughts, landfalling hurricanes, etc) and of other climate system components (e.g. alterations in the pH of the oceans, changes in the spatial distribution of malaria-carrying mosquitoes, etc) to a climate forcing (e.g. added CO2, land use change, solar output changes, etc). This more accurate definition of climate sensitivity is what should be discussed, rather than the dubious use of a global annual average surface temperature anomaly for this purpose.
Izen,
Now you’re capitalizing “robust.” Time for counseling.
Here’s the graph you wanted: click
Steven Mosher,
It is really amusing that the trends do not differ after your subset selection. My concern is that the original historical set is skewed (by definition) in the first place, and is not “random” with regard to surface topology. Therefore, your “random selection” from a non-random set is not really random relative to the original field.
Also, my other concern is with stations that have a 90+ year-long “negative warming” trend. You have identified about 500 such stations yourself, if I am not mistaken.
http://stevemosher.wordpress.com/2010/09/29/needs-chec/
The fact that your distribution of trends has a bell-shaped curve speaks to some sort of normal statistics, true. But your bell is asymmetrical. How do you know that, if you had a proper number of stations, the bell curve would not be centered at zero trend?
However, my deepest concern is that these downtrends have no coherent explanation under the CO2-induced radiative imbalance theory of AGW. The idea fails miserably when many side-by-side pairs of stations show nearly monotonic but distinctly opposite centennial trends. If one has to explain “negative warming” around a certain station with excuses such as land or water regime change, then the “positively warming” trend at the next closest neighbor must be subject to the same kinds of factors, which leaves man-made warming completely unsupported and out of the picture.
So, you say, “spatial correlation”. I say “baloney” – my examples show that there is no spatial correlation for climate trends in station data even at several miles.
steven mosher says:
March 1, 2011 at 2:43 pm
espen and al dont believe in an MWP or an LIA.
too few thermometers to establish an average during those times.
Please don’t use carrick’s misinterpretation of what I wrote to make silly psychic readings, will you?
Espen, you said:
What is this supposed to imply? I left my secret decoder ring in the lab. You brought it up, not me. If you didn’t mean it, or it wasn’t relevant, you shouldn’t have said it.
I also take it from your sarcastic response that you look at today’s ocean heat index forecast to decide what to wear to work today. 😉
sky:
Moderators, please delete the last line of my previous post – I intended to delete that question since too few understand what it means.
[done]
Steven Mosher:
No, and I’m not impressed if somebody shouts “Nyquist-Shannon-Kotelnikov-Whittaker-Schlongenknocker Theorem” either. 😉
I didn’t emphasize the greater correlation in the ocean data (again we’re talking about 1-month averaged temperatures, not hourly). But it was contained here.
It must be a queer accident that so many different ways of looking at the data are so self-consistent given that Al Tek thinks we need…
I remember the reconstruction; perhaps you could link it for the crowd and let them make their own determination? (This may be a case of “the plot thickened as the crowd thinned,” but we may have a few onlookers left who have an open mind and would find your work interesting.)
Carrick says:
March 1, 2011 at 11:34 pm
Espen, you said:
Even if we covered the earth with a grid of more than a billion temperature sensors
What is this supposed to imply?
You are unbelievably persistent! Don’t tell me you haven’t seen conditional sentences with purely hypothetical antecedents before.
The meaning is simply that the precision of the measurements is irrelevant – the real problem is that mean global temperature anomaly is a bad measure of global warming. Again: A +10 C anomaly in Arctic Canada in January represents only a fraction of the excess heat of a +10 C anomaly over an equivalent area in Amazonas.
Carrick says:
March 1, 2011 at 11:34 pm
I strongly disagree with your conclusions. Zero-lag spatial correlation is not the same as low-frequency coherence, whose much-too-frequent lack is what produces highly inconsistent multidecadal “trends” at neighboring stations. Aside from unrecognized offsets introduced into the record by station moves and instrumentation changes, the primary culprit is UHI and land-use changes, which can be equally strong in corrupting “rural” records. You seem not to recognize this practical problem, nor the analytically indisputable requirement of sampling at half the shortest wavelength of consequence in the spatial field in order to avoid aliasing. With cities occupying hundreds of square kilometers, that shortest wavelength is on the order of 10 km. There simply is no adequate, world-wide, century-long database for those of us interested in climatology rather than urbanology. And when it comes to secular trends, rather than the multidecadal swings with which they are often confused, satellite data are much too short in duration. Spatial correlation of records, which is strongly influenced by subdecadal variations, is not the determinant.
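The aliasing argument above can be made concrete with the standard frequency-folding formula: a spatial wavelength shorter than twice the station spacing shows up in the sampled field as a spurious longer wavelength. This is a minimal sketch (the 10 km “UHI wavelength” and the 8 km station spacing are illustrative numbers, not an analysis of any real network):

```python
def aliased_frequency(f_signal, f_sample):
    """Fold a signal frequency into the Nyquist band [0, f_sample/2].

    This is the classic aliasing formula: any spectral content above
    half the sampling rate reappears at this folded frequency.
    """
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A 10 km urban-scale wavelength -> 0.1 cycles/km.
f_uhi = 1.0 / 10.0
# Hypothetical stations every 8 km -> 0.125 cycles/km (Nyquist = 0.0625).
f_samp = 1.0 / 8.0

f_alias = aliased_frequency(f_uhi, f_samp)
print(1.0 / f_alias)  # apparent wavelength ~40 km
```

The undersampled 10 km pattern masquerades as a smooth ~40 km feature, which is the sense in which urban-scale corruption can leak into what looks like regional-scale signal.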
Nothing that NCDC has done in estimating the effects of world-wide spatial undersampling comes remotely close to grappling with these issues. Along with others, they take data from a largely urban or otherwise corrupted database as physically indicative of the globe. The vaunted trend “concordance” with satellite data is moot at best and is starting to show ever-increasing divergence, especially when considered on a regional level.
Carrick wrote:
“All this based on the cherry picking a few station locations, without providing any details how you did the analysis, what time periods the trends were calculated over or anything.
We won’t see eye to eye, because your beliefs are articles of faith, and data contrary to your beliefs simply gets dismissed instead of given the proper weight they deserve.”
I think you are expressing your own (and wrong) philosophy of research. You have expressed a belief that weather is 1/f noise, that it is spatially correlated including 100-year climate trends, and that your sensors must show a warming trend because you believe that man-made CO2 exerts radiative “pressure”. It is you who dismisses data that contradict your belief.
You call my examples “cherry picking”. Yes, I specifically selected these examples, because it takes only one example to dismiss your belief system. After a short look I found a dozen. Some practitioners of climate data mathturbation declare that my examples are “statistically insignificant”. Obviously they fail to realize that weather is not noise, and 100 years of consistently declining data at 500 stations are not a statistical aberration. Even with the relatively poor accuracy of station thermometers, there is no way to reverse the trends at these stations, although keepers of the data keep making continuous attempts to “re-analyze” and “correct” data sets to squeeze them into their belief.
You say, “without providing any details”. This is false. I provided links to several posts of mine, to avoid repetition. If you follow my links, you will find references to the exact GISS stations I am talking about, and can pull out the data charts. You are just trying to obfuscate the subject and hide behind your laziness. But since you value your time so much, I will re-post pointers to just one pair of stations, Pauls_Valley_4wsw and Ada, with some “mathturbation”. These stations are 55 km apart. The time span is 1907 to 2009.
Pauls_Valley_4wsw:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425746490030&data_set=1&num_neighbors=1
Trend equation: y = +0.0071x – 13.98, or warming 0.7C/100y
Ada:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425746490040&data_set=1&num_neighbors=1
Trend equation: y = -0.002x + 3.87, or cooling of 0.2C/100 years
It is interesting that if I partition the Ada data into three climatologically significant (30-year) time blocks, 1907-1936, 1943-1972, 1980-2009, and average the corresponding data, I get a nearly perfect regression line y = -0.0015x + 2.96 with R2 = 0.9985.
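For readers who want to check such numbers themselves, the centennial trends quoted above are ordinary least-squares fits of annual mean temperature against year. A minimal sketch follows; the series below is synthetic, built from the Pauls Valley trend equation purely to verify the arithmetic (real station values must be pulled from the GISS pages linked above):

```python
def ols_trend(years, temps):
    """Ordinary least-squares fit: temp ~= slope*year + intercept."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(temps) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, temps))
    sxx = sum((x - mean_x) ** 2 for x in years)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Synthetic check: a series built exactly from the Pauls Valley trend
# equation (y = 0.0071x - 13.98) should return that slope, i.e. a
# warming of +0.71 C per century.
years = list(range(1907, 2010))
temps = [0.0071 * x - 13.98 for x in years]
slope, intercept = ols_trend(years, temps)
print(round(slope * 100, 2))  # trend in C per century -> 0.71
```

The same fit applied to the actual downloaded Ada series would reproduce (or refute) the quoted cooling slope, which is the honest way to settle a disagreement over individual station trends.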
To summarize: the fact that many close-by pairs of stations show opposite centennial trends is totally inconsistent with the concept of an atmospheric radiative imbalance from a man-made “excess of CO2”. It also means that you are looking for your warming signal in the wrong metric.
These are my facts. What are yours? Those fuzzy clouds of seasonal correlations from the logically deficient Hansen-Lebedeff study? Do you know that Hansen averages everything in a 200x200 km box? That requires an assumption of uniform change (plus “noise”) in instrument readings, an assumption based on AGW belief. My facts show that this assumption is wrong. Does that carry any weight? I think the weight is devastating, but of course you would disagree and try to find some other goofy excuse to dismiss the facts.
Redefining Climate Sensitivity? Why not introduce a _new_ term encompassing climate changes due to overall temperature shifts, rather than attempting to redefine a term used in all of climate science?
This, in my personal opinion (yours may differ), is a clear attempt to move the goalposts (http://www.don-lindsay-archive.org/skeptic/arguments.html#goalposts); usually a sign that an argument on the original terms has been lost.
Pielke’s idea of using the ocean heat content as a metric is nothing new, and no revelation to the world’s climate scientists.
http://www.realclimate.org/index.php/archives/2005/05/planetary-energy-imbalance/
It is of course harder to make measurements of ocean heat content than to process the existing temperature data of the world’s weather stations.
Historically, the global average surface temperature and the ocean heat content have gone pretty much in the same direction.
http://i38.tinypic.com/zxjy14.png
http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.gif
The ocean heat content database is newer, with data only since 1955. It is also more problematic than the surface temperature database.
http://pielkeclimatesci.wordpress.com/2010/01/04/guest-weblog-by-leonard-ornstein-on-ocean-heat-content/
Bruce of Newcastle says:
February 28, 2011 at 2:34 pm
Apologies to Zeke but 2XCO2 has been empirically measured by Dr Spencer and others and found to be about 0.4-0.6 C.
You did not read your link carefully. Spencer did not actually claim to measure the long-term climate sensitivity. His measurements, which are disputed, cover only short-term phenomena.
http://www.drroyspencer.com/2010/08/our-jgr-paper-on-feedbacks-is-published/
Unfortunately, there is no way I have found to demonstrate that this strongly negative feedback is actually occurring on the long time scales involved in anthropogenic global warming.
I’ve cross-checked this a couple of ways, one using SSTs and another by difference after solar effects are controlled for, and a value around 0.6 C seems to be the number. The feedback must therefore be negative, not positive. None of this is modelling; it is a straight analysis of recorded and easily available data.
If you have really cross-checked this using data, you are smarter than Roy Spencer. Maybe you can get a paper published.
It may be that the problem is climate scientists seeming to ignore the effects of the sun, even though these are easy to see if you plot HadCRUT vs. SCL (or the like) yourself.
Climate scientists don’t ignore the effect of the sun. Its irradiance peaked in 1950.
eadler says:
March 3, 2011 at 7:04 am
“Historically, the global average surface temperature and the ocean heat content have gone pretty much in the same direction.”
One of the things that makes climate science highly tenuous is the manufacture of time series from scraps of data obtained by different instruments over short time intervals at ever-shifting locations sparsely scattered around the globe. What makes it disreputable is the presentation of such data sausages as the “observed” global time history of the physical variable, rather than as very crude and highly incomplete estimates whose uncertainty can exceed the range of variability.
This is very much the case with NODC’s OHC series going back to the 1950s that you point to. There is not a single location in the world where a research vessel or buoy has kept station for all those years, making at least four bathythermographic measurements a day. I doubt that the renowned oceanographic institutions at Woods Hole and Southampton obtained such comprehensive coverage even in their own back yards. I know that Scripps didn’t. And there are vast stretches of the Pacific and the Southern Ocean for which you will find no BT data whatsoever until the advent of the Argo program. Far more so than with historical SST data, the geographic coverage simply isn’t there.
It is only the unwary and the inexperienced who can buy into claims of reliable knowledge of climate variability in the absence of adequate bona fide measurements.