Pielke Sr. on Zeke's zinger

Guest post by Dr. Roger Pielke Senior

Missing The Major Point Of “What Is Climate Sensitivity”

There is a post by Zeke at the Blackboard titled Agreeing [see also the post on Climate Etc., Agreeing(?)].

Zeke starts the post with the text

“My personal pet peeve in the climate debate is how much time is wasted on arguments that are largely spurious, while more substantive and interesting subjects receive short shrift.”

I agree with this view, but conclude that Zeke is missing a fundamental issue.

Zeke writes

“Climate sensitivity is somewhere between 1.5 C and 4.5 C for a doubling of carbon dioxide, due to feedbacks (primarily water vapor) in the climate system…”

The use of the terminology “climate sensitivity” assigns to this temperature range an importance within the climate system that does not exist. The range of temperatures of “1.5 C and 4.5 C for a doubling of carbon dioxide” refers to a global annual average surface temperature anomaly that is not even directly measurable, and even its interpretation is unclear, as we discussed in the paper
Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.

This view of a surface temperature anomaly expressed by “climate sensitivity” grossly misleads the public and policymakers as to which climate metrics actually matter to society and the environment. A global annual average surface temperature anomaly is almost irrelevant to any climatic feature of importance.

Even with respect to the subset of climate effects that is referred to as global warming, the appropriate climate metric is heat change as measured in Joules (e.g. see). The global annual average surface temperature anomaly is only useful to the extent that it correlates with the global annual average climate system heat anomaly [most of which occurs within the upper oceans]. Such heating, if it occurs, is important because it is one component (the “steric component”) of sea level rise and fall.
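To put units on that point, here is a back-of-envelope sketch (not from the original post; the sea-water properties, ocean area, layer depth, expansion coefficient, and the 0.1 K warming are all assumed, round illustrative values) converting a hypothetical warming of the upper 700 m of ocean into Joules and into its steric sea level contribution:

```python
# Back-of-envelope: heat gained by the upper 700 m of ocean for an assumed
# average warming, and the corresponding steric (thermal expansion) sea level change.
RHO = 1025.0     # sea water density, kg/m^3 (assumed typical value)
CP = 3990.0      # specific heat of sea water, J/(kg K) (assumed typical value)
AREA = 3.6e14    # global ocean surface area, m^2 (rounded)
DEPTH = 700.0    # layer depth, m
ALPHA = 2.0e-4   # thermal expansion coefficient, 1/K (assumed; varies with temperature and depth)

dT = 0.1         # hypothetical layer-average warming, K

heat_joules = RHO * CP * AREA * DEPTH * dT        # heat anomaly of the layer, J
steric_rise_mm = ALPHA * DEPTH * dT * 1000.0      # thermal expansion of the layer, mm

print(f"heat anomaly : {heat_joules:.1e} J")      # ~1e23 J
print(f"steric rise  : {steric_rise_mm:.0f} mm")  # ~14 mm
```

With these assumed numbers, a 0.1 K warming of the 0-700 m layer corresponds to roughly 1e23 Joules and a little over a centimetre of thermal expansion, which is the kind of Joule-based bookkeeping being argued for here.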

For other societally and environmentally important climate effects, it is the regional atmospheric and ocean circulation patterns that matter. An accurate use of the terminology “climate sensitivity” would refer to the extent to which these circulation patterns are altered by human and natural climate forcings and feedbacks. As discussed in the excellent post on Judy Curry’s weblog

Spatio-temporal chaos

finding this sensitivity is a daunting challenge.

I have proposed definitions that could be used to advance the discussion of what we “agree on” in my post

The Terms “Global Warming” And “Climate Change” – What Do They Mean?

As I wrote there:

“Global Warming is an increase in the heat (in Joules) contained within the climate system. The majority of this accumulation of heat occurs in the upper 700m of the oceans.

Global Cooling is a decrease in the heat (in Joules) contained within the climate system. The majority of this loss of heat occurs in the upper 700m of the oceans.

Global warming and cooling occur within each year as shown, for example, in Figure 4 in

Ellis et al. 1978: The annual variation in the global heat balance of the Earth. J. Geophys. Res., 83, 1958-1962.

Multi-decadal global warming or cooling involves a long-term imbalance between the global warming and cooling that occurs each year.

Climate Change involves any alteration in the climate system, which is schematically illustrated in the figure below (from NRC, 2005)

which persists for an (arbitrarily defined) long enough time period.

Shorter-term climate change is referred to as climate variability. An example of a climate change is if a 20-year average growing season of 100 days were reduced by 10 days in the following 20 years. Climate change includes changes in the statistics of weather (e.g. extreme events such as droughts, land-falling hurricanes, etc), but also includes changes in other climate system components (e.g. alterations in the pH of the oceans, changes in the spatial distribution of malaria-carrying mosquitos, etc).

The recognition that climate involves much more than global warming and cooling is a very important issue. We can have climate change (as defined in this weblog post) without any long-term global warming or cooling. Such climate change can occur due to both natural and human causes.”

It is within this framework of definitions that Zeke and Judy should solicit feedback in response to their recent posts.  I recommend a definition of “climate sensitivity” as

Climate Sensitivity is the response of the statistics of weather (e.g. extreme events such as droughts, land-falling hurricanes, etc) and of other climate system components (e.g. alterations in the pH of the oceans, changes in the spatial distribution of malaria-carrying mosquitos, etc) to a climate forcing (e.g. added CO2, land use change, solar output changes, etc). This more accurate definition of climate sensitivity is what should be discussed, rather than the dubious use of a global annual average surface temperature anomaly for this purpose.

113 Comments
Espen
March 1, 2011 1:09 am

Carrick writes: Sorry but this description of how global mean temperature is calculated is wrong, and I’m pretty sure Roger would not endorse this explanation.
You completely missed my point. I see that Roger Pielke has elaborated my point already, but let me add: Mean surface temperature can be a highly misleading measure of the heat content of the near-surface atmosphere, since the enthalpy change when cold air is heated 10 C is much smaller than the enthalpy change when warm air is heated 10 C. Suppose, for instance, that 1 million square kilometers of Arctic Canada has a temperature anomaly of +10 C in January. Suppose that at the same time, the rest of the world has an anomaly of 0 C, except for 1 million square kilometers of Amazonas, which has an anomaly of -10 C, so the two anomalous areas cancel each other. But in reality, the total heat content of the near-surface atmosphere has dropped significantly compared to a situation with 0C anomaly everywhere.
But even if we had a good measure in Joules of the heat anomaly of the atmosphere, it wouldn’t really be a reliable measure of warming: As Dr. Pielke writes above: Global Warming is an increase in the heat (in Joules) contained within the climate system. The majority of this accumulation of heat occurs in the upper 700m of the oceans.
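To put rough numbers on Espen’s point, here is a small sketch (not part of the comment; it assumes a fixed 80% relative humidity, standard surface pressure, the Magnus approximation for saturation vapour pressure, and the usual h ~ cp*T + Lv*q approximation for moist enthalpy) comparing the same +10 C warming applied to cold, dry air and to warm, humid air:

```python
import numpy as np

CP = 1005.0      # specific heat of dry air, J/(kg K)
LV = 2.5e6       # latent heat of vaporisation, J/kg
P = 1013.25      # surface pressure, hPa (assumed)

def sat_vapour_pressure(t_c):
    """Saturation vapour pressure in hPa (Magnus approximation), t_c in deg C."""
    return 6.112 * np.exp(17.67 * t_c / (t_c + 243.5))

def moist_enthalpy(t_c, rh):
    """Approximate moist enthalpy per kg of air, J/kg: h ~ cp*T + Lv*q."""
    e = rh * sat_vapour_pressure(t_c)       # vapour pressure, hPa
    q = 0.622 * e / (P - 0.378 * e)         # specific humidity, kg/kg
    return CP * t_c + LV * q

# The same +10 C warming, at a fixed 80% relative humidity:
cold = moist_enthalpy(-20.0, 0.8) - moist_enthalpy(-30.0, 0.8)   # Arctic winter air
warm = moist_enthalpy(35.0, 0.8) - moist_enthalpy(25.0, 0.8)     # warm tropical air

print(f"cold, dry air +10 C  : {cold / 1000:.1f} kJ/kg")   # ~11 kJ/kg
print(f"warm, humid air +10 C: {warm / 1000:.1f} kJ/kg")   # ~41 kJ/kg
```

With these assumptions the warm, humid case involves roughly four times the enthalpy change of the cold, dry case, which is why equal and opposite temperature anomalies need not cancel in heat content.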

John Marshall
March 1, 2011 1:47 am

It also depends on the global systems being in equilibrium with regard to temperature. It never is!
There is also the anomaly problem. To state that there is an anomaly one has to know that there is a temperature difference from what is ‘normal’. We do not know what is normal and on a planet with a chaotic climate system we never will know.

izen
March 1, 2011 2:47 am

@-Espen says:
March 1, 2011 at 1:09 am
“But even if we had a good measure in Joules of the heat anomaly of the atmosphere, it wouldn’t really be a reliable measure of warming: As Dr. Pielke writes above: Global Warming is an increase in the heat (in Joules) contained within the climate system. The majority of this accumulation of heat occurs in the upper 700m of the oceans.”
Which is why the fevered arguments about atmospheric/surface temperature trends and variability, whether for the present or from paleoclimate proxy records, are all a bit of a red herring.
Measuring the Joule content, and its change, in the top 700m of the oceans is difficult, but there IS a good proxy indicator of it.
Sea level rise is at least in part due to thermal expansion. Subtract the contribution from the melting land ice and you have a good indicator of the rising energy content of the oceans for the TOTAL volume, not just the top 700m, which is less than half the total.
The melting of land ice is also a result of extra Joules in the climate system, so the sea level rise from that does give some measure of increased energy content as well.

Smokey
March 1, 2011 4:18 am

Izen says:
“The graph you link to shows the sea level rise at the end of the last ice age… and shows that it has altered little in the last ~6000 years as I stated.”
“Altered little” was not what Izen stated. The goal posts have been moved again. Izen’s assertion was that the sea level is in “stasis.” [stasis, n: a period or state of inactivity or equilibrium.]
If it were not for moving the goal posts, the alarmist crowd would have nothing much to say.
Still having fun with your CAGW scam, Izen?

steveta_uk
March 1, 2011 4:49 am

Mark T, density is an intensive variable. By your argument, it is impossible to define the density of a composite such as concrete. Don’t see it myself.

Vince Causey
March 1, 2011 6:25 am

Izen says:
“The graph you link to shows the sea level rise at the end of the last ice age… and shows that it has altered little in the last ~6000 years as I stated.”
Are you sure about that? I believe the rate of SLR is about 2 to 3 mm pa – say 3mm. That’s 3m per millennium or 30m in the last 10,000 years. But I am sure sea levels have risen more than that. For example, the North Sea, with its mean depth of 90 metres, was completely dry during the last ice age. Doesn’t that mean that sea levels have historically risen by more than 3mm pa in the past?
Just wondering.

Carrick
March 1, 2011 7:50 am

MarkT

Wow, did you really miss Espen’s point THAT badly? It has nothing to do with sampling theory and he stated it pretty plainly. Temperature is an INTENSIVE VARIABLE and cannot be averaged as a normal variable, e.g., length. In other words, it doesn’t matter if you have a billion sensors or only two, the arithmetic mean is a meaningless number. Basic thermo 101.

Temperature measures a variable that affects humans directly; ocean temperatures do not. A 40°C day is going to affect you differently than a 20°C day regardless of what the ocean 4000 km away did. You guys don’t even know why we are interested in temperature? Talk about silly! It’s not “just thermodynamics”.
Espen claims that we need a billion thermometers, which is a risible argument, and that is beside the point???
Good grief, sorry I forgot my mind reading cap or my secret decoder ring so I would know to skip over Espen’s zingers.
Espen, see also my comment to MarkT, I think this serves to answer you both. Yes, ocean heat content is interesting. Though if we’re going on about Roger’s issues with “indirect measurements”, there are few climate metrics I can think of that are more indirect or harder to measure than ocean heat content.
Keep the arguments consistent, and when you check today’s ocean heat index to decide what to wear that day, or use it to figure out what will grow in your area, then you will have made a point (but that would only show you need immediate psychiatric evaluation).
Personally if I wanted to pick another metric besides temperature to track, it would be TOA radiation balance. But guess which is still most important for measuring impact on people living in the surface atmospheric layer? (The whales no doubt appreciate your concern.)

Carrick
March 1, 2011 7:57 am

MarkT:

No, temperatures of gases that do not have an identical composition cannot be “averaged,” period. That’s the only problem.

This statement exposes just how much thermo 101 you don’t understand. Gas mixtures are not a problem.

Carrick
March 1, 2011 8:16 am

Al Tek:

One more time: no matter how you massage insufficient data, you cannot get more information out of this data set than is already there. Homogenize it, grid it, select subsets, it does not matter, you will not have any new information. That’s why all your “analyses” show about the same result, which is garbage predefined by the under-sampled, wrongly-spaced initial station set. The only way to increase information from the field is to increase the sampling density of the data acquisition system. When you do this and get a similar result to your current mathturbations, then you might be up to something.

Still up to the usual personal attacks I see. I take that as a substitute for ability to reason.
But again, as I predicted, there’s no way we’ll ever see eye to eye on this.
You are making a series of statements of faith. If they were statements of fact they could be established by analysis of the data.
Analysis of the data set shows up many interesting things.
For one, comparing the different sets obtained either by surface measurement or by satellite shows a striking similarity in patterns.
Secondly, if one looks at the latitudinal variation in measured temperature over time, it shows a distinct pattern of Arctic amplification.
Third, the measured data show a striking correlation with distance. Using 500 km for the correlation distance as input to the sampling theorem is probably overly conservative, rather than not nearly conservative enough.
If you knew half as much as you claim to know, you would understand what the consequences of undersampling the data would be, and what effect that would have on the measurement set. One of the easiest ways to look for undersampling is to look at the frequency domain. For data that follow an approximate 1/f power law, aliasing will show up as a high-frequency plateau.
GISTEMP PSD. If there are aliasing issues contaminating the time-series, it is only for periods of less than a year. (Note: I don’t think GISTEMP is reliable for periods of less than a year for other reasons, so this is no shock to me.)
Mind you I’m not claiming the current data set is perfect, and am even cheered by the Berkeley project, because I too think it’s needed. But your criticisms fall far from the mark.
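For anyone who wants to try the aliasing check Carrick describes, here is a small synthetic-data sketch (not Carrick’s code; the spectral slope, record length, and decimation factor are arbitrary illustrative choices). It shapes white noise to an approximate 1/f spectrum, decimates it with no anti-alias filter, and compares the high-frequency spectral slopes:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
n = 2**16

# Shape white noise to an approximate 1/f ("pink") power spectrum.
spec = np.fft.rfft(rng.normal(size=n))
freqs = np.fft.rfftfreq(n, d=1.0)
spec[1:] /= np.sqrt(freqs[1:])          # amplitude ~ f^-1/2  =>  power ~ 1/f
pink = np.fft.irfft(spec, n)

decim = 32
aliased = pink[::decim]                 # naive decimation: no anti-alias filtering

f_full, p_full = welch(pink, fs=1.0, nperseg=4096)
f_dec, p_dec = welch(aliased, fs=1.0 / decim, nperseg=1024)

def high_freq_slope(f, p):
    """Log-log spectral slope over the top decade of frequencies."""
    m = f >= f[-1] / 10.0
    return np.polyfit(np.log(f[m]), np.log(p[m]), 1)[0]

# The well-sampled series stays near a -1 slope; the decimated series flattens
# toward a plateau near its (much lower) Nyquist frequency, because the power
# above that frequency folds back into the resolved band.
print("slope, well sampled:", round(high_freq_slope(f_full, p_full), 2))
print("slope, decimated   :", round(high_freq_slope(f_dec, p_dec), 2))
```

Whether such a signature is visible in the real station data is, of course, exactly what is being argued about here; the sketch only shows what the effect looks like in a controlled case.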

Spen
March 1, 2011 8:46 am

I read the link above to Judith Curry’s discussion on spatio-temporal chaos. It is certainly challenging and reminds me of Prime Minister Palmerston’s description of the intractable Schleswig-Holstein question in the 19th century. He said only two people apart from himself had understood it. One was dead and the other was in an asylum.
So, without implying that I am in an asylum or within the 1% who understand the theory, I would ask this question. Global climate history over geological time has indeed varied, but always within a relatively small range. This evidence points to the existence of boundary conditions, the principle of which surely would be in conflict with this chaos theory.

D. J. Hawkins
March 1, 2011 10:22 am

steveta_uk says:
March 1, 2011 at 4:49 am
Mark T, density is an intensive variable. By your argument, it is impossible to define the density of a composite such as concrete. Don’t see it myself.

Not true. Density is expressed as mass per unit volume. Temperature is not “per unit” anything. The plasma in Princeton University’s fusion reactor reached millions of degrees. If you stuck your hand in it you wouldn’t even notice (ignoring the very low pressure conditions leading to “vac bite”), because the plasma density is very low. The total heat content is about that of a hot cup of coffee.
Now, you might make a case that the conditions on the face of the planet are such that there is not a lot (for engineers “a lot” usually means an order of magnitude or so) of difference from place to place so that very broadly air anywhere at 70F is the same as air anywhere else at 70F and in that sense averaging temperature might mean something to a rough order of magnitude (ROM). However, the devil is in the details and if you’re looking for sub 1F accuracy in your trends and analysis then air anywhere is not the same as anywhere else and you must look at enthalpy instead. And if you think the temperature record is a mess, you don’t want to contemplate the humidity record.

Bruce of Newcastle
March 1, 2011 10:56 am

steven mosher says:
February 28, 2011 at 2:52 pm
Here’s a nice little presentation that will give you some ideas.. using Ocean heat content: http://www.newton.ac.uk/programmes/CLP/seminars/120812001.html
—————————————————————-
steven mosher says:
February 28, 2011 at 11:21 pm
Bruce,
What I’m saying is that you cannot derive a sensitivity from a time series as you have attempted to.
—————————————————————-
Steven,
Thanks for replying. I’ll leave off the question of deriving 2XCO2 from time series (even though it is a reasonable question), because your link to Magne Aldrin’s presentation was bugging me.
After some thought, I think Dr Aldrin is statistically overestimating CO2 sensitivity. This is because (see slide 13, p 14 of the PDF) he seems to be underestimating solar sensitivity.
I say this because the correlation of solar cycle length to temperature suggests solar sensitivity is quite high. If, as seems reasonable, you can postulate that this correlation is reflective of total solar sensitivity (since something to do with the Sun’s behaviour is causing the correlation), then SCL encompasses what Mr Rumsfeld might call the ‘known knowns’ of solar sensitivity plus the ‘unknown unknowns’ of solar sensitivity.
What I suspect is happening is Dr Aldrin is using a measure of solar forcing as an input to the model which reflects only the ‘known knowns’. Because of this the solar ‘unknown unknowns’ would contaminate the calculated CO2 sensitivity and make it too large. To misquote, saying CO2 is “the only thing we can think of” means a GCM approach will tend to estimate CO2 sensitivity on the high side if other sensitivities are underestimated or discounted. I can’t be certain of this without looking at how the code handles the solar radiative forcing input data, but it looks likely to me that it is underweight.
If you graph SCL vs temperature this shows a significant empirical correlation. I suspect it comprises the direct component (known known), as Dr Aldrin has in slide 13, and an unidentified component(s), possibly a positive feedback, that people such as Drs Svensmark and Rao have been linking to cosmic rays, UV etc. I don’t know enough to tell whether they are right, but this is about lifting the lid on the ‘unknown unknowns’.
This is the intrinsic problem with the modelling approach – if you inadvertently leave out a significant variable, the parameter you are trying to estimate will tend to be over-estimated. I know this from multiple regression – if you leave a statistically significant variable out, your parameter vector will be too large, since the regression is trying to minimise the residuals, particularly with noisy data. You learn to be wary of this. In this case both pCO2 and solar rose in the time series, especially in the last 50 years or so, so it is not surprising if some solar-derived variance is misassigned to CO2. Dr Aldrin’s approach is effectively a form of multiple regression.
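Bruce’s omitted-variable argument is easy to reproduce with a toy regression. The sketch below is not Dr Aldrin’s model; it simply simulates two correlated, upward-trending “forcings” that both drive a noisy response, then refits the response with one of them left out, to show the remaining coefficient absorbing the missing variable’s share:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150                                    # years of synthetic data
t = np.arange(n)

co2 = 0.010 * t + rng.normal(0, 0.1, n)    # both "forcings" trend upward over the record,
solar = 0.008 * t + rng.normal(0, 0.1, n)  # so they are strongly correlated with each other
temp = 0.5 * co2 + 0.5 * solar + rng.normal(0, 0.3, n)   # truth: both contribute equally

# Fit with both regressors: recovers something close to the true coefficients.
X_full = np.column_stack([co2, solar, np.ones(n)])
b_full, *_ = np.linalg.lstsq(X_full, temp, rcond=None)

# Fit with the "solar" term omitted: the CO2 coefficient absorbs the solar share,
# because the regression assigns the solar-driven variance to the only trending regressor left.
X_reduced = np.column_stack([co2, np.ones(n)])
b_reduced, *_ = np.linalg.lstsq(X_reduced, temp, rcond=None)

print("true CO2 coefficient    : 0.50")
print(f"full model estimate     : {b_full[0]:.2f}")
print(f"solar omitted, estimate : {b_reduced[0]:.2f}")   # biased high
```

Whether this is what actually happens in Dr Aldrin’s analysis depends on how the solar forcing input is constructed, which is the open question Bruce raises.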

steven mosher
March 1, 2011 11:19 am

carrick, you mean if I shout Nyquist you’re not impressed?
Don’t forget the resampling experiment I did where I randomly selected 1 station per grid cell and got the same answer. The other thing Al forgets is that SST, which covers 70% of the globe, is much less variable spatially. Gosh, why would that be?

March 1, 2011 12:03 pm

Carrick: “Third the measured data show a striking correlation with distance. “
Third time: there is nothing striking at all. Look at the details of the Hansen-Lebedeff algorithm (section 3):
http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf
Then look at any pair of temperature data. Inter-annual variation in any temperature series vastly exceeds long climatic trends, always; I hope you will not argue with this fact. Therefore, the correlation coefficient between a pair of time series is dominated by high-amplitude inter-annual variations; that’s why their correlation is “strikingly good”.
However, this has no bearing on the question at hand, which is about climate (30-100 year) trends. As I submitted, there are many met stations (Crosbyton vs. Lubbock, Ada vs. Pauls Valley, etc.), 30-50 miles apart. I am absolutely positive that their inter-annual variations would correlate to near 100%. It is obvious: they see nearly the same weather, the same seasons, and the same skies. However, their 90-year trends are OPPOSITE. This is a documented hard fact, see GISS.
Now, you invoke the satellite argument. Satellites measure something that is fairly different from station temperatures. The information comes from a vast layer several km thick, so it does not have the necessary resolution. It is an integral parameter. Yes, it resembles the ground signal to some degree. However, despite all efforts to fudge the output to account for all corrections, there is still a difference of 0.4-0.6C between sats and grounds:
http://www.woodfortrees.org/plot/gistemp/from:2006/plot/hadcrut3vgl/from:2006/offset/plot/uah/from:2006/offset
This discrepancy amounts to the entire magnitude of “global warming”. [Note: in your example you have introduced various “offsets” to your time series, up to 0.21C. Why would you do so, to skew results by 30% of the entire “global warming” effect?]
More, even if satellites “measure” the same averages and do it correctly, or if there could be a sufficient number of ground stations, it still does not mean that the globe has a radiative imbalance due to the man-made CO2 increase, as AGW theory asserts. As I mentioned, it can be shown that globally-averaged temperatures can go up or down while there is no global radiative imbalance at all.
As I see, you prefer to ignore these inconvenient facts. More, you continue to express nonsense – Berkeley project or not, the information from the current scarce set of stations cannot be fundamentally improved, because the density of the sampling grid is insufficient to represent the complex fractal-style topology of the temperature field, and its crude boundary layer variability. But it is apparent that it is useless to argue this kind of detail with you.
P.S. I’d like to express many thanks to Tamino who re-introduced the term “mathturbation”, which perfectly describes various mangling with temperature data sets including his own meaningless statistical efforts.
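Al’s claim that inter-annual variability can dominate the correlation while the long-term trends disagree is simple to illustrate with synthetic data. The sketch below (not Al’s; the 90-year length, noise level, and trend sizes are arbitrary) builds two series that share the same year-to-year “weather” but carry opposite trends by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(90)

# Common year-to-year "weather" seen by both stations, detrended so that
# each station's long-term trend is fixed purely by construction.
shared = rng.normal(0.0, 0.8, size=years.size)
shared -= np.polyval(np.polyfit(years, shared, 1), years)

station_a = shared + 0.005 * years      # exactly +0.5 C per century
station_b = shared - 0.005 * years      # exactly -0.5 C per century

r = np.corrcoef(station_a, station_b)[0, 1]
trend_a = np.polyfit(years, station_a, 1)[0] * 100.0
trend_b = np.polyfit(years, station_b, 1)[0] * 100.0

print(f"correlation      : {r:.2f}")                 # close to 1
print(f"trend, station a : {trend_a:+.2f} C/century")
print(f"trend, station b : {trend_b:+.2f} C/century")
```

A correlation near 1 alongside opposite century-scale trends is the situation Al describes for nearby stations; whether the shared variability nonetheless constrains area-average trends is the point Carrick and Mosher dispute.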

March 1, 2011 12:09 pm

Mosher: “Don’t forget the resampling experiment I did where I randomly selected 1 station per grid cell and got the same answer.”
Apparently you didn’t grasp the correct meaning of the concept of resampling. You cannot have samples that you don’t have. You need to INCREASE the number of stations, not decrease the already crippled set. And why do you think that your initial grid is random in the first place?

March 1, 2011 12:35 pm

Carrick: “One of the easiest ways to look for undersampling is to look at the frequency domain. For data that follow an approximate 1/f power law, aliasing will show up as a high-frequency plateau.”
No, this would happen only if you undersample just a tiny bit, less than a factor of two. Ground station data suggest, however, that the land surface field is undersampled by a factor of 100. Try to find some decent sampling-aliasing java applet and educate yourself. The average of your restored signal could be anything from -1 to +1.
Actually, we have been here before, at the Blackboard “physicists” thread, half a year ago.
http://rankexploits.com/musings/2010/physicists/
I don’t see any progress.

izen
March 1, 2011 12:53 pm

@-Spen says:
March 1, 2011 at 8:46 am
“I read the link above to Judith Curry’s discussion on spatio-temporal chaos. …. Global climate history over geological time has indeed varied but always within a relatively small range. This evidence points to the existence of boundary conditions, the principle of which surely would be in conflict with this chaos theory.”
No, chaos theory distinguishes the inherently unpredictable nature of any specific value at a specific time/place, which follows from sensitivity to initial conditions, from the bounded envelope of the system’s behavior.
It also defines that envelope, if the curves of the non-linear functions that generate the chaos are definable.
If you play around with chaotic formulae at all you soon find that the numbers produced are unpredictable, but the range or envelope of behavior is constrained by the driving variables. A result may be anywhere on the manifold of a strange attractor, but the shape of the manifold is precisely defined.
(That’s badly expressed; hopefully someone with better math chops than I have in this subject can clarify it!)
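izen’s picture of trajectories that are individually unpredictable while the attractor’s envelope stays fixed can be seen in the classic Lorenz system. The sketch below (an illustration, not from the comment; it uses the standard Lorenz parameters and an arbitrary 1e-6 perturbation) integrates two nearly identical initial states and compares both their final states and the overall ranges they visit:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 50.0, 20000)
a = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9).y
b = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0 + 1e-6], t_eval=t_eval, rtol=1e-9).y

# Point-by-point prediction fails: the 1e-6 perturbation grows to the size of the attractor...
print("final-state difference:", round(float(np.abs(a[:, -1] - b[:, -1]).max()), 2))

# ...but the envelope of behaviour (the attractor) is essentially the same for both runs.
for name, run in [("run a", a), ("run b", b)]:
    print(name,
          "x range", round(float(run[0].min()), 1), "to", round(float(run[0].max()), 1),
          "z range", round(float(run[2].min()), 1), "to", round(float(run[2].max()), 1))
```

The two runs end up far apart point by point, yet both stay within essentially the same bounded region, which is the sense in which chaotic behaviour can still respect boundary conditions (Spen’s question above).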

izen
March 1, 2011 12:59 pm

@-Vince Causey says:
March 1, 2011 at 6:25 am
” I believe the rate of SLR is about 2 to 3 mm pa – say 3mm. That’s 3m per millenium or 30m in the last 10,000 years. But I am sure sea levels have risen more than that. For example, the North Sea with its mean depth of 90 metres, was completely dry during the last ice age. Doesn’t that mean that sea levels have historically risen more than 3mm pa in the past?
Just wondering.”
Yes, during meltwater pulse 1A, as shown on the graph that ‘SMOKEY’ so helpfully provided, there was a rise of about 20m per 1000 years, or around ten times the present rate. But that was during the height of the glacial-interglacial transition, when the major northern ice-caps were melting. During the glacial transition there was about 120m of sea level rise over ~8000 years.
Once all those except Greenland and Antarctica had gone, the sea level stabilized around 6000 years ago and has certainly not been rising for the last 6000 years at the rate of a foot per century. There are ROBUST archaeological, geological and direct observational records that constrain any rise over the last 6000 years to much less than the present observed rate of ~3mm per annum.
@-Smokey says:
March 1, 2011 at 4:18 am
“Still having fun with your CAGW scam, Izen?”
If the ‘C’ in CAGW stands for catastrophic then, as I think I have stated before, I am agnostic about whether the AGW will be catastrophic, as that has more to do with how ROBUST a society is in the face of change than with the magnitude of the change.
We may agree – at least to some extent – on how much of a ‘scam’ it is to claim that AGW will be catastrophic. I prefer the term ‘political froth’ to scam for such claims for or against the impact of AGW on modern technological societies. Clearly, if we were all still hunter-gatherers, as the human population was during the last Eemian interglacial period, any global warming (or the eventual cooling) was not catastrophic, because such societies (small bands without agriculture) are much more robust in the face of changing climate.
I wonder if you could post the link again to the Post-Glacial Sea Level Rise graph; it does make the point rather well….
Sea level may be a better indicator of the global heat content of the climate than any manipulation of surface temperature data.

Espen
March 1, 2011 1:48 pm

Carrick says:
Espen claims that we need a billion thermometers
My oh my, I never said such a thing. If I say “Even if the moon were indeed made of Wensleydale cheese, it would be too old to taste good”, do you think I claim that “we need a moon made of Wensleydale cheese”?
but that would only show you need immediate psychiatric evaluation
Wow, now I really have to bow before your intellectual superiority.

steven mosher
March 1, 2011 2:30 pm

Bruce:
http://www.ecd.bnl.gov/steve/pubs.html#pres
Start with this slide set from Schwartz. You’ll see a variety of approaches.

March 1, 2011 2:40 pm

Al,
what you fail to realize is that you could drive the sample size down to 60 stations or even fewer, randomly selected from over 40K stations on the land, and still get the same answer. To object to that, you have to believe the unsampled areas must somehow be different. On what basis? Well, we can look at the whole world over 30 years.
Find any pockets or eddies where a geographically substantial portion of land exhibits statistically different trends? Nope. No “standing waves” of zero trend or negative trend.
Further, you can sample the whole world (UAH or RSS) and see that it doesn’t differ (in trend, which is what we care about) from a sparsely sampled earth. The reason is simple: spatial correlation. Also, the Hansen study is very much out of date and there are more recent studies that use daily data from many more stations to establish some slightly different correlation figures.
Do you believe in an LIA?
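Mosher’s subsampling claim rests on the stations sharing a common, spatially correlated signal. Here is a toy version of that experiment (not Mosher’s actual resampling; the station count, noise levels, and 2 C/century trend are arbitrary, and spatial correlation is crudely represented by a single shared signal): the global trend is estimated from all synthetic stations and from 60 randomly chosen ones:

```python
import numpy as np

rng = np.random.default_rng(42)
n_stations, n_years = 5000, 30
years = np.arange(n_years)

# A common signal every station sees (a 2 C/century trend plus shared variability)...
global_signal = 0.02 * years + rng.normal(0.0, 0.15, n_years)
# ...plus independent local weather/measurement noise at each station.
local_noise = rng.normal(0.0, 0.5, (n_stations, n_years))
anomalies = global_signal + local_noise

def trend_per_century(series):
    return np.polyfit(years, series, 1)[0] * 100.0

all_stations = trend_per_century(anomalies.mean(axis=0))
subset = rng.choice(n_stations, size=60, replace=False)
sixty_stations = trend_per_century(anomalies[subset].mean(axis=0))

print(f"trend from all {n_stations} stations : {all_stations:.2f} C/century")
print(f"trend from 60 random stations   : {sixty_stations:.2f} C/century")
```

The two estimates land close together because the independent local noise averages out quickly; Al’s counter-argument, in effect, is that the real temperature field contains structure that the single-shared-signal assumption throws away.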

March 1, 2011 2:43 pm

carrick,
espen and al don’t believe in an MWP or an LIA.
too few thermometers to establish an average during those times.

Bruce of Newcastle
March 1, 2011 4:17 pm

steven mosher says:
March 1, 2011 at 2:30 pm
Thanks, Steven, lots to go through. First one I’m fairly randomly looking at is:
Why hasn’t Earth warmed as much as expected? Schwartz, S. E., Sept. 22, 2010
I note two empirical options he lists are:
“Instrumental record ΔTemperature/(Forcing – Flux)
Satellite measmt.: [d(Forcing – Flux)/dTemperature]^(-1)”
The latter is of course the method used by Drs Spencer and Braswell (2010) which I referred to.

sky
March 1, 2011 4:22 pm

Carrick says:
February 28, 2011 at 2:54 pm
“If you ask how many samples you need to fully capture the Earth’s temperature field, based on a correlation length of 500-km (this is a smaller number than is quoted by Hansen’s group for the correlation length), that works out to around 2000 instruments world wide. By comparison, there are nearly 10,000 fixed instruments located on about 30% of the surface of the Earth from land-based systems alone. The oceans are less well covered, but the correlation lengths go way up.”
Even if we accept the “average temperature” as a meaningful metric, the “correlation length” does NOT define the spatial sampling rate that is required for alias-free capture of the temperature field. Although I’d be delighted to see century-long records from even as few as 2000 thermometers uniformly distributed world-wide in locations unaffected by UHI and/or land use changes, we have nothing remotely resembling that in the GHCN data base. On the contrary, there are vast stretches of the continents where no credible “rural” record of adequate duration is to be found for much more than 1000km. Only the USA and Australia are reasonably adequately sampled.
The zero-lag spatial correlation does not even begin to address the issue of low-frequency coherence between spatially separated points. While that coherence may be quite high in satellite data that sample over a swath tens of kilometers wide, it far too frequently fades into insignificance when temperatures inside an instrument shelter are measured. That’s what makes the “trends” of surface station records so unstable and inconsistent a metric. The upshot is that we really have no accurate grasp of what the “global average temperature” has done in the last century.
The zero-lag spatial correlation does not even begin to address the issue of low-frerquency coherence between spatially separated points. While that coherence may be quite high in satellite data that sample over a swath tens of kilometers wide, it far too frequently fades into insignificance when temperatures inside an instrument shelter are measured. That’s what makes the “trends” of surface station records so unstable and inconsistent a metric. Th upshot is that we really have no accurate grasp of what the “global average temperature” has done in the last century.