Pielke Sr. on warm bias in the surface temperature trend – "provides evidence of the significant error in the global surface temperature trend analyses of NCDC"

New Paper Documents A Warm Bias In The Calculation Of A Multi-Decadal Global Average Surface Temperature Trend – Klotzbach Et Al (2009)

Guest post by Roger Pielke Sr.

When I served on the committee that resulted in the CCSP (2006) report on reconciling the surface and tropospheric temperature trends, one of the issues I attempted to raise was a warm bias in the construction of long-term surface temperature trends when near-surface land minimum temperatures (and maximum temperatures when the atmospheric boundary layer remained stably stratified all day, such as in the high-latitude winter) were used. This error will occur even for pristine observing sites. Tom Karl and his close associates suppressed this perspective, as I document in

Pielke Sr., Roger A., 2005: Public Comment on CCSP Report “Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences“. 88 pp including appendices.

As a result of the poor treatment by Karl as Editor of the CCSP (2006) report, I decided to investigate this issue, and others, in a set of peer-reviewed papers with colleagues, which include

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.

Pielke Sr., R.A., and T. Matsui, 2005: Should light wind and windy nights have the same temperature trends at individual levels even if the boundary layer averaged heat content change is the same? Geophys. Res. Letts., 32, No. 21, L21813, doi:10.1029/2005GL024407.

Lin, X., R.A. Pielke Sr., K.G. Hubbard, K.C. Crawford, M. A. Shafer, and T. Matsui, 2007: An examination of 1997-2007 surface layer temperature trends at two heights in Oklahoma. Geophys. Res. Letts., 34, L24705, doi:10.1029/2007GL031652.

Fall, S., D. Niyogi, A. Gluhovsky, R. A. Pielke Sr., E. Kalnay, and G. Rochon, 2009: Impacts of land use land cover on temperature trends over the continental United States: Assessment using the North American Regional Reanalysis. Int. J. Climatol., accepted.

We now have a new paper accepted that further documents a warm bias in the use of multi-decadal global surface temperature trends to assess global warming.

It is

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., in press.

Our paper is also effectively discussed in my son’s weblog

Evidence that Global Temperature Trends Have Been Overstated

The abstract of the Klotzbach et al (2009) paper reads

“This paper investigates surface and satellite temperature trends over the period from 1979-2008. Surface temperature datasets from the National Climate Data Center and the Hadley Center show larger trends over the 30-year period than the lower-tropospheric data from the University of Alabama-Huntsville and Remote Sensing Systems datasets. The differences between trends observed in the surface and lower tropospheric satellite datasets are statistically significant in most comparisons, with much greater differences over land areas than over ocean areas. These findings strongly suggest that there remain important inconsistencies between surface and satellite records.”

We tested the following two hypotheses:

1. If there is no warm bias in the surface temperature trends, then there should not be an increasing divergence with time between the tropospheric and surface temperature anomalies [Karl et al., 2006]. The difference between lower troposphere and surface anomalies should not be greater over land areas.

2. If there is no warm bias in the surface temperature trends then the divergence should not be larger for both maximum and minimum temperatures at high latitude land locations in the winter.

Both were falsified.

The paper includes the following text:

“We find that there have, in general, been larger linear trends in surface temperature datasets such as the NCDC and HadCRUTv3 surface datasets when compared with the UAH and RSS lower tropospheric datasets, especially over land areas. This variation in trends is also confirmed by the larger temperature anomalies that have been reported for near surface air temperatures (e.g., Zorita et al., 2008; Chase et al., 2006; 2008, Connolley, 2008). The differences between surface and satellite datasets tend to be largest over land areas, indicating that there may still be some contamination due to various aspects of land surface change, atmospheric aerosols and the tendency of shallow boundary layers to warm at a greater rate [Lin et al., 2007; Esau, 2008; Christy et al., 2009]. Trends in minimum temperatures in northern polar areas are statistically significantly greater than the trends in maximum temperatures over northern polar areas during the boreal winter months.

We conclude that the fact that trends in thermometer-estimated surface warming over land areas have been larger than trends in the lower troposphere estimated from satellites and radiosondes is most parsimoniously explained by the first possible explanation offered by Santer et al. [2000]. Specifically, the characteristics of the divergence across the datasets are strongly suggestive that it is an artifact resulting from the data quality of the surface, satellite and/or radiosonde observations. These findings indicate that the reconciliation of differences between surface and satellite datasets [Karl et al., 2006] has not yet occurred, and we have offered a suggested reason for the continuing lack of reconciliation.”

What our study shows is that maps prepared by NCDC, as given below, are biased presentations of the surface temperature anomalies.

BIASED NCDC MAP OF SURFACE TEMPERATURE ANOMALIES

While additional research is required in order to determine the magnitude of the bias, we can use the analysis of trends at two levels near the surface from the Lin et al (2007) paper as an estimate. I reported on this in my weblog post

Back of the Envelope Estimate of Bias in Minimum Temperature Measurements

where I wrote:

To present a preliminary estimate, let's start with the value reported for the recent trend in the global average surface temperature. The 2007 IPCC Report presents a global average surface temperature increase of about 0.2 C per decade since 1990 (see their Figure SPM.3). Their trend is derived from the average of the maximum and minimum surface temperatures; i.e.,

T(average) = [T(max) + T(min)]/2.

“From our papers (Pielke and Matsui 2005 and Lin et al. 2007), a conservative estimate of the warm bias resulting from measuring the temperature near the ground is around 0.21 C per decade (with the nighttime T(min) contributing a large part of this bias). Since land covers about 29% of the Earth’s surface (see), the warm bias due to this influence explains about 30% of the IPCC estimate of global warming. In other words, consideration of the bias in temperature would reduce the IPCC trend to about 0.14 degrees C per decade, still a warming, but not as large as indicated by the IPCC.

This is likely an underestimate, of course, as the value is not weighted for the larger bias that must occur at higher latitudes in the winter when the boundary layer is stably stratified most of the time even in the “daytime”. Moreover, the warm bias over land in the high latitudes in the winter will be even larger than at lower latitudes, as the nighttime surface layer of the atmosphere is typically more stably stratified than at lower latitudes, and this magnifies the bias in the assessment of temperature trends using surface and near-surface measurements. [Not coincidentally, this is also where the largest warming is claimed; e.g., see the map on Andy Revkin’s Dot Earth weblog.]

Land is also a higher fraction of the Earth’s surface at middle and higher latitudes in the northern hemisphere and at the highest latitudes in the southern hemisphere (see).”
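As a quick check of the arithmetic in the back-of-the-envelope estimate quoted above, here is a minimal Python sketch. The inputs are simply the values cited in the quote (0.2 C/decade IPCC trend, 0.21 C/decade land bias, 29% land fraction), not new data, and the uniform dilution of the land bias over the whole globe is the same simplification used in the text.

```python
# Back-of-the-envelope check of the warm-bias estimate quoted above.
# All inputs are the values cited in the text, not new measurements.
ipcc_trend = 0.20      # IPCC global surface trend, deg C per decade (Figure SPM.3)
land_bias = 0.21       # estimated near-surface warm bias over land, deg C per decade
land_fraction = 0.29   # fraction of the Earth's surface that is land

global_bias = land_bias * land_fraction        # bias diluted over the whole globe
adjusted_trend = ipcc_trend - global_bias

print(f"global-mean bias : {global_bias:.3f} C/decade "
      f"(~{100 * global_bias / ipcc_trend:.0f}% of the IPCC trend)")
print(f"adjusted trend   : {adjusted_trend:.2f} C/decade")
# -> about 0.06 C/decade of bias, leaving roughly 0.14 C/decade of warming
```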

Our new paper Klotzbach et al (2009) provides evidence of the significant error in the global surface temperature trend analyses of NCDC, as well as of other centers such as GISS and CRU, due to the sampling of temperatures at just one level near the surface. It is also important to recognize that this is just one of a number of errors in the NCDC, GISS and CRU data sets, as we have summarized in our paper

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.


149 Comments
Mark T
August 14, 2009 10:42 am

KW (09:16:10) :
The absolute fact is that we don’t know anything with absolute certainty.

There’s a logical contradiction in this statement, btw. Think about it for a minute or two.
Mark

August 14, 2009 10:43 am

We live 500 feet elevation gain above a huge glacier river, surrounded by the most glaciated area on earth, supposedly, other than the north and south pole. We used to live right on the river. In the winter it has been minus 60F on the river location, minus 55F at 100 feet above the river on the runway and minus 30F at our house a mere 400 feet higher. On cold days at the house we will fly up, 5,000 feet to the ridge and it will be plus 30. We will camp there to be out of the cold. Right now it is 85F at the house and snowing on that same ridge.
Is this kind of what you are talking about, in simple layman terms? I realize it is not a large area and may be weather and not climate. And one other question:
When does weather become climate?

AJ
August 14, 2009 10:46 am

According to the NCDC map, Canada is the coolest place on the planet 😉
AJ

Robert Bateman
August 14, 2009 10:47 am

Roger A. Pielke Sr (08:57:39) :
No problem there, Roger. Have a go at it with this 24-hour daily data from a Fire-weather station
http://mesowest.utah.edu/cgi-bin/droman/meso_2week.cgi?unit=0&hour1=0&type=1&var1=TMPF&day1=14&month1=08&year1=2009&stn1=WEAC1
Afternoon summer highs tend to peak abruptly whereas nighttime lows tend to form a gentle valley. Aug 9 has an average of 72.16 whereas (max+min)/2 = 73. This gives a warming bias of 0.83 degrees F.

George E. Smith
August 14, 2009 10:48 am

Well, the issue that keeps on coming up with regard to these “surface” measurements is the adequacy of the min/max average as a stand-in for the true daily mean temperature. Roger asserts that this is the standard methodology.
And I will continue to insist that the average formed from (Tmax+Tmin)/2 will only be the true time average over the complete diurnal temperature cycle, in the very special case where the diurnal temperature function is a pure sinusoidal function, which in this case would have a period of 24 hours. And in that case the time of Tmax and Tmin would be separated by exactly 12 hours. In the current methodology, Tmax, and Tmin could occur at any time and they would still use the same average, even if those two events were just one hour apart.
If the function is not sinusoidal, but is a repetitive cyclic function, then its Fourier series equivalent must contain harmonics, so frequency components with periods of 12 hours or less would exist in the signal.
As a result, a sampling regimen which takes only two samples per 24 hours is undersampling by at least a factor of two, which is a Nyquist violation of sufficient extent to render the average value corrupted by aliasing noise.
For a sinusoidal signal of frequency (f), sampled at a rate of exactly 2f, the actual continuous function is not recoverable (generally), because for example those two samples could occur at the times of zero crossings, suggesting no signal at all. BUT, even in that particular degenerate case, the average will be correct. The min/max method would locate the positive and negative peaks, so in that case the complete continuous function is recoverable; but only for a pure sinusoid. The existence of even a second harmonic component with a period of 12 hours would require four-times-daily sampling, at minimum, to correctly recover the average.
But for a general periodic function with a period of (P), the min/max samples will only yield the true average if f(t) = f(P-t)
But I am further unhappy with just the min max sampling because the long wave IR radiative effect of the temperature goes as T^4 not linearly with T, so periods with higher T radiate much more than a linear increase over periods of lower T. A true accounting of the radiative effect of a cyclic temperature change, would require taking the 4th power of the cyclic function and averaging that over the period. In all cases, that process yields a higher total radiation, than would be calculated from the average temperature.
So the min/max temperature sampling not only yields an incorrect value for the true daily mean temperature; but that average temperature still does not yield the true total radiated energy for the day, and it ALWAYS underestimates the total radiation emitted.
For typical diurnal temperature ranges, the underestimate is small, but not compared to the effect of the hundredths of degrees anomalies that are reported in these studies. Taken over the yearly temperature cycle, the total emission calculated from the annual average temperature seriously underestimates the true total energy.
And when you add on to the simple temporal violation of the sampling theorem; the effect of gross spatial undersampling; the whole process of global temperature evaluation becomes a total farce.
And climatologists still insist that it is ok to sample at places 1200 km apart; their “anomalies” are “coherent” over that range. Total BS; it’s a complete violation of the Nyquist sampling theorem.
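To make the two points above concrete, here is a minimal Python sketch. The asymmetric diurnal cycle is made up purely for illustration (it is not station data): it shows that (Tmax + Tmin)/2 need not equal the true 24-hour mean when the cycle is non-sinusoidal, and that the emission implied by a mean temperature understates the mean of the T^4 emission.

```python
import numpy as np

# Hypothetical, asymmetric diurnal temperature cycle (deg C), sampled hourly.
# A broad afternoon peak over a flat night makes the cycle non-sinusoidal.
hours = np.arange(24)
T = 15 + 10 * np.exp(-((hours - 15) / 4.0) ** 2)   # peak near 3 pm

true_mean = T.mean()                    # true 24-hour time average
minmax_mean = (T.max() + T.min()) / 2
print(f"true 24-h mean  : {true_mean:.2f} C")
print(f"(Tmax + Tmin)/2 : {minmax_mean:.2f} C")     # differs from the true mean

# Radiative point: emission scales as T^4 (in kelvin), so the emission implied
# by the average temperature understates the average of the actual emission.
T_K = T + 273.15
print(f"4th root of mean(T^4): {(T_K ** 4).mean() ** 0.25 - 273.15:.3f} C")
print(f"mean(T)              : {T_K.mean() - 273.15:.3f} C")
```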

Robert Bateman
August 14, 2009 10:55 am

JimB (09:09:07) :
Skeptical Sam says there ain’t no stinkin’ sea levels rising.
Have you been to your favorite beach lately?
Stimulate your local economy and take a drive today.
The $ spent will be well invested.
Then you can Twitter your Congressman with a pic or two.
Write a story for the local news.

Nogw
August 14, 2009 10:57 am

Nobody will be able to oppose the fiercest and most sarcastic denier: Nature.

E.M.Smith
Editor
August 14, 2009 11:07 am

Ellie in Belfast (23:56:36) : E.M. Smith over at http://chiefio.wordpress.com/ has been analysing the records contributing to GIStemp and has some very interesting interim conclusions that the bias is due to short-lived (in reporting terms) stations.
Mostly what I’ve found is that stations with reporting histories over 100 years long show almost no change over time (a couple of tenths of a degree C) while those with shorter lives carry all the “warming signal” seen in the bulk data. If you focus in on those short-lived stations, you see a large bolus of stations with very warm N.H. winter temperatures (tropical stations?) that come into the record, coincident with the AGW “warming”…
This means that the warming in the GIStemp data set (spread evenly over the year and over the globe) mis-states what the data actually say. The “warming signal” is carried only in a part of the GHCN data. It is not evenly distributed in the time domain (it is only in N.H. winters, not in summers) and it is not in all thermometers (the old, stable set do not show the signal). A signal concentrated in time and space is averaged into the bulk data in GIStemp (which is a filter that tries to suppress the impact of spatial distribution, but is not a perfect filter), so we get “global” warming when there is none.
http://chiefio.wordpress.com/2009/08/13/gistemp-quartiles-of-age-bolus-of-heat/
BTW, in thinking about why the long-lived stations might be so stable and have less UHI, it occurred to me that they are all 100+ years old and there were not a lot of airports in 1908 … This screen will have moved all the “over the tarmac” thermometers into a group by themselves. Perhaps part of that “bolus of heat” is the dramatic growth of airports in the aviation age.
I can fairly easily produce the lists of station IDs by quartile, if anyone wants it. It would be fascinating to see if the warming pattern matches urban vs rural or airport vs non or a dozen other UHI related things.
At this point I can not speak to that issue, only speculate about it, but it would not be difficult to turn the lists of station IDs into spatial and “type of terrain” maps. It just takes more time… Anyone who wants a list of stations, just leave a note on one of the pages at my site and I’ll either send you the list or just put them up in a posting if there is enough interest. Right now it would just be a list of station IDs. With a bit of work I can match those to more station information (lat / long, terrain type flag, etc.). If demand is high enough I can do that too.
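For readers who want to try the cohort comparison E.M. Smith describes, here is a minimal Python/pandas sketch. The data layout (one row per station-year with columns station_id, year, anomaly) and the 100-year cutoff are assumptions for illustration, not the actual GHCN/GIStemp formats or processing.

```python
import numpy as np
import pandas as pd

def cohort_trends(df: pd.DataFrame, min_years_long: int = 100) -> dict:
    """Compare the warming trend carried by long-lived vs short-lived stations.

    'df' is assumed to have one row per station-year with columns
    station_id, year, anomaly (deg C). Returns deg C per decade per cohort.
    """
    record_len = df.groupby("station_id")["year"].agg(lambda y: y.max() - y.min() + 1)
    long_ids = record_len[record_len >= min_years_long].index
    short_ids = record_len.index.difference(long_ids)

    trends = {}
    for label, ids in [("long-lived", long_ids), ("short-lived", short_ids)]:
        sub = df[df["station_id"].isin(ids)]
        yearly = sub.groupby("year")["anomaly"].mean()     # simple unweighted cohort mean
        slope_per_year = np.polyfit(yearly.index, yearly.values, 1)[0]
        trends[label] = 10 * slope_per_year                # deg C per decade
    return trends
```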

Douglas Taylor
August 14, 2009 11:16 am

The only reliable method of showing that the earth has warmed about 1 degree C in the last 100 or so years is borehole measurements (abandoned oil wells, etc.). However, the time resolution of this measurement is lousy. One solves the heat equation (diffusion equation) with a variety of surface boundary conditions.
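For anyone curious what "solving the heat (diffusion) equation with a surface boundary condition" looks like in practice, here is a minimal Python sketch of the forward problem behind borehole reconstructions. The diffusivity, grid, and the assumed 1 C-per-century linear surface warming are illustrative values only, not a real reconstruction.

```python
import numpy as np

# 1-D heat (diffusion) equation, dT/dt = kappa * d2T/dz2, solved with an
# explicit finite-difference scheme; the surface boundary condition is a
# linear 1 C warming over 100 years. All values are illustrative.
kappa = 1.0e-6                       # thermal diffusivity of rock, m^2/s (typical order)
dz, nz = 2.0, 150                    # 2 m grid spacing, 300 m deep profile
dt = 0.4 * dz ** 2 / kappa           # time step within the explicit stability limit
seconds_per_year = 365.25 * 86400
years = 100
nsteps = int(years * seconds_per_year / dt)

T = np.zeros(nz)                     # temperature anomaly profile, deg C
for step in range(nsteps):
    T[0] = step * dt / seconds_per_year / years    # surface warms linearly to 1 C
    T[1:-1] += kappa * dt / dz ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = 0.0                                     # undisturbed at depth

print("anomaly at 20 m, 50 m, 100 m depth (C):",
      [round(T[int(z / dz)], 2) for z in (20, 50, 100)])
```

The way the century of surface warming smears out with depth in this sketch is exactly why the time resolution of borehole reconstructions is coarse.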

August 14, 2009 11:21 am

E M Smith said
“I can fairly easily produce the lists of station IDs by quartile, if anyone wants it. It would be fascinating to see if the warming pattern matches urban vs rural or airport vs non or a dozen other UHI related things.”
Yes please!
tonyb

Don S.
August 14, 2009 11:21 am

Mac asked: Given all the uncertainties and biases in the temperature record can anyone say with any certainty that this planet has actually warmed, or cooled, over the last 100 years?
Maybe, but probably not. I feel. IMHO. But if it did, CO2 is the cause.

Don S.
August 14, 2009 11:23 am

T.: So we absolutely don’t know that we don’t absolutely know anything?
I’d concur with that if I knew how.

steven mosher
August 14, 2009 11:27 am

EM.
It would be cool to do a web interface so that people could select which type of sites they want to select for a gistemp run.
Also, WRT long stations: Karl’s reference station method has always given me doubts… it’s the method by which short stations are stitched together to give a long station record.
Note: Hansen makes some arbitrary decisions on record lengths and overlaps etc. With GISTEMP running you can test the sensitivity of these decisions. Remember, Gavin is on record saying that all we need are 60 good stations for the WORLD.
So, select a screen:
1. Long uninterrupted record
2. Non-airport
3. Rural (by population and nightlights)
etc.
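A minimal Python sketch of the kind of station screen described above. The metadata column names (record_years, airport, population, nightlights) and the thresholds are hypothetical; real station inventories encode these fields differently.

```python
import pandas as pd

def select_stations(meta: pd.DataFrame,
                    min_record_years: int = 80,
                    max_population: int = 10_000,
                    max_nightlights: int = 10) -> pd.DataFrame:
    """Keep long-record, non-airport, rural stations for a GISTEMP-style run.

    Column names and thresholds are illustrative assumptions, not the real
    GHCN/GISTEMP metadata fields.
    """
    mask = (
        (meta["record_years"] >= min_record_years)   # 1. long uninterrupted record
        & (~meta["airport"].astype(bool))            # 2. non-airport
        & (meta["population"] <= max_population)     # 3. rural by population...
        & (meta["nightlights"] <= max_nightlights)   #    ...and by nightlights
    )
    return meta[mask]
```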

E.M.Smith
Editor
August 14, 2009 11:49 am

Nogw (06:04:37) :
It would be a great thing to have that NCDC map above corrected after the findings of Surfacestations.org, which would show a wider bias, so closer to reality. Though it accounts only for the US, considering that their quality is among the best in the world, the errors found can be extrapolated to less developed areas.

FWIW, I can take a list of station IDs and make a “decade average trend” table of data for any set of station IDs.
This means that whenever Anthony has a list of “good stations” vs “bad stations” I can make a table of annual and decade averages for each group.
We can directly inspect the trends in the data in the good vs bad cohorts.
All it takes is for Anthony to send me a list of station IDs for each group (whenever the station survey is done).
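As a sketch of the decade-average table described here: the layout below (one row per station-year, plus a dictionary mapping station IDs to a "good"/"bad" siting label) is assumed for illustration only.

```python
import pandas as pd

def decade_table(df: pd.DataFrame, groups: dict) -> pd.DataFrame:
    """Decade-average anomalies for each station cohort.

    'df' is assumed to hold one row per station-year (station_id, year, anomaly);
    'groups' maps station IDs to a label such as "good siting" / "bad siting".
    """
    df = df.assign(group=df["station_id"].map(groups),
                   decade=(df["year"] // 10) * 10)
    return df.pivot_table(values="anomaly", index="decade",
                          columns="group", aggfunc="mean")
```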

Don Keiller
August 14, 2009 12:04 pm

Another gem from the Met Office in response to their justification for paying “performance bonuses”.
First my question and then the reply
Dear Mr. Kidds, as a tax-payer I find Dr. Pope’s defence of Met Office staff bonuses both disingenuous and patronising.
Why should tax-payers pay bonuses for merely doing one’s job? And not very well I might add!
Long range forecasts have proved nothing short of comical. I suggest that instead of using GCMs with built-in “warming”, you should try a more empirical approach which correlates global and regional climate with ENSO, PDO, etc., rather than the increasingly doubtful CO2 = warming hypothesis.
Secondly, to claim credit for excellent short term forecasts is absurd. Rain-sensing RADAR, satellites and numerous automated weather stations (all of which WE paid for) make 1-2 day
forecasts a matter of simple observation and extrapolation in the vast majority of scenarios. Indeed it is analogous to me predicting what the weather will be like in a couple of hours by looking at the sky in the direction the wind is coming from.
The British Public deserve more than Dr. Pope’s self-serving comments.
Dr. D. Keiller,
Thank you for your email regarding Met Office staff bonuses.
In response, I hope it may help if I provide some background on the Met Office bonus scheme.
The Met Office has a corporate bonus scheme which is based on the delivery of 19 objectives within our four Key Performance Targets and five additional measures.
The level of the bonus paid depends on the number of these targets achieved, with different targets contributing different payments. A flat rate award is paid to all staff that have received a satisfactory performance mark in their annual review.
In 2008/09 the Met Office achieved 16 of the 19 objectives. As a result the bonus was reduced and paid at 74% of the total available award.
The payment relates to our performance between April 2008 and April 2009 and includes successes such as:
· accurate forecast of snow in February 2009
· establishment of the Flood Forecast Centre in partnership with the EA
· accurate prediction of the number of tropical storms in the north Atlantic
· putting our climate change information on Google Earth
· completion of climate change research into the impacts on the Thames estuary
· development of a forecast in support of sufferers of SAD
· the provision of de-icing forecasts to airlines to keep their services moving – leading airlines that use this service have a proven reduction in icing-related delays of 85% and reduced icing costs of up to 30%.
Thank you for taking the time to contact the Met Office.
Yours sincerely
Mark Beswick on behalf of the Customer Feedback Manager
Met Office FitzRoy Road Exeter Devon EX1 3PB United Kingdom
Tel: 0870 900 0100 or +44 (0)1392 88 5680 Fax: +44 (0)1392 88 5681
E-mail: enquiries@metoffice.gov.uk http://www.metoffice.gov.uk

Gary Hladik
August 14, 2009 12:18 pm

If I understand correctly, with a near-ground warm bias in the temperature record the so-called “average temperature of the earth” can increase without causing the “signature” tropical troposphere “hot spot”. Or rather, the “hot spot” moves from the troposphere to ground level?
Are there any dangerous consequences of near-ground temps increasing faster/decreasing slower than the rest of the atmosphere?

August 14, 2009 12:20 pm

Mark T (10:42:32) :
KW (09:16:10) :
“This statement is false”

Gary Pearse
August 14, 2009 12:52 pm

Pierre Gosselin (03:28:25) :
“Scientists have uncanny ways of reaching the results they want to see. The same applies for Arctic sea ice projections. Back in July, 16 modellers projected Arctic sea ice for this September, with a majority projecting far lower levels than what is likely to occur”
I think the temp cell dots for the arctic will be turning blue this fall and winter
Nansen shows a big uptick in ice area (probably will be modified in a day or so).
http://arctic-roos.org/observations/satellite-data/sea-ice/ice-area-and-extent-in-arctic
Even so, there are several corroborating data sets: Cryosphere Today shows a small uptick in the arctic sea ice extent:
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/current.365.jpg
And the Arctic Basin ice also shows a recent uptick (also compare with the beginning of the curve, which is a year ago):
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/recent365.anom.region.1.html
Also the DMI in the sidebar of WUWT shows that the arctic basin above 80N has dropped below freezing – it has been below the long term average for about 40% of the summer thaw period:
http://ocean.dmi.dk/arctic/meant80n.uk.php
And finally, it is 0C at Alert, Nunavut (about 82N), and it was -2C yesterday or the day before. Is that sailboat still waiting for the NW passage to break up – doesn’t look promising. They must be relatives of the Catlin Crew.

Dan
August 14, 2009 12:57 pm

Moderator
[perhaps, missed it first time through. action taken ~ctm]

Robert Bateman
August 14, 2009 1:07 pm

E.M.Smith (11:07:18) :
It gets even juicier for the warmist: If you select a shorter period of record, you get a lot more new record maximums. It’s also a convenient way to get rid of those pesky 1930’s record max temps that won’t go away.
It backfires during record low temps that keep popping up: Then they backpedal to find ‘previous’ record lows to say ‘this is normal’. Albeit they run off the back end of their cherry-picked time span.
So, when they need the 100 year record, they remodel it with a heavy tilt.
When they need new record highs, they cherry-pick the span. When they need to bury the new record lows, they backpedal.

Robert Bateman
August 14, 2009 1:10 pm

Gary Pearse (12:52:23) :
I checked the full history of those DMI sidebar years.
2009 is threatening to walk away with the trophy like Tiger Woods buries his competition. He gets ahead early and closes out with mind-boggling shots.

Dajida
August 14, 2009 1:13 pm

It’s not popular to explicitly invoke the Law of Exemptions when stating such truths as “All truth is relative.”

Nogw
August 14, 2009 1:17 pm

Dan (12:57:54) : Not at all! You don’t know our technology, so I will repeat it for you. BTW, it is the same for all old cultures:
1.They smile at you
2.They bow at you
3.When you turn around, they fart at you.
This simply means, worse than you can imagine: Don’t take them seriously.

Nogw
August 14, 2009 1:20 pm

So, Dan (12:57:54) : Don’t take yourself too seriously. You know…mother nature will take care.

Evan Jones
Editor
August 14, 2009 1:36 pm

How is shade accounted for in the measurements and the calculation?
NOAA standards indicate that there should be no south shading (in the Northern hemisphere), as this would shade the sensor all day long. Also, the sensor shouldn’t be shaded at or around TMax (TMin doesn’t matter as much, as it’s usually at night).
The rest of the day doesn’t matter much, either, unless temperature is being measured hourly or continually (as it is, I believe, in Germany).
What is the accepted procedure for deriving the “average” surface temperature? Is it the average of the average of each day, the mean of the maximum/minimum each day, or the mean of the maximum/minimum for the stated period?
Average of Min and Max, taken on a daily basis, averaged over each month.
Finally, how does the selection of the calculation method affect the outcome or is the outcome independent of the method of calculation?
A site bias can affect max, min, or both. Another problem is with adjustment procedures: Removing outliers, TOBS (Time of Observation) bias, SHAP (Station History Adjustment Procedure — or not), FILNET (combining all this plus a very controversial “fill-in” of missing data), and “final” (UHI) adjustment. Then there is homogenization. The results are then gridded to assure each station’s data is correctly “weighted”.
After all the dust clears, final trends are three to four times warmer than raw trends (since 1880).
Is it the temperature of the solid surface of the land at the point of measurement or is it the temperature of the atmosphere measured just above the surface?
Satellites use microwave proxies to calculate lower troposphere (et al) temperatures but do not measure surface temperatures, per se.
Surface station measurements are supposed to be taken from 5 – 6 feet off the ground. (A few are out of compliance.)
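A minimal Python sketch of the two averaging steps described above: the daily (Tmax + Tmin)/2 averaged over a month at a station, and a cos(latitude) area weighting when combining gridded anomalies into a global mean. This is only an illustration of the arithmetic, not NOAA's or GISS's actual processing chain (which also includes the TOBS, SHAP, FILNET, UHI and homogenization adjustments listed above).

```python
import numpy as np

def station_monthly_mean(tmax_daily, tmin_daily):
    """Station monthly mean: average of daily (Tmax + Tmin)/2 over the month."""
    tmax = np.asarray(tmax_daily, dtype=float)
    tmin = np.asarray(tmin_daily, dtype=float)
    return np.mean((tmax + tmin) / 2.0)

def global_mean(gridded_anom, lats):
    """Area-weighted mean of a (lat x lon) anomaly grid; weights ~ cos(latitude)."""
    zonal_mean = np.nanmean(gridded_anom, axis=1)   # average around each latitude band
    weights = np.cos(np.radians(lats))              # grid cells shrink toward the poles
    return np.average(zonal_mean, weights=weights)
```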