Pielke Sr. on the Muller testimony

Comments On The Testimony Of Richard Muller At the United States House Of Representatives Committee On Science, Space And Technology

By Dr. Roger Pielke Sr.

First, as posted on my son’s weblog in

Global Temperature Trends

the global temperature anomaly is essentially irrelevant in terms of climate change issues that matter to society and the environment. Even in terms of global warming, it is a grossly inadequate measure, as discussed, for example, in

Pielke Sr., R.A., 2003: Heat storage within the Earth system. Bull. Amer. Meteor. Soc., 84, 331-335.

Pielke Sr., R.A., 2008: A broader view of the role of humans in the climate system. Physics Today, 61, Vol. 11, 54-55.

Unfortunately, however, the global average surface temperature has become the icon of the IPCC community and of the policy debate. As my son wrote in his post

“The debate over climate change has many people on both sides of the issue wrapped up in discussing global average temperature trends. I understand this as it is an icon with great political symbolism. It has proved a convenient political battleground, but the reality is that it should matter little to the policy case for decarbonization.”

This political focus resulted in Richard Muller's testimony yesterday on his Berkeley Earth Surface Temperature project to the Science, Space and Technology Committee of the House of Representatives. In his (in my view, premature) testimony he makes the following claims:

“The world temperature data has sufficient integrity to be used to determine global temperature trends”

“…. we find that the warming seen in the “poor” stations is virtually indistinguishable from that seen in the “good” stations.”

“The Berkeley Earth agreement with the prior analysis surprised us, since our preliminary results don’t yet address many of the known biases”

The last statement from his testimony contradicts the first two.

All his study has accomplished so far is to confirm that NCDC, GISS and CRU honestly used the raw observed data as the starting point for their analyses. This is not a surprising result. We have never questioned this aspect of their analyses.

The uncertainties and systematic biases that we have published in several peer-reviewed papers, however, remain unexplored so far by Richard Muller and colleagues as part of The Berkeley Earth Surface Temperature project. We summarized these issues in our paper

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229

where the issues include:

  • a systematic bias in the use of multi-decadal trends in minimum air temperatures
  • the use of surface observing sites that are not spatially representative of the region
  • the failure to consider the variation of surface air temperature trends with height above the surface
  • the lack of incorporation of the effect of concurrent multi-decadal trends in the surface air absolute humidity
  • the absence of the statistical documentation of the uncertainty of each step in the adjustment of raw data to a “homogenized data set”  (e.g. time of observation bias; equipment changes; station moves)
  • the need to assess the absolute temperatures at which a temperature trend occurs, since a temperature anomaly at a cold temperature has less of an effect on outgoing longwave radiation than the same anomaly at a warm temperature
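The last bullet can be made quantitative with the Stefan–Boltzmann law: for a small anomaly ΔT, the change in outgoing longwave flux is approximately 4σT³ΔT, so the same anomaly radiates away more strongly at a warm absolute temperature than at a cold one. A minimal sketch (the 250 K and 300 K values are illustrative choices, not figures from the papers above):

```python
# Sensitivity of outgoing longwave radiation (OLR) to a small anomaly
# at different absolute temperatures, via the Stefan-Boltzmann law.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def olr_change(t_kelvin, anomaly=1.0):
    """Approximate OLR change (W/m^2) for a small anomaly: dF ~ 4*sigma*T^3*dT."""
    return 4.0 * SIGMA * t_kelvin**3 * anomaly

cold = olr_change(250.0)   # e.g. a high-latitude winter surface
warm = olr_change(300.0)   # e.g. a tropical surface

print(f"1 K anomaly at 250 K: {cold:.2f} W/m^2")
print(f"1 K anomaly at 300 K: {warm:.2f} W/m^2")
```

The warm-surface response is roughly (300/250)³ ≈ 1.7 times the cold-surface one, which is why averaging anomalies without regard to the absolute temperature at which they occur loses radiatively relevant information.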

We have explored most of these issues in peer-reviewed papers and found them to be important remaining uncertainties and biases. Richard Muller and his colleagues have not yet examined these concerns, yet chose to report on very preliminary results at a House hearing. A sample of our papers includes:

Fall, S., N. Diffenbaugh, D. Niyogi, R.A. Pielke Sr., and G. Rochon, 2010: Temperature and equivalent temperature over the United States (1979 – 2005). Int. J. Climatol., DOI: 10.1002/joc.2094

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841

Montandon, L.M., S. Fall, R.A. Pielke Sr., and D. Niyogi, 2011: Distribution of landscape types in the Global Historical Climatology Network. Earth Interactions, 15:6, doi: 10.1175/2010EI371

Steeneveld, G.J., A.A.M. Holtslag, R.T. McNider, and R.A Pielke Sr, 2011: Screen level temperature increase due to higher atmospheric carbon dioxide in calm and windy nights revisited. J. Geophys. Res., 116, D02122, doi:10.1029/2010JD014612.

Richard Muller should be examining the robustness of our conclusions, as part of his project.

Richard does appropriately acknowledge Anthony’s and Steve McIntyre’s contributions in his testimony, where he writes

“Without the efforts of Anthony Watts and his team, we would have only a series of anecdotal images of poor temperature stations, and we would not be able to evaluate the integrity of the data. This is a case in which scientists receiving no government funding did work crucial to understanding climate change. Similarly for the work done by Steve McIntyre. Their “amateur” science is not amateur in quality; it is true science, conducted with integrity and high standards.”

This is well-deserved recognition for both research colleagues. One does not need a “Ph.D.” after one’s name to do world-class research!

Anthony Watts has prepared an excellent response to Richard Muller’s presentation in

Response_to_Muller_testimony.

and

Clarification on BEST submitted to the House

His insightful dissection of the problems with Richard Muller’s presentation and of NCDC’s inconsistent behavior (with which I completely agree) includes the statements that

“NOAA’s NCDC created a new hi-tech surface monitoring network in 2002, the Climate Reference Network, with a strict emphasis on ensuring high quality siting. If siting does not matter to the data, and the data is adequate, why have this new network at all?”

“Recently, while resurveying stations that I previously surveyed in Oklahoma, I discovered that NOAA has been quietly removing the temperature sensors from many of the USHCN stations we cited as the worst (CRN4, 5) offenders of siting quality. For example, here are before and after photographs of the USHCN temperature station in Ardmore, OK, within a few feet of the traffic intersection at City Hall.”

“Expanding the search my team discovered many more instances nationwide, where USHCN stations with poor siting that were identified by the surfacestations.org survey have either had their temperature sensor removed, closed, or moved. This includes the Tucson USHCN station in the parking lot, as evidenced by NOAA/NCDC’s own metadata online database….”

He concludes with

“It is our contention that many fully unaccounted for biases remain in the surface temperature record, that the resultant uncertainty is large, and systemic biases remain. This uncertainty and the systematic biases needs to be addressed not only nationally, but worldwide. Dr. Richard Muller has not yet examined these issues.”

I completely agree with Anthony’s submission to the House committee in response to Richard Muller’s testimony. Richard Muller has an important new approach to analyzing the surface temperature data. We hope he chooses a more robust and appropriate venue for presenting his results.

John McManus
April 1, 2011 9:28 am

Richard M:
I got one. [snip]

April 1, 2011 9:29 am

I don’t regard global temperature as irrelevant, but I do regard the current published graphs from NOAA, NASA, and the Met Office as falsifications of what the real temperature actually is. Muller seems to rely on the paper of Hansen et al. in the December issue of Reviews of Geophysics, which cannot be believed. It is quite lengthy and tells us how they put together the current global temperature chart. There are some innovations, like the use of satellite night-light irradiance for urban corrections, but the one thing missing is any reference to satellite temperature measurements. Satellites are more accurate and have uniform coverage of both hemispheres and the ocean, which cannot be said of any other source. From the beginning Hansen has been dead set against using satellite data, and small wonder: satellites do not show the late-twentieth-century warming that Hansen used in 1988 to claim that anthropogenic global warming had arrived.

All three global temperature curves (NASA, NOAA, and the Met Office) that show this warming are cooked. As in falsified. The collusion started in the late seventies and is still going on. To learn how it was done, read my book “What Warming?” More recent fabrications involve an overall raising of the temperatures of the twenty-first century by two tenths of a degree and the fabrication of 2005 as having been warmer than the 1998 super El Nino.

The only global warming within the last thirty-one years was a step warming that started in 1998, raised global temperature by a third of a degree, and then stopped in 2002. There was no warming before or after that. It, and not some greenhouse effect, was the cause of the very warm first decade of our century. The eighties and the nineties were a period of temperature oscillations in step with the ENSO system in the Pacific. The twenty-first century started with a six-year warm period, the twenty-first-century high, that ended with the 2008 La Nina cooling. That La Nina indicated resumption of the oscillating climate that the super El Nino had interrupted. It was followed by the 2010 El Nino, and we are presently halfway into the next La Nina. I predicted all this in 2009. But GISS completely ignored El Ninos in their previous publications and even now gets rid of them with a running mean. Nevertheless, I note that Hansen has now taken note of my prediction and is using it to do his own prognostications for the next year without acknowledging it. Arctic warming is real but is not greenhouse. Its cause is warm Atlantic currents that have been flowing north for more than a century now.

Sam Hall
April 1, 2011 9:31 am

Olen says:
April 1, 2011 at 6:48 am
(snip) The splendid reputation of NASA was used to promote warming to the unsuspecting public in a covert way.

NASA’s reputation started downhill in the last days of the Apollo project. PC managers instead of engineers took control and trashed the old NASA. The entire thing needs to be disbanded.
You want to put a package in orbit, call Lockheed Martin or whoever. The Air Force can handle their own.

MarkW
April 1, 2011 9:37 am

Without knowing the station histories, it is impossible to determine when a station went from being a well sited station to a poorly sited station.
If a station’s siting does not change during the study period, then there is no reason to believe that the quality of the siting will introduce any trend into the data.
The good Dr. admits that they have not done any of this analysis, then he proceeds to proclaim that siting problems don’t introduce a bias to the trend.
I don’t see how anyone can even attempt to justify such a statement. If he was asked to comment on the data, the only professional answer would be to state that the data has not been analyzed sufficiently to render a judgement, and leave it at that.
If Judith Curry is actually defending the way this man is presenting his partial data, then shame on her as well.

Jim K
April 1, 2011 9:48 am

It seems as though the Muller–Berkeley project is intended to confirm and/or get Mann, Hansen et al. off the hook, not to get a better record.

bob paglee
April 1, 2011 10:40 am

Arno has it exactly right:
“Satellites are more accurate and have a uniform coverage of both hemispheres and the ocean which cannot be said about any other source. From the beginning Hansen has been dead set against using satellite data and small wonder: satellites do not show the late twentieth century warming that Hansen used in 1988 to claim that anthropogenic global warming had arrived.”
Temperature data developed from surface measurements over time is intolerably corrupted by urban heat islands that have grown enormously over time and by innumerable inappropriate measurement sitings, including many poorly estimated pure extrapolations. It is deeply disturbing to note that such inaccurate and misleading (if not biased or false) testimony is being delivered to Congress.
A truly interested Member of Congress can easily find an excellent satellite-data-derived global temperature chart covering the past 30+ years at Dr. Roy Spencer’s web site: DrRoySpencer.com. The latest chart, through Feb 2011, shows a change of minus 0.2C since 1979, when the satellite data began.
The data is quite noisy, and my own estimate of the slope from 1979 to the present represents a projected change of about 1.3C per century (at most), hardly a rate worthy of the current “warming” hysteria. Moreover, there is evidence of a current weakening of the Sun’s magnetic field that may be presaging a period of global cooling to come.

Bloke down the pub
April 1, 2011 11:21 am

If I remember correctly from previous posts, the data Muller was referring to was a double-blind test run of just 2% of the total available. As such he could have no idea whether a currently bad station has always been that way, or whether its decline was more recent. Without that information his claims of parity between good and bad stations hold no water.

APACHEWHOKNOWS
April 1, 2011 11:42 am

Just ask the Government men of the Interior Department: Reservations are good for Native Americans, just look at the data we have published.
No need for you to go and see the shacks, the massive unemployment, the poor health, the desolation; no, just look at the data we Ph.D.’s have at our offices in Washington, D.C.
All of us agree on this, just ask around.
Fancy Pants Liar this Dr. Muller.

NikFromNYC
April 1, 2011 11:45 am

The idea that station quality and urban heating significantly affect the global average plot is still a mere hypothesis. But whether or not it changes the plot much is quite irrelevant, since these global averages are truncated prior to 1880: coverage greatly contracts as you carry back from 1900 to 1800, and contracts again back to 1700. Thus the global average is simply not long enough to say anything significant about trend changes, given that fluctuations (noise) obscure trends (signal).
Comparing the global average T to local proxy records is comparing apples to oranges, and naturally leads to hockey sticks since noise tends to dominate and cancel out and attenuate temperature signals, leading to horizontal handles. The magnitude of temperature variation is hard to calibrate once a proxy record is obtained as well and is a fairly subjective process especially since any given proxy is again a very local record such as a single site ice or mud core.
It’s very odd that proxy reconstructions, dozens of them, that show that recent warming is nothing new but has dual precedent (Roman and medieval periods) are simply ignored in alarmist circles rather than focused upon. In physics or chemistry or even mathematics and sometimes but not enough in biology and medicine it is *exactly* data that does not fit the central organizing theories that leads to great excitement and a flurry of speculation with great anticipation that such cracks in a theory might lead to huge new distinctions where before there was mere confusion. In climate science there is none of this at all! That seems to be because the data that doesn’t fit nulls and voids the entire edifice of their field, relegating it back to obscurity. This would be a huge drop in status above and beyond the drop in funding and investment opportunities. Loss of status is one of the deepest fears not just in humans but in all social animals.
I think the surface stations project has dragged on at least a couple of years too long now and is being used as propaganda, merely, by pedantically and bureaucratically citing the equivalent of parking tickets that point out technical siting violations. That the very longest-running real thermometer records, single-site records that invite little debate about statistical methodology, show neither a greenhouse *nor* an urban upswing in trend whatsoever (amen), makes me consider ongoing posts like these to be much sound and fury, signifying nothing. Those long-running records are plotted here: http://oi49.tinypic.com/rc93fa.jpg . I note that since these were plotted, the Central England chart has plunged back down to below the base trendline.
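The record-length argument above can be illustrated numerically: for an ordinary least-squares trend through independent year-to-year noise, the slope’s standard error falls roughly as n^(-3/2), so a 150-year record constrains a trend about eleven times more tightly than a 30-year one. A toy calculation (the 0.25 °C interannual noise level is an assumed, illustrative figure, not taken from any real record):

```python
# Why record length matters: the standard error of an OLS trend
# through white noise shrinks rapidly as the record lengthens.
import math

def trend_se(n_years, noise_sd):
    """Standard error of the OLS slope for n annual values with
    i.i.d. noise: se = noise_sd / sqrt(sum((t - tbar)^2)), t = 0..n-1."""
    tbar = (n_years - 1) / 2.0
    sxx = sum((t - tbar) ** 2 for t in range(n_years))
    return noise_sd / math.sqrt(sxx)

sd = 0.25  # assumed interannual noise, deg C
short = trend_se(30, sd) * 100    # uncertainty in deg C per century
long_ = trend_se(150, sd) * 100

print(f"30-yr record:  +/- {short:.2f} C/century")
print(f"150-yr record: +/- {long_:.2f} C/century")
```

With these assumptions a 30-year record cannot distinguish a trend of a few tenths of a degree per century from noise, while a 150-year record can.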

tonyb
Editor
April 1, 2011 12:01 pm

NikFrom NYC
The CET has been plunging for around five years. The 2010 annual mean of 8.83C was the same as in the first year of the record, 1659.
It would be useful if you could bring your chart up to date. It is one I often refer to.
Tonyb

Bigdinny
April 1, 2011 12:01 pm

In my never ending quest for balance, I routinely check out the comments at RC to see what is happening on the “other side”. The lumps being doled out to Dr. Muller seem to be equal on both sides- what is interesting is the disparity of opinion. It appears that while no one now trusts him, the warmists think he is a skeptic and the skeptics think he is a warmist. It proves that you just can’t please everyone.

Phil
April 1, 2011 12:06 pm

Would it not be appropriate to also mention the issue of moist enthalpy (see http://pielkeclimatesci.wordpress.com/2005/07/18/what-does-moist-enthalpy-tell-us/)?
Pielke, R.A. Sr., C. Davey, and J. Morgan, 2004: Assessing “global warming”
with surface heat content. Eos, 85, No. 21, 210-211. http://www.climatesci.org/publications/pdf/R-290.pdf

Michael J. Dunn
April 1, 2011 12:14 pm

[snip – over the top criticism of BEST]

Hu McCulloch
April 1, 2011 12:27 pm

• the absence of the statistical documentation of the uncertainty of each step in the adjustment of raw data to a “homogenized data set” (e.g. time of observation bias; equipment changes; station moves)

TOBS is an important factor, but rather than trying to adjust for it using the Karl et al. regression model, it should just be treated as a break in the series, i.e., in effect a new station, with a new offset, at the same location. See discussion at http://climateaudit.org/2007/09/24/tobs/ .
Similarly for station moves and equipment changes.
An even bigger factor is the “homogenization” adjustment itself, which as I understand it averages bad stations together with good, so that the good stations indeed don’t behave much differently than the bad. The problem is not the uncertainty of this adjustment, but that it is done at all.
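The break-in-series approach described here can be sketched as a regression with a step dummy: estimate a single trend plus a separate offset for the post-break segment, rather than adjusting the raw values before analysis. A minimal illustration on synthetic data (the trend, shift size, noise level, and break date are all invented for the example):

```python
# Sketch: treat a time-of-observation change as a series break by
# fitting a common trend with a separate post-break offset, instead
# of adjusting the raw data. Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, break_at = 60, 30
t = np.arange(n, dtype=float)
true_trend = 0.01                  # deg C / yr (assumed)
step = -0.3                        # artificial TOBS-style shift (assumed)
y = true_trend * t + np.where(t >= break_at, step, 0.0) \
    + rng.normal(0.0, 0.05, n)

# Design matrix: intercept, linear trend, post-break offset dummy.
X = np.column_stack([np.ones(n), t, (t >= break_at).astype(float)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, trend, offset = coef

print(f"recovered trend:  {trend:.4f} deg C/yr")
print(f"recovered offset: {offset:.3f} deg C")
```

Fitting the offset jointly leaves the raw observations untouched and yields an explicit uncertainty for the break; an adjustment applied before trend estimation hides that uncertainty.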
I look forward to Muller’s paper analyzing these issues, but agree with Anthony that he should not have jumped the gun with Anthony’s data.
On the other hand, I hope Anthony will make his site evaluations public as soon as he settles on the final version, and not wait until he has his own analysis of the classifications.
REPLY: Hu, when our paper is accepted, we will publish an SI with the station classifications, as has always been our intent. – Anthony

Hu McCulloch
April 1, 2011 12:43 pm

In addition to comparing good USHCN stations to bad ones, it would be useful to compare good USHCN stations to US GHCN or CRU stations. The latter are largely airports, if Ohio is representative, while the former are almost all non-airports. Any difference would likely carry over to world GHCN/CRU averages.

Stephan
April 1, 2011 12:47 pm

Ok, let’s say we accept the initial results and all the others (I said I would accept the BEST analysis). Why in hell is ALL the data up to 1980 significantly, and I would say very significantly, BELOW the 0C anomaly baseline? Eyeballing, it seems to comprise 80-90% of data up to 1980. There is something very wrong here. How can the baseline be correct? The data is actually “cold” but the baseline is warm? In my view this is showing that the NORMAL temp is COLDER (80-90% of the time). This is in ALL the global temperature graphs produced by GISS, NOAA, HADCRUT and BEST. The way the baseline has been calculated must be wrong! Maybe I’m just plain stupid and maybe I ain’t explaining myself, but I hope you get the gist of what I am saying. I would like to get an explanation from Willis or E Smith or even Anthony please. Thanks if you have the time.

bob paglee
April 1, 2011 1:06 pm

Nik from NYC says:
“That the very longest running real thermometer records, single site records that invite little debate about statistical methodology, show no sign of either a greenhouse *nor* urban upswing in trend whatsoever (amen), makes me consider ongoing posts like these to be much sound and fury, signifying nothing. Those long running records are plotted here: http://oi49.tinypic.com/rc93fa.jpg . I note that since these were plotted the Central England chart has plunged back down to below the base trendline.”
Those historical charts showing temp data from 7 different areas are very interesting and the slope or temperature gradients seem extremely modest. The chart for central England is easiest to read so I printed it and replicated the temperature scale shown at the 1650 origin. Using my replicated scale to measure the height of the trend line at 1800, I read it as 9.2. I repeated this for the end point marked 2000 and read this as 9.5. This would indicate a rise of only 0.3 over that 200-year interval, or a gradient of 0.15 per century. This seems to be off reality by an order of magnitude. In an earlier post today I described how I had done the same for the 30-year satellite temp chart (data from UAH Huntsville) and estimated the temperature gradient to be about 1.3C per century, albeit over a much shorter period than the 200-year chart. Am I missing something?

Theo Goodwin
April 1, 2011 1:08 pm

bob paglee says:
April 1, 2011 at 9:19 am
“…. we find that the warming seen in the “poor” stations is virtually indistinguishable from that seen in the “good” stations.”
“In the land of the blind, the one-eyed man is king. In the land of the one-eyed, where some see only what they want to see, global warming is the emperor’s imperative diktat.”
Please tell me where my reasoning is mistaken. Whenever you find a spike in temperature in a station record and that spike was caused by a station move, you treat the pre-spike station and the post-spike station as different stations. Then you look for a trend in each set of data and those trends are your evidence for changes in temperature. However, in treating the one station as two, you are reading out of your own data all the evidence for matters such as encroaching UHI, changes that cause a site to become a poor site, station moves, and similar matters. The problem of UHI strikes me at this time as the clearest. UHI encroaches. It grows outward from city centers. As UHI encroaches on a station, it creates a spike that is actually the first step onto a plateau. You want to treat that first step as the first reading for a new station and you want to do so because you believe that only the trend matters. Let’s identify the “old station” as measurements “A through B” and the “new station” as measurements “C through D.” If time proves that the two stations show the same trend, your data will show that nothing changed. But something did change. The measurements “A through D” do not have the same trend as the newly created “old station” and the newly created “new station.” So, this method corrupts the data and systematically so.
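The comment’s argument can be checked with a toy series: a pure 0.5 °C step with no underlying trend yields two exactly flat “stations” when the record is split at the break, yet a clearly positive apparent trend when analyzed whole (all numbers invented for illustration):

```python
# Illustration of the splitting problem: a step change (e.g. UHI
# encroachment) split into "two stations" gives two zero trends,
# while the unsplit series shows an apparent warming trend.
# Synthetic, noise-free data for clarity.

def ols_slope(y):
    """Ordinary least-squares slope of y against t = 0..len(y)-1."""
    n = len(y)
    tbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((t - tbar) * (v - ybar) for t, v in enumerate(y))
    den = sum((t - tbar) ** 2 for t in range(n))
    return num / den

series = [10.0] * 30 + [10.5] * 30   # 0.5 C step at year 30, no trend

print(ols_slope(series[:30]))   # "old station": flat
print(ols_slope(series[30:]))   # "new station": flat
print(ols_slope(series))        # whole record: positive apparent trend
```

Here the unsplit 60-year record shows about 0.0125 °C/yr (roughly 1.25 °C/century) purely from the step, while each half shows zero, which is the information the splitting procedure discards.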

Gaylon
April 1, 2011 1:30 pm

“James Sexton says:
April 1, 2011 at 7:26 am: “…Guys and gals, its been nearly 30 years since this became an issue. It won’t happen. Academia will ride this pony until it drops. There is no impetus to discern the truth. And there will be no epiphany for the charlatans.”
_________
Agreed; unfortunately we’ll all (all y’all) have to be dead before any honest recognition of the true motives (political) and the true agenda (power) of this scam will be acknowledged. One day our great-great-great grandchildren will be sitting in a social studies class debating the validity of “those who forget the past are doomed to re-live it” whilst reviewing this era with a chuckle and guffaw, probably in a warm underground bunker beneath a mile of glacial ice at what was once Atlanta. 😉

bob paglee
April 1, 2011 1:31 pm

Oops! I meant to say CURRENT reality. Actually, the gradient shown by the red line running from 1960 to 2000 is very close to my estimate of 1.3C, and this illustrates the risk of making temperature judgements over fairly short periods. Why don’t we just wait another 70 years before jumping to conclusions about global warming?

NikFromNYC
April 1, 2011 2:00 pm

“The CET has been plunging for around five years. The mean average in 2010 at 8.83C was the same as the first year of the record in 1659.
It would be useful if you could bring your chart up to date. It is one I often refer to.”
Where’s my oil money funding to pay for at least beer for the whole weekend it will take to re-plot all these damned graphs? It’s *hard* to format these things right so they don’t look all nerdy. Hours and hours of Photoshop to squash the scale dates to size in true screen-font format. Minneapolis also ended recently and the data from Wolfram Alpha doesn’t overlap it well enough to extend it reliably. I’ll get to it after tax time though. I’ve left out about 3-4 records of cities I’ve never heard of like De Bilt. A couple actually do show recent warming spikes too, but they have obscure names (to a US citizen). I like the impact of well known cities. If you know of any other continuous records I should add, let me know (nikwillmore@gmail.com). Paris exists but has a weird dual linear kinked shape. I loved that I could add Copenhagen! Early on, I merely posted far and wide the CET record and got clobbered for it since it was just one record. So I added the others and now the silence from alarmist circles is positively deafening!

rpielke
April 1, 2011 2:15 pm

Phil – Thank you for your comment. The paper
Fall, S., N. Diffenbaugh, D. Niyogi, R.A. Pielke Sr., and G. Rochon, 2010: Temperature and equivalent temperature over the United States (1979 – 2005). Int. J. Climatol., DOI: 10.1002/joc.2094
uses moist enthalpy.

Dr. Dave
April 1, 2011 2:28 pm

I am not so much upset at how Dr. Muller chose to represent his own incomplete research to Congress, though I think it was foolish, inappropriate and disingenuous. I am furious that he chose to dismiss and diminish the work of Anthony Watts et al. in his testimony to Congress. How can anyone believe that station siting and the UHI effect have no influence on trends? But he had his shot at dismissing the work of the Surface Stations Project before Congress.
Anthony has been hosed by Tom Karl and now again by this Berkeley fraud named Muller. My personal belief is that what Mr. Watts & team have uncovered is probably as damaging to the AGW fraud as Climategate.

Owen Hobson
April 1, 2011 2:33 pm

I happened to hear Dr. Muller on a talk show program in Denver in the summer of 2009. He was on the program to discuss a number of topics, but it was stated clearly that he was a believer in AGW. When it came time to justify his global warming beliefs he gave as his primary reasons (1) because the IPCC said so and (2) because climate models said so. And that was it, although he added that there were “exaggerations on both sides” of the debate. So if the warmers believe that Dr. Muller is a skeptic, I believe they are incorrect. After listening to his rationale I could only guess that he wasn’t all that informed on the subject.

Stephan
April 1, 2011 2:46 pm

ALL the GISS, HADCRUT, BEST, etc. graphs show warming from 1980 on, NOT before. Please explain.