In case you missed it, on Sunday Dr. Roger Pielke Sr. wrote a statement of support for Watts et al 2012. See: Comments On The Game Changer New Paper “An Area And Distance Weighted Analysis Of The Impacts Of Station Exposure On The U.S. Historical Climatology Network Temperatures And Temperature Trends” By Watts Et Al 2012
Today he has written another essay, showing that Watts et al 2012 and another recent paper, McNider et al 2012, have shown
“…evidence of major systematic warm biases in the analysis of multi-decadal land surface temperature anomalies by NCDC, GISS, CRU and BEST.”
The Summary:
- One paper [Watts et al 2012] show that siting quality does matter. A warm bias results in the continental USA when poorly sited locations are used to construct a gridded analysis of land surface temperature anomalies.
- The other paper [McNider et al 2012] (pdf here) shows that not only does the height at which minimum temperature observations are made matter, but even slight changes in vertical mixing (such as from adding a small shed near the observation site, even in an otherwise pristine location) can increase the measured temperature at the height of the observation. This can occur when there is little or no layer averaged warming.
Read the entire essay here
I’m really pleased to see the layer mixing mentioned. This is a significant effect of wind direction, which is often a factor of the season as well (the implication is that this introduces a seasonal bias). This is something you pick up from watching wind socks at airports year after year, and it is what I called springtime hangar turbulence, as that is when the wind would cross the runway after passing over the hangars. Nothing like rotors at 20′ AGL. It can be every bit as exciting as wake turbulence from commercial aircraft coming and going.
Pielke Pere, he knows where.
===============
“First they ignore you. Then they ridicule you, then they fight you….” MKG
With the deafening silence in the MSM following the WUWT press release Sunday, I think they have gone back to ignoring you. Unfortunately, the consensus has to be changed.
I notice the McNider paper is not even obtainable for cash. WUWT?
Compelling title might be:
Two Game Changing Papers, by Dr. Roger Pielke Sr. (note caps and elimination of a useless word like “recent”)
Correction:
One paper [Watts et al 2012] show (should be shows)
We shouldn’t forget that the results are corroborated by other evidence, such as
1) The continual upward revisions of recent warming in GISTEMP (accompanied by downward revisions of the pre-1940 data)
2) The Steirou and Koutsoyiannis paper on weather station data homogenization, which showed that up to half of global warming is artificial.
3) The Steig paper on Antarctic warming, which generated greater warming than is justified by the raw data.
4) Issues of bias within the Australian and New Zealand temperature records.
Maybe Roger Pielke Sr. did not notice the latest developments.
Steve McIntyre seems to have changed his opinion on the value of Watts et al. (2012), which he co-authored.
http://climateaudit.org/2012/07/31/surface-stations/#comments
REPLY – Rubbish. You do not know what you are talking about. ~ Evan
See also an interesting post on the same serious problem in Watts et al. (2012)
http://rabett.blogspot.de/2012/07/bunny-bait.html
REPLY – Yeah, I do agree that adjusting the well sited station trends upward to match the poorly sited station trends DOES successfully “correct” the gross disparity between their respective raw data records. A “serious problem” in the Watts paper, indeed. But for whom? #B^j ~ Evan
Given the timely coincidence with Watts et al 2012, Mr Muller is just making a clown of himself.
Running a kind of family business is not easy nowadays, but this is a desperate business case and bad timing too.
For the McNider paper: http://www.researchgate.net/publication/229149037_Response_and_sensitivity_of_the_nocturnal_boundary_layer_over_land_to_added_longwave_radiative_forcing
“An analysis team led by Anthony Watts has shown that 70% of the USHCN temperature stations are ranked in NOAA classification 4 or 5, indicating temperature uncertainties greater than 2°C or 5°C, respectively.” First sentence from Muller 2011?
I have seen some other statements from NOAA that equate station classification to degrees of uncertainty. Implicit in these discussions seems to be that as a station’s classification moves from 1 to 5, the standard deviation of the temperatures and their anomalies is expected to increase, but the mean bias is expected (assumed?) to be and remain zero.
It is true that BEST, by removing absolute temperature from the analysis and only looking at trends within and between stations, makes an effort to render the assumption moot.[1] However, the assumption of mean bias = 0 for all classifications might be at the heart of the NOAA adjustments that Watts-2012 addresses.
If one can argue that the mean bias of class 1 stations is zero and so is the mean bias of class 5s, then these can be homogenized to reduce overall uncertainty.
Look at the physics of the thermometers in relation to the classification scheme: the contamination is assumed only to increase the uncertainty. It strains credulity to believe that the contamination is equally likely to be cooling as it is warming. My hypothesis is that the mean bias of the cat 5 stations is greater (hotter) than that of the cat 1 stations, since sources of warming contamination are more common than sources of cooling. The mean bias of cat 5 stations cannot be assumed to be zero, and cannot be assumed to equal that of cat 1 stations. Once you drop the assumption of zero mean bias, the homogenization becomes much less tenable: mixing cat 5 with cat 1 will raise temperatures.
So what? Trends are all that matter, right? The problem is that a cat 5 station today was probably not installed as a cat 5. Another hypothesis: the vast majority of stations start out as cat M and are today cat N, where M <= N and mean_bias(M) <= mean_bias(N). The bias grows with time, as UHI (macro and micro) grows over time.
Two hypotheses to be tested using Leroy-2010 methodologies (a toy illustration of the mean-bias point follows the footnote below).
[1] In my opinion, BEST does finesse the mean bias well, but their scalpel and suture introduces an even larger bias by removing low-frequency content from the data processing, and is therefore no more trustworthy than the NOAA adjustments.
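To make the mean-bias argument concrete, here is a minimal synthetic sketch in Python (the true trend, the bias growth rate, the station counts and the noise level are all invented for illustration; none of these numbers come from Watts et al 2012 or the USHCN record):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2009)                 # 30-year synthetic record
true_trend = 0.10                             # assumed real signal, degC per decade
signal = true_trend * (years - years[0]) / 10.0

def simulate_stations(bias_growth, n=50, noise=0.3):
    """n station anomaly series: real signal + creeping siting bias + noise."""
    bias = bias_growth * (years - years[0]) / 10.0   # degC per decade of siting drift
    return signal + bias + rng.normal(0.0, noise, size=(n, years.size))

class1 = simulate_stations(bias_growth=0.00)  # well sited: no drift (assumption)
class5 = simulate_stations(bias_growth=0.15)  # poorly sited: growing warm bias (assumption)

def decadal_trend(series):
    """Least-squares trend of the network-mean series, degC per decade."""
    return 10.0 * np.polyfit(years, series.mean(axis=0), 1)[0]

print(f"class 1 only : {decadal_trend(class1):.3f} degC/decade")
print(f"class 5 only : {decadal_trend(class5):.3f} degC/decade")
print(f"all mixed    : {decadal_trend(np.vstack([class1, class5])):.3f} degC/decade")
```

With these made-up numbers, the pooled network prints a trend roughly midway between the class 1 and class 5 values, i.e. warmer than the well-sited stations alone, which is the point of the non-zero mean bias argument above.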
@vvenema
I don’t see any change of opinion from McIntyre in your link. Secondly, nobody can understand what was written at Rabett Run; please explain. I see he posted some figures followed by a table… I can’t see what point he’s trying to make.
REPLY – Rab is saying that after NOAA/NCDC adjustments the trends of poorly and well sited stations are near-identical. We agree. In fact, we most heartily endorse this position. For that matter, we consider it a prime feature to be shouted from the rooftops . . . ~ Evan
Well I hope you know this means you and Dr. Pielke are off Mikey Mann’s Christmas card list.
Dick McNider’s paper can be obtained from:
McNider, R.T., G.J. Steeneveld, B. Holtslag, R. Pielke Sr, S. Mackaro, A. Pour Biazar, J.T. Walters, U.S. Nair, and J.R. Christy, 2012: Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing. J. Geophys. Res., doi:10.1029/2012JD017578, in press. http://pielkeclimatesci.files.wordpress.com/2012/07/r-371.pdf
Some people are claiming that time of observation changes in rural stations may have messed up Anthony’s results, but if TOBS was a problem in Anthony’s current paper wouldn’t it have messed up his last one as well?
vvenema says: July 31, 2012 at 10:54 am
Steve McIntyre
http://climateaudit.org/2012/07/31/surface-stations/#comments
So McI is busy with TOBS (does anyone know where to find the definition/data?)
And unfortunately we have to wait for McI on Esper 2012 (who seems to have found the warming in the Middle Ages).
But at least we now know we do not have to waste our time with BEST.
From Pielke Sr’s post toward the end:
My Comment: Their statement that there is “the tendency to cancel random errors” is shown in the Watts et al 2012 and McNider et al 2012 papers to be incorrect. This means that their claim that “the large number of stations also greatly facilitates temporal homogenization since a given station may have several ‘near-neighbors’ for ‘buddy-checks’” is erroneously averaging together sites with a warm bias.
Hasn’t this been a major point behind the claim that the temperature records show warming with certainty?
>>vvenema says:
>>July 31, 2012 at 10:54 am
>>Maybe Roger Pielke Sr. did not notice the latest developments.
Still waiting for you to point out developments…
Did you read the links you posted?
In my humble opinion, climate science has no better scientist than Pielke, Sr. When genuine science replaces activist/clownish science, climate science will look just like Pielke Sr’s work.
Pielke Sr’s endorsement of Watts’ work (and of other papers) is the gold standard at this time.
Joseph Murphy says:
July 31, 2012 at 12:08 pm
>>vvenema says:
>>July 31, 2012 at 10:54 am
>>Maybe Roger Pielke Sr. did not notice the latest developments.
Still waiting for you to point out developments…
Did you read the links you posted?
……………………………………………..
I did, and I certainly did not “discover” any issues of S McI altering his perceived “value” of the Watts paper.
The other link does nothing of the sort either.
In the Fall, Watts, Nielsen‐Gammon, Jones, Niyogi, Christy, and Pielke Sr (2011) paper, Figure 4 shows that the trend in the raw data is about 0.2°C per decade. The trend in the data corrected for differences in the time of observation is 0.3°C per decade. (The rest of the homogenization does not change the mean temperature much.)
Thus the difference that the manuscript Watts et al. (2012) found between the trend in the raw data and the trend in the homogenized (adjusted) data is most likely due to forgetting to correct for the time of observation bias (TOB). This is an important issue, which is why McIntyre is having some doubts about the manuscript.
As http://rabett.blogspot.de/2012/07/bunny-bait.html points out: “There is practically no time of observation bias in urban-based stations which have taken their measurements punctually always at the same time, while in the rural stations the times of observation have changed.”
Thus the differences in the trends for the different station quality classes are likely due to forgetting to correct for the TOB. Another likely problem with the analysis of Watts et al. (2012) is that the classification of the stations was performed at the end of the study period. Stations that were poor at the end but average in the beginning will show an artificially stronger trend. Similarly, stations that were good at the end but average at the beginning will show a weaker trend (or even a negative trend). This selection bias may well explain the differences found in the trends for the various quality classes. At the very least, these are issues which a rigorous scientific paper would discuss.
For now, I will not study this manuscript any further, expecting that it will never be submitted. If it is, I am happy to review it more closely, knowing how much Anthony Watts likes blog review.
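To see the claimed end-of-period classification effect concretely, here is a minimal synthetic sketch (the station count, the class-drift model, the bias per class and the noise level are all assumptions chosen for illustration, not values taken from any paper or dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2009)
true_trend = 0.10                                   # assumed real signal, degC per decade
signal = true_trend * (years - years[0]) / 10.0

n_stations = 400
start_class = np.full(n_stations, 3.0)              # every station starts "average"
end_class = np.clip(start_class + rng.normal(0.0, 1.5, n_stations), 1, 5)

# Assume siting bias is proportional to (class - 1), so a class 5 site reads ~0.4 degC warm.
bias_per_class = 0.10
frac = (years - years[0]) / (years[-1] - years[0])  # class drifts linearly through time
class_t = start_class[:, None] + (end_class - start_class)[:, None] * frac
temps = signal + bias_per_class * (class_t - 1) + rng.normal(0.0, 0.3, (n_stations, years.size))

def decadal_trend(series):
    """Least-squares trend of the group-mean series, degC per decade."""
    return 10.0 * np.polyfit(years, series.mean(axis=0), 1)[0]

# Classify by the end-of-period rating only, as described in the comment above.
good_at_end = end_class <= 2
poor_at_end = end_class >= 4
print(f"rated good at end : {decadal_trend(temps[good_at_end]):.3f} degC/decade")
print(f"rated poor at end : {decadal_trend(temps[poor_at_end]):.3f} degC/decade")
print(f"true signal       : {true_trend:.3f} degC/decade")
```

Under these assumptions, stations that drifted toward class 5 print a trend steeper than the true signal and stations that improved toward class 1 or 2 print a weaker one, purely because the rating was taken at the end of the period; whether that is what happened in the real station history is the open question.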
I have a small quibble with the phrase “poorly sited locations”. It is not that the stations are poorly sited; they are generally ideally sited, but only for aviation. They are inappropriately selected for use in climate analysis.
Pilots really do want to know the conditions about 5 feet above an acre of asphalt. It is unfortunate for the climate scientists that this may be only casually related to the conditions a mile away.
I wonder whether Muller is loathing the day he decided to get into climate, and to engage with the critical community 🙂 Everything could be nice and cushy… now he has to read blog posts and deal with the whole enchilada that trails it.
Rank this, NOAA:
http://www.dailycamera.com/news/ci_21192520
http://www.esrl.noaa.gov/psd/boulder/Boulder.mm.precip.html
@Bill Parsons, July 31, 2012 at 1:18 pm
Yep, them there thunderstorms is knowed as a negative feedback.
🙂
vvenema –
I doubt anybody “forgot” to account for TOBS issues in Anthony’s study. The study was not about TOBS; it was about siting issues. No claims have been made that Anthony’s paper is the last word. There is plenty of room for work on TOBS to be done on the now more accurately sorted raw data.
It is just that now, if you want accurate results (station quality 1 & 2), you will start with a raw trend that is much less steep.