The Berkeley Earth Surface Temperature project puts PR before peer review

UPDATE: see this new story

BEST: What I agree with and what I disagree with – plus a call for additional transparency to prevent “pal” review

=======================================================

Readers may recall this post last week, where I complained about being put in an uncomfortable quandary by the author of a new paper. Despite that, I chose to honor the confidentiality request of the author, Dr. Richard Muller, even though I knew that behind the scenes his team was planning a media blitz to MSM outlets. In the past few days I have been contacted by James Astill of The Economist, Ian Sample of The Guardian, and Leslie Kaufman of The New York Times, all regarding the release of the BEST papers today.

There’s only one problem: not one of the BEST papers has completed peer review.

Nor, to my knowledge, has any been published in a journal, nor is the one paper I’ve been asked to comment on in press at JGR (where I was told it was submitted). Yet BEST is making a “pre-peer-review” media blitz.

One willing participant in this blitz, with whom I spent the last week corresponding, is James Astill of The Economist, who presumably wrote the article below, though we can’t be sure, since The Economist does not have the integrity to put author names on its articles:

The full article is here. Apparently, Astill has never heard of the UAH and RSS global temperature records, nor does he seem to know that all the surface temperature records come from one source, NCDC.

Now compare that headline and subtitle to this line in the article:

It will be interesting to see whether this makes it past the review process.

And The Economist still doesn’t get it. Climate skeptics do not question whether “the world is warming”; they question the magnitude and the causes.

I was given a pre-release draft copy of one of the papers, related to my work, as a courtesy. It contained several errors, some minor (such as getting the name of our paper wrong, i.e. “Fell et al” in several places, plus a title that implied global rather than U.S. coverage) and some major enough to require revision (incorrect time period comparisons).

I made these errors known to all the players, including the journal editor, and the hapless Astill, who despite such concerns went ahead with BEST’s plan for a media blitz anyway. I was told by a BEST spokesperson that all of this was “coordinated to happen on October 20th”.

My response, penned days ago, went unheeded as far as I can tell, because I’ve received no response from Muller or the journal editor. Apparently PR trumps the scientific process now: no need for that pesky peer review, no need to address the errors with those you asked for comments prior to publication; just get it to press.

This is sad, because I had very high hopes for this project; the methodology looked very promising for getting a better handle on station discontinuity issues with their “scalpel” method. Now it looks like just another rush to judgement, peer review be damned.

Below is my response, along with the draft paper from BEST; since the cat is publicly out of the bag now, I am not bound by any confidentiality requests. Readers should note I have not seen any of the other papers (there may be as many as four; I don’t know, as the BEST website is down right now), only the one that concerns me.

My response as sent to all media outlets who sent requests for comment to me:

===========================================================

In contradiction to normal scientific method and protocol, I have been asked to provide public commentary to a mass media outlet (The Economist) on this new paper. The lead author, Dr. Richard Muller, released me from a previous request of confidentiality on the matter in an email on 10/15/2011 at 4:07 PM PST. The paper in question is:

Earth Atmospheric Land Surface Temperature and Station Quality [tentative title, may have changed], by Muller et al, 2011, submitted to the AGU journal JGR Atmospheres, which apparently has neither completed peer review on the paper nor accepted it for publication.

Since the paper has not completed peer review, it would be inappropriate for me to publicly comment on its conclusions, especially in light of a basic procedural error discovered in the methodology that will likely require a rework of the data and calculations; the conclusions may change as a result. The methodology, however, does require comment.

The problem has to do with the time period of the data used, which is inconsistent with that of two prior papers with which the Muller et al paper claims agreement. They are:

Fall et al (2011), Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends, J. Geophys. Res.

http://pielkeclimatesci.files.wordpress.com/2011/07/r-367.pdf

and

Menne et al (2010), On the reliability of the U.S. surface temperature record, J. Geophys. Res.

Both papers listed above (and cited by Muller et al) do their analyses over a thirty-year time period, while the Muller et al paper uses data for comparison from 1950–2010, as stated on lines 142–143:

“We calculated the mean temperature from 1950 to the present for each of these sites, and subtracted the mean of the poor sites from the OK sites.”

I see this as a basic failure to understand the limitations of the siting survey we conducted on the USHCN, rendering the Muller et al conclusions highly uncertain, if not erroneous.

There is simply no way siting quality can be established as static for that long. The USHCN survey was based on photographs and site surveys starting in 2007, plus historical metadata. Since the siting of COOP stations changes as volunteers move, die, or discontinue their service, we know the record of siting stability to be tenuous over time. This is why we tracked only from 1979 onward and excluded stations whose locations were unknown prior to 2002. 1979 represented the practical limit at which we assumed we could reasonably ascertain siting conditions from our survey.

We felt that the further back the station siting changes occurred, the more uncertainty was introduced into the analysis, thus we limited meaningful comparisons of temperature data to siting quality to thirty years, starting in 1979.

Our ratings from surfacestations.org are assumed to be valid for the 1979–2008 period, but with Muller et al doing their analysis from 1950, the station survey data are rendered moot, since neither Menne et al nor Fall et al made any claim that the station survey data were representative prior to 1979. The comparisons made in Muller et al are inappropriate because they are outside the bounds of our station siting quality data set.

Also, by using a 60-year period, Muller et al spans two 30-year climate normals periods, further complicating the analysis. Both Menne et al and Fall et al spanned only one.

Because of the long time periods involved in the Muller et al analysis, and because both Menne et al and Fall et al made no claims of knowing anything about siting quality prior to 1979, I consider the paper fatally flawed as it stands, and I recommend it be removed from publication consideration by JGR until it can be reworked.

For me to comment on the conclusions of Muller et al would be inappropriate until this time-period error is corrected and the analysis reworked with time-scale-appropriate comparisons.

The Berkeley Earth Surface Temperature analysis methodology is new, and may yield some new and potentially important results on siting effects once the appropriate time period comparisons are made. I welcome the BEST effort provided that appropriate time periods are used that match our work. But, by using time period mismatched comparisons, it becomes clear that the Muller et al paper in its current form lost the opportunity for a meaningful comparison.

As I was invited by The Economist to comment publicly, I would recommend rejecting Muller et al in the current form and suggest that it be resubmitted with meaningful and appropriate 30 year comparisons for the same time periods used by the Menne et al and Fall et al cited papers. I would be happy to review the paper again at that time.

I also believe it would be premature and inappropriate to have a news article highlighting the conclusions of this paper until such time as meaningful data comparisons are produced and the paper passes peer review. Given the new techniques from BEST, there may be much to gain from a rework of the analysis limited to identical thirty year periods used in Menne et al and Fall et al.

Thank you for your consideration; I hope the information I have provided will be helpful in determining the best course of action on this paper.

Best Regards,

Anthony Watts

cc list: James Astill (The Economist), Dr. Joost DeGouw (JGR Atmospheres editor), Richard A. Muller, Leslie Kaufman, Ian Sample

===========================================================

Despite my concerns, The Economist author James Astill told me that “the issue is important” and decided to forge ahead, and presumably produced the article above.

Here is the copy of the paper I was provided by Richard Muller. I don’t know if they have addressed my concerns or not, since I was not given any follow up drafts of the paper.

BEST_Station_Quality (PDF 1.2 MB)

I assume the journalists that are part of the media blitz have the same copy.

I urge readers to read it in its entirety and to comment on it, because as Dr. Muller wrote to me:

I know that is prior to acceptance, but in the tradition that I grew up in (under Nobel Laureate Luis Alvarez) we always widely distributed “preprints” of papers prior to their publication or even submission.  That guaranteed a much wider peer review than we obtained from mere referees.

Please keep it confidential until we post it ourselves.

They want it widely reviewed. Now that The Economist has published on it, it is public knowledge.

There might be useful and interesting work here done by BEST, but I find it troubling that they can’t wait for science to do its work and run the peer review process first. Is their work so important, so earth-shattering, that they can’t be bothered to run the gauntlet like other scientists? This is post-normal science at its absolute worst.

In my opinion, this is a very, very, bad move by BEST. I look forward to seeing what changes might be made in peer review should these papers be accepted and published.

==============================================================

UPDATE: Judith Curry, a co-author of some of these papers, has a post on it here.

Also, I know that I’ll be criticized for my position on this, since I said back in March that I would accept their findings whatever they were; but that was when I expected them to do science per the scientific process.

When BEST approached me, I was told they were doing science by the regular process, and that would include peer review. Now it appears they have circumvented the scientific process in favor of PR.

For those wishing to criticize me on that point, please note this caveat in my response above:

The Berkeley Earth Surface Temperature analysis methodology is new, and may yield some new and potentially important results on siting effects once the appropriate time period comparisons are made. I welcome the BEST effort provided that appropriate time periods are used that match our work. But, by using time period mismatched comparisons, it becomes clear that the Muller et al paper in its current form lost the opportunity for a meaningful comparison.

Given the new techniques from BEST, there may be much to gain from a rework of the analysis limited to identical thirty year periods used in Menne et al and Fall et al.

My issue has to do with the lost opportunity of finding something new: the findings may agree, or they may differ, if run on the same time periods. I think it is a fair question to ask, since my peer-reviewed paper (Fall et al) and NOAA’s paper (Menne et al) both used 30-year periods.

If BEST can run their comparison on the 30-year period for which our data are valid, instead of 60 years, then as stated before, I’ll be happy to accept the results, whatever they are. I’m only asking for the correct time period to be used. Normally things like this are addressed in peer review, but BEST has blown that chance by taking the work public before such things could be addressed.
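The “apples to apples” check being asked for is easy to express in code. The sketch below is illustrative only: the station data are synthetic and every number is invented. It simply computes the poor-site-minus-good-site difference over a chosen window, so the same comparison can be run both over 1950–2010 and over the survey-supported 1979–2008 period.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual temperature anomalies, 1950-2010, for two hypothetical
# groups of stations (all values invented for illustration).
years = np.arange(1950, 2011)
trend = 0.01 * (years - 1950)                      # shared warming trend
good_sites = trend + rng.normal(0.0, 0.1, years.size)
poor_sites = trend + rng.normal(0.0, 0.1, years.size)

def site_quality_difference(good, poor, years, start, end):
    """Mean of poor-sited minus good-sited anomalies over [start, end]."""
    mask = (years >= start) & (years <= end)
    return poor[mask].mean() - good[mask].mean()

# The comparison run from 1950 onward versus the window the
# surfacestations.org ratings actually cover (1979-2008).
diff_long = site_quality_difference(good_sites, poor_sites, years, 1950, 2010)
diff_short = site_quality_difference(good_sites, poor_sites, years, 1979, 2008)
print(diff_long, diff_short)
```

The point of the sketch is structural: nothing about the 60-year result constrains the 30-year result, so the two windows have to be computed separately before the papers can be said to agree.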

As for the other papers supposedly being released today, I have not seen them, so I can’t comment on them. There may be good and useful work here, but it is a pity they could not wait for the scientific process to decide that.

================================================================

UPDATE 2: 12:08 PM. BEST has sent out its press release, below:

The Berkeley Earth team has completed the preliminary analysis of the land surface temperature records, and our findings are now available on the Berkeley Earth website, together with the data and our code at

www.BerkeleyEarth.org/resources.php.

Four scientific papers have been submitted to peer reviewed journals, covering the following topics:

1. Berkeley Earth Temperature Averaging Process

2. Influence of Urban Heating on the Global Temperature Land Average

3. Earth Atmospheric Land Surface Temperature and Station Quality in the United States

4. Decadal Variations in the Global Atmospheric Land Temperatures

By making our work accessible and transparent to both professional and amateur exploration, we hope to encourage feedback and further analysis of the data and our findings.  We encourage every substantive question and challenge to our work in order to enrich our understanding of global land temperature change, and we will attempt to address as many inquiries as possible.

If you have questions or reflections on this phase of our work, please contact, info@berkeleyearth.org.  We look forward to hearing from you.

All the best,

Elizabeth

Elizabeth Muller

Founder and Executive Director

Berkeley Earth Surface Temperature

www.berkeleyearth.org

=========================================================

I’m still happy to accept the results, whatever they might be, all I’m asking for is an “apples to apples” comparison of data on the 30 year time period.

They have a new technique, why not try it out on the correct time period?

UPDATE 4: Apparently BEST can’t be bothered to fix basic errors, even though I pointed them out. They can’t even get the name of our paper right:

http://berkeleyearth.org/Resources/Berkeley_Earth_Station_Quality

I sent an email over a week ago advising them of the error in names, and got a response, yet they still have not fixed it. What sort of quality control is this? “Fell et al”? Right under Figure 1.

And repeated six times in the document they released today.

Sheesh. Why can’t they be troubled to fix basic errors? This is what peer review is for. Here’s my email from October 6th

—–Original Message—–
From: Anthony Watts- ItWorks
Date: Thursday, October 06, 2011 3:25 PM
To: Richard A Muller
Subject: Re: Our paper is attached
Dear Richard,
Thank you for the courtesy, correction:  Fell et al needs to be corrected to
Fall et al in several occurrences.
When we complete GHCN (which we are starting on now) we’ll have a greater
insight globally.
Best Regards,
Anthony Watts

Here is the reply I got from Dr. Muller

—–Original Message—–
From: Richard A Muller
Date: Friday, October 14, 2011 3:35 PM
To: Anthony Watts- ItWorks
Subject: Re: Our paper is attached
Anthony,
We sent a copy to only one media person, from The Economist, whom we trust to keep it confidential.  I sent a copy to you because I knew you would also keep it confidential.
I apologize for not having gotten back to you about your comments.  I particularly like your suggestion about the title; that is an improvement.
Rich
On Oct 14, 2011, at 3:04 PM, Anthony Watts- ItWorks wrote:
> Dear Richard,
>
> I sent a reply with some suggested corrections. But I have not heard back
> from you.
>
> Does the preprints peer review you speak of for this paper include sending
> copies to media?
>
> Best Regards,
>
>
> Anthony Watts

==========================================================

UPDATE 5: The Guardian writer Ian Sample writes in this article:

The Berkeley Earth project has been attacked by some climate bloggers, who point out that one of the funders is linked to Koch Industries, a company Greenpeace called a “financial kingpin of climate science denial”.

Reader AK writes at Judith Curry’s blog:

I’ve just taken a quick look at the funding information for the BEST team, which is:

Funded through Novim, a 501(c)(3) corporation, the Berkeley Earth Surface Temperature study has received a total of $623,087 in financial support.

Major Donors include:

– The Lee and Juliet Folger Fund ($20,000)

– William K. Bowes, Jr. Foundation ($100,000)

– Fund for Innovative Climate and Energy Research (created by Bill Gates) ($100,000)

– Charles G. Koch Charitable Foundation ($150,000)

– The Ann & Gordon Getty Foundation ($50,000)

We have also received funding from a number of private individuals, totaling $14,500 as of June 2011.

In addition to donations:

This work was supported in part by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 ($188,587)

So now (pending peer review and publication) we have the interesting situation of a Koch institution, a left-wing bogeyman, funding an unbiased study that confirms the previous temperature estimates, “consistent with global land-surface warming results previously reported, but with reduced uncertainty.”

The identities of the people involved with these two organizations can be found on their websites. Let the besmirching begin.
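As a quick arithmetic check of the figures AK quotes: the stated $623,087 total only works out if the DOE contract is counted alongside the listed donations. The amounts below are copied from the quote above.

```python
# Funding figures as quoted above (USD).
donations = {
    "Lee and Juliet Folger Fund": 20_000,
    "William K. Bowes, Jr. Foundation": 100_000,
    "Fund for Innovative Climate and Energy Research": 100_000,
    "Charles G. Koch Charitable Foundation": 150_000,
    "Ann & Gordon Getty Foundation": 50_000,
    "Private individuals (as of June 2011)": 14_500,
    "DOE contract DE-AC02-05CH11231": 188_587,
}
total = sum(donations.values())
print(f"${total:,}")  # $623,087 -- matches the stated total
```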

D. J. Hawkins
October 20, 2011 4:55 pm

Theo Goodwin says:
October 20, 2011 at 1:56 pm
OregonPerspective says:
October 20, 2011 at 12:35 pm
“Anthony, what you ask can now be done, and hopefully someone will do it.
Because the Berkeley group has made both their data and their methods available for download.
Any objections you have can now be tested.”
You overlook the fact that their research was not done on the topic proposed by Anthony, his 30 years of siting data, but on a 60 year period that makes the research incomparable to the original topic. They cleverly changed the topic without acknowledging the very real effects of doing so. This is one technique for hiding the pea.

What’s the adage about a lie getting halfway around the world while the truth is getting its shoes on? Supposing that the data and methods they published allow their work to be reproduced, what happens if re-analysis of the data based on the 1979 starting point leads to a different conclusion? The MSM will have long moved on, and we’ll be a day late and a dollar short.

Maus
October 20, 2011 4:56 pm

Mosher:
“That’s just basic math.”
Yes, you already stated the “if” case. Stating it twice doesn’t make your case stronger. Show me the math that allows us to establish that our errors are normally distributed about the mean we don’t know. Now show that when we know things are skewed in one direction.
“You can find them yourself. Just download the software and run the code.”
In other words: “No, there aren’t any peer reviewed papers. My assertion is correct and you will trust me as a learned man on this issue.”
It seems to me that this is a pretty glaring hole in the data. Certainly you, arguing as strongly as you do for approaching this within science, can point me to the peer reviewed publications that establish the normal distribution of these errors when accounting for both UHI and UCI.
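Maus’s point about error distributions can be illustrated with a toy simulation (all distributions and magnitudes here are invented): symmetric errors average out across many stations, but errors skewed in one direction, e.g. siting problems that only ever add spurious warmth, leave a bias that no amount of averaging removes.

```python
import numpy as np

rng = np.random.default_rng(42)
n_stations = 1000  # hypothetical station count

# Symmetric (normal) errors: positive and negative cases cancel on average.
symmetric = rng.normal(0.0, 0.5, n_stations)

# One-sided (exponential) errors: every station is biased the same way,
# so the network average inherits the bias (~ +0.5 here).
one_sided = rng.exponential(0.5, n_stations)

print(round(symmetric.mean(), 2))  # close to 0.0
print(round(one_sided.mean(), 2))  # close to 0.5: averaging does not help
```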

Theo Goodwin
October 20, 2011 5:05 pm

“REPLY: Why don’t you look at what has actually occurred here? I’m arguing because, with BEST’s new methodology, they may have been able to find something Menne and I could not. But because they used the wrong time period, it (the siting error signal) likely got swamped in the cooling period of the ’40s–’70s.
Now consider the reverse. If I had claimed that, from recent photographs and metadata gathered from NOAA, I was able to ascertain what the siting conditions of a station looked like in 1950, I’d be vilified. Even NOAA knew enough about our siting data not to do this. Muller didn’t, and he’s invalidated his own study with this error. All I’m asking for is an identical time period. If there’s a difference, great, we learned something new. If not, I’m happy to accept it, because it matches the other studies, including my own.”
Anthony,
You are right on the money. Your case is clear as the finest crystal. Your argument is flawless. Those who have raised objections to you on this post have presented nothing by way of serious criticism. Though you are not wasting your time in response to them because you continue to articulate your case beautifully.
What everyone should recognize is that Anthony’s post is about Muller and Best’s behavior toward Anthony and about BEST’s desire for media attention over-riding their commitment to the normal practice of science. Anthony’s post is not about climate science but about very specific behavior on the part of a few climate scientists, Muller and the others at BEST.
Muller and BEST’s desire for media attention caused them to make their work public before it passed peer review. Peer Review might reveal that some of the work must be changed. Whether it does or not is not to the point. The point is that BEST valued media attention over the normal process of peer review. So, Muller and BEST chose to make headlines with work that might very well have to be rejected. Such behavior is not characteristic of honest and forthright men.

October 20, 2011 5:06 pm

You may or may not have a point with regard to their analysis – re-run their calculations for the last 30 years, as Mosher has recommended to others here, and find out. At a minimum, this pre-release publication will improve their final product, and maybe the final paper will have both a 60 and 30 year siting signal shown, specifically to address your criticisms. We have no way of knowing at this instant.
But you didn’t actually answer my questions, Anthony.

Richard "Heatwave" Berler, CBM
October 20, 2011 5:10 pm

Hi Anthony… I am asking if you have found evidence of a systematic +3C (+5F) absolute error for Tmax/Tmin for class 4 stations and >+5C (>+9F) for class 5 stations in your personal research of these numerous sites. I am not speaking of trends; I am asking about systematic errors in measured Tmax/Tmin temperatures.
Richard “Heatwave” Berler, CBM

REPLY:
NOAA did a study on this, and closed CRN5 stations as a result, see this:
http://wattsupwiththat.com/2008/01/23/how-not-to-measure-temperature-part-48-noaa-admits-to-error-with-baltimores-rooftop-ushcn-station/
I also looked at a CRN5 station in Orange county, CA that had issues
http://wattsupwiththat.com/2008/08/23/how-not-to-measure-temperature-part-69/
Anthony

wayne
October 20, 2011 5:16 pm

“””Bruce says:
October 20, 2011 at 3:31 pm
Mosher: “C02 warms the planet, the question is how much. That’s the real debate. join it”
Bright Sunshine was up in the 1990s. Bright Sunshine warms the planet, the question is how much.”
Albedo was down in the 1990s. A darker albedo warms the planet, the question is how much.
UHI was up in the 1990s. UHI makes it appear the planet is warmer, the question is how much.
Why do you always ignore everything but CO2 Mosher?””
Because Mosher now makes some portion of his living on CO2. He has to push CO2. CO2, CO2, CO2. ’Tis all about money, not actual science like albedo, clouds, and faulty sites. Most everyday people I talk to now know this (thanks to sites like WUWT and the skeptical science minds willing to shout: STOP AND LOOK AT THE DATA YOURSELF. You have a college degree. You have the ability!). The slipshod methodology of these “climate scientists” makes me sicker as every day passes. They prefer a one-dimensional view of our world, radiation up or down, never sideways, and in a 1-D world their figures (Trenberth, Kiehl) are wrong to boot! Look to some good astrophysicists to solve this problem properly.

barry
October 20, 2011 5:16 pm

L Skywalker writes:

Makes me want to go back to John Daly’s rural records and if I had time I’d get in touch with the present record-keepers and find stations where we can actually trace the full record.

Excellent! There is one person who has commented in this thread who wants to roll up their sleeves and do some work.
Whereas kakada writes:

Coming from someone who identified themselves as one of Tami’s Troupe by linking their name right to the sometimes-musician’s site, of course we’ll trust your words as absolute proof without any supporting evidence. Well actually, we won’t.

kakada could easily verify what Mark has said, but instead stops well short of that mark with a personal jibe.
So much of this thread is sniping based on unreasonable assumptions.
Anthony Watts and the team in Fall et al corroborated – with qualifications – the US average temperature record. While there may be siting issues that affect diurnal range variation, these seem to cancel each other out for average temps – and that result is consistent with errors being averaged out for large data samples. Of course, these findings are not the first or the last word, and the authors rightly caveat their conclusions.
The result that UHI is not a strong influence on global (as opposed to local) average temperature records is pretty robust by any reasonable measure. There are a few score blog posts, including from skeptics, that take raw data and adjusted data and compare, raw data for airports, rural and urban stations and find that the difference is not much – and usually in an unexpected direction, same as the BEST results.
Jeff ID took raw data and produced a warmer centennial temp record than HadCRU etc. Over at Lucia’s, Zeke and others have verified the robustness of the official records, with some slight differences, of course. The general consensus in the scientific literature also is that UHI has a negligible influence on hemispheric and global temp records. The semi-popular, or ‘blog’ consensus – including work from skeptics – is strongly that siting issues have been dealt with fairly well in the temp records, and that UHI is not an important factor. If the work collated is limited to in-depth analysis (as opposed to anecdotes, unoriginal work, or too-small samples), the general conclusion is near-unanimous.
In this regard, it is not reasonable to assign bad motives to any and every result that runs counter to your predilections.
Eventually skepticism must shine on one’s own position.
(I’m happy to post links to blog posts from skeptics and others analysing raw and adjusted data, data from rural, airport, urban etc – just say the word. There are many. Just wanted this post not to get stuck in the spam filter)

Theo Goodwin
October 20, 2011 5:16 pm

Brian Angliss says:
October 20, 2011 at 5:06 pm
BEST’s research is not the issue being discussed here. The behavior of Muller and BEST toward Anthony is the issue. BEST’s research uses a 60 year period instead of a 30 year period. Anthony believed in good faith that they would use the 30 year period. At worst, they pulled a bait and switch. At best, they owe Anthony an apology and an explanation. The issue here is not scientific but moral.

EFS_Junior
October 20, 2011 5:23 pm

Wow, reading comprehension fail. Way off the mark, junior. I’m talking about my OWN dataset, the surfacestations survey dataset, on which in our own peer-reviewed paper we did several different analyses over the 1979–2008 period, as did NOAA’s Menne et al. The reason: we couldn’t ascertain siting quality that far back; there simply is no metadata. Muller should have realized this and done identical comparisons. He didn’t. I notified him of the issue and he still stuck with it. If you can tell me what the siting characteristics of the USHCN weather stations in the USA were during the 1950s, 1960s and 1970s, then I’ll gladly retract my concerns.
Until then, they stand – Anthony
_______________________________________________________________________________
So in your surfacestations.org effort, did this effort start circa 1979?
I’ll assume the answer to the above is no.
So somehow the USHCN datasets from the 1940–70 era are suspect because … ? Because your surfacestations.org efforts did not exist back then?
Give us one good reason why the records are suspect, if you don’t mind. A reason that is scientifically testable and falsifiable. Otherwise, you’ve just moved the goal posts so high as to make it impossible for anyone to pass your own subjective criteria.
All surface temperature datasets all over the world are suspect prior to 1979 (or some other arbitrary date of your own choosing) because … ?
As to the last half of your reply: use their datasets/code, use your 30-year base period, and if the results are significantly different from the 60-year results (the subset covering your 30-year base period, that is), then a formal reply to JGR would seem to be in order.
That’s the normal process for carrying forward a true scientific debate. Just because they didn’t do it your way does not invalidate their results.

1DandyTroll
October 20, 2011 5:23 pm

So, essentially, a bunch of lefty hippies putting how the public views them before honest due process. Why am I not surprised that communists are more concerned with how others view them…

October 20, 2011 5:23 pm

Theo, Anthony has no leg to stand on to complain here, given he did the same thing in 2009 with Joe D’Aleo. It’s hypocritical of Anthony to complain about it, especially since the Heartland Institute/SPPI white paper got a great deal of media attention and, to the best of my knowledge, neither Anthony nor Joe have retracted that white paper or asked Heartland to do so. If Anthony has asked for retraction, then it should be a simple matter for him to prove it.

brett jobson sydney australia
October 20, 2011 5:32 pm

The Guardian report says that a 1 °C rise since 1950 confirms that all deniers are wrong. Another analysis of the same so-called data shows that the temperature rise since about 1800 is somewhat less than 1 °C. What conclusions are to be drawn from this?

Dave Springer
October 20, 2011 5:39 pm

JeffC says:
October 20, 2011 at 10:51 am
“I warned you that this would happen when they asked you to assist … I hope you really aren’t surprised to have gotten fleas from these dogs …”
Mega dittos.
Told you, Anthony, at the outset BEST was a rubber stamp with a predetermined outcome.
Nothing different could possibly come out of Berkeley. Get real. Muller would have been committing professional suicide if he’d come up with anything in the least bit critical of the establishment narrative. You’d need a study done by a university in eastern Europe or Asia to get an honest opinion. Maybe Czech Technical University in Prague, where the faculty is still celebrating their right to express an honest thought…

1DandyTroll
October 20, 2011 5:41 pm

Angliss says:
October 20, 2011 at 5:06 pm
“You may or may not have a point with regard to their analysis[…]”
“But you didn’t actually answer my questions, Anthony.”
Apparently, obviously, he did. :p

October 20, 2011 5:42 pm

Muller & co. show no apparent understanding of the systematic measurement error that plagues surface temperature sensors. They don’t reference any of the papers that discuss this problem, they don’t discuss the problem of systematic sensor error — except as regards to siting problems, which are irrelevant to the issue of inherent sensor measurement error — and demonstrate no awareness that systematic temperature sensor measurement error can correlate regionally.
The problem of inherent systematic sensor error completely vitiates any judgement that surface temperature time series is accurate merely because the time series from surface station A correlates well with the analogous series from surface station B, X km away.
Climate scientists seem mesmerized by this correlation, and appear unable to see beyond it. This shows a real failure of analytical thinking.
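The shared-error point can be sketched with purely synthetic numbers (everything below is invented for illustration; no real station data is involved). Two nearby stations that share a single systematic drift correlate strongly with each other while both drifting away from the truth:

```python
import numpy as np

rng = np.random.default_rng(0)
months = 240  # 20 years of monthly data

# Hypothetical "true" regional anomaly (flat, for clarity)
truth = np.zeros(months)

# A shared systematic error -- e.g. a slow drift affecting both sensors
shared_bias = np.linspace(0.0, 0.5, months)

# Two nearby stations: truth + shared drift + independent random noise
station_a = truth + shared_bias + rng.normal(0, 0.05, months)
station_b = truth + shared_bias + rng.normal(0, 0.05, months)

# The stations correlate strongly with each other...
r = np.corrcoef(station_a, station_b)[0, 1]
print(f"A-B correlation: {r:.2f}")

# ...yet both carry the same spurious half-degree "trend"
print(f"Mean departure of station A from truth: {station_a.mean():.2f}")
```

The inter-station correlation here only measures how small the independent noise is; it says nothing about the shared error, which passes through untouched.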

October 20, 2011 5:45 pm

gbaikie says:
“Hasn’t it warmed in last century and a half?”
The planet has warmed along a clear trend line since the LIA. The warming has not accelerated over the past 150 years, the past 50 years, or the past decade. In fact, the past decade has resulted in less warming than usual since the LIA.
Using a zero baseline [or any flat temperature baseline] for the y-axis when constructing a graph routinely – and falsely – shows a rapidly accelerating recent warming. The alarmist clique [which includes Muller and his BEST crowd] deliberately uses deceptive charts that ignore the natural temperature trend, which has been occurring at the same rate since before the industrial revolution began. Lying with charts is no different than lying with statistics.
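The acceleration question is easy to pose numerically. A minimal sketch on a deliberately linear synthetic series (invented numbers, not real data): fit OLS trends over nested windows; if the series really is a steady trend plus noise, the long-window slopes agree and only the short windows scatter.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1850, 2011)

# Synthetic anomaly series: a steady 0.5 degC/century linear trend
# plus noise (illustrative only -- not real data)
anoms = 0.005 * (years - years[0]) + rng.normal(0, 0.1, years.size)

def decadal_trend(y0):
    """OLS slope (degC per decade) from year y0 to the end of the series."""
    mask = years >= y0
    slope = np.polyfit(years[mask], anoms[mask], 1)[0]
    return slope * 10

# Long windows recover ~0.05 degC/decade; the ten-year window is
# dominated by noise and proves nothing either way
for start in (1850, 1961, 2001):
    print(f"trend since {start}: {decadal_trend(start):+.3f} degC/decade")
```

The point of the sketch is that a single decade is too short to distinguish a steady trend from an accelerating one at realistic noise levels.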

EFS_Junior
October 20, 2011 5:53 pm

REPLY: Sure, just sign all your posts that way from now on and it will never be an issue – Anthony
_______________________________________________________________________________
Why?
I’ve been using EFS_Junior consistently on several climate related blogs for these past few years.
That’s my only excuse, sameness, consistency, and screen name recognition.
Why don’t you demand the same accuracy in name identification from all your many readers?
I mean, I asked a simple question: you said something in the past that you now say with additional qualifiers. That was all I did. You are certainly free to change your own mind at any time, for whatever reason, as you have done here. That is all.
You would appear to have some particular fixation with true identity, which, for some reason, I have never before encountered elsewhere on these Internets.

johanna
October 20, 2011 5:54 pm

To recap my comment of last March, I hope you are right in trusting these people, but I very much fear that you are wrong.
They have been proved to be deficient both in terms of ethics and of science. They have double-crossed you, and used your good name to peddle propaganda. Not to mention setting you up with the media without even the courtesy of asking you.
Glad to read that you still have a shot or two in your locker. But beware of that friendly invitation to the model demonstration you have received. You may shortly afterwards see headlines like ‘renowned skeptic sees the light’.
Desperate times for the Team are generating desperate measures, just like pre-Copenhagen.

Slackermagee
October 20, 2011 5:56 pm

We’ve had some big ‘findings’ and subsequent, acrimonious withdrawals (see the latest news on the XMRV virus work) in biology. I’m sure millions went down the tubes looking into, and then refuting, that garbage. When it was all said and done, though, papers went out proclaiming XMRV to be garbage.
Where are your papers? Where’s the call of BS? If you don’t put something out there it’ll never change the flow.
When we look in on this ‘debate’ from outside the climate sciences… we really do shake our heads quite a bit at the skeptics. No references, no papers, nothing that’s come through any sort of peer review!

October 20, 2011 5:58 pm

Muller & co. show no apparent understanding of the systematic measurement error that plagues surface temperature sensors. They don’t reference any of the papers that discuss this problem, they don’t discuss the problem of systematic sensor error — except as regards to siting problems, which are irrelevant to the issue of inherent sensor measurement error — and demonstrate no awareness that systematic temperature sensor measurement error can correlate regionally.
Wrong.
They actually discuss Folland 2001 and take issue with his calculation of measurement error.
And they have a unique method of estimating it. If you have an issue, take it up with the Nobel prize winner.

October 20, 2011 6:00 pm

Theo, they explain clearly why they picked 1950.
Anthony thinks 1979 is better.
Anybody want to bet that the answer won’t change when people get the Matlab code and run a 1979 case?

October 20, 2011 6:01 pm

“Because Mosher now makes some portion of his living on CO2”
Sorry, the only income I make is on the book. Go figure: you’re wrong about me and CO2.

October 20, 2011 6:05 pm

Maus,
To show that the errors are normally distributed is pretty easy.
1. Sensor errors: go look at the CRN data’s tri-redundant errors.
2. Adjustment errors: go read Karl and look at the validation tests.
3. Microsite: go read Anthony’s paper.
4. UHI: BEST, Jones, Parker, Peterson.
Next.
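A rough sketch of the kind of check item 1 describes, using simulated triply-redundant readings (all synthetic; the noise level is an assumption, not CRN’s actual spec): difference the redundant sensors to cancel the common signal, then look at the residuals’ skewness and excess kurtosis, which should both sit near zero for Gaussian error.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Hypothetical triply-redundant readings of the same true temperature,
# each with independent Gaussian instrument noise (assumed 0.1 degC)
truth = 15 + 5 * np.sin(np.linspace(0, 20 * np.pi, n))
s1, s2, s3 = (truth + rng.normal(0, 0.1, n) for _ in range(3))

# Inter-sensor differences cancel the true signal, leaving pure error
diffs = np.concatenate([s1 - s2, s2 - s3, s1 - s3])

# Rough normality check: sample skewness and excess kurtosis near zero
z = (diffs - diffs.mean()) / diffs.std()
skew = np.mean(z**3)
kurt = np.mean(z**4) - 3
print(f"skew={skew:+.3f}, excess kurtosis={kurt:+.3f}")
```

With real station data the interesting outcome is the opposite one: heavy tails or skew in the differences would flag non-Gaussian sensor error.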

barry
October 20, 2011 6:05 pm

Judith Curry, who contributed to some of the BEST papers, has commented on their release:

“In my relatively minor role in all this, I have had virtually no input into the BEST PR strategy. I have encouraged making the data set available as soon as possible. They were reluctant to do this before papers had been submitted for publication, and cited the problems that Anthony Watts had with releasing his surfacestations.org dataset before papers were accepted for publication. IMO, two of the papers (decadal and surface station quality) should have been extended and further analyzed before submitting (but that very well may be the response of the reviewers/editors.) I agree that it is important to get the papers out there and not be scooped on this by others, especially since Muller and other team members have been giving presentations on this. I have no problem with posting the papers before they are accepted for publication, in fact I encourage people to post their papers before publication.
In terms of how effective the team’s overall PR strategy has been, that is a subject that is certainly open to debate. But my impression is that the group has been honest brokers in all this in terms of trying to improve our understanding of the surface temperature data, while maximizing the impact of the data set and their research.”

http://judithcurry.com/2011/10/20/berkeley-surface-temperatures-released/

Bill Illis
October 20, 2011 6:06 pm

Why do these two papers show different land temperature graphs? They appear to be meant to be the same, but they are not. All of the various temperature series differ between the two charts.
See Figure 8:
http://berkeleyearth.org/Resources/Berkeley_Earth_Averaging_Process
See Figure 1:
http://berkeleyearth.org/Resources/Berkeley_Earth_Decadal_Variations
I am also concerned with the 12-month moving average smoothing process they used, and with the fact that the end-points do not exhibit a 6-month cut-off; it looks like they just keep running a 12-month moving average to the end. In addition, the entire GHCN database does not need to be smoothed with a 12-month moving average. Only regional series with huge variability, like the US for example, need to be smoothed. These are supposed to be the top statistical experts, but I am seeing serious quality control issues in these papers.
Using the 1950 to 2010 US series for siting (when it is only valid for 1979 to 2009) is just another example of sloppiness.
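The end-point complaint can be made concrete with a hypothetical centered 12-month smoother that behaves the way the comment says it should: points within roughly half a window of either end are left undefined, rather than being filled in by a shrinking or trailing average.

```python
import numpy as np

def centered_moving_average(x, window=12):
    """Centered moving average that leaves the ends undefined (NaN)
    rather than running a trailing average out to the final point."""
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, np.nan)
    half = window // 2
    kernel = np.ones(window) / window
    valid = np.convolve(x, kernel, mode="valid")  # length n - window + 1
    out[half : half + valid.size] = valid
    return out

series = np.arange(24, dtype=float)  # simple ramp for illustration
smoothed = centered_moving_average(series)
print(smoothed[:7])  # first 6 values are NaN by construction
print(smoothed[6])   # first defined value: mean of series[0:12]
```

A smoother run all the way to the final point would instead manufacture values in exactly the most recent six months, which is where readers look hardest.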
