The Berkeley Earth Surface Temperature project puts PR before peer review

UPDATE: see this new story

BEST: What I agree with and what I disagree with – plus a call for additional transparency to prevent “pal” review

=======================================================

Readers may recall this post last week, where I complained about being put in an uncomfortable quandary by an author of a new paper. Despite that, I chose to honor the confidentiality request of the author, Dr. Richard Muller, even though I knew that behind the scenes they were planning a media blitz to MSM outlets. In the past few days I have been contacted by James Astill of The Economist, Ian Sample of The Guardian, and Leslie Kaufman of The New York Times, all regarding the release of papers from BEST today.

There’s only one problem: not one of the BEST papers has completed peer review.

Nor has one been published in a journal to my knowledge, nor is the one paper I’ve been asked to comment on in press at JGR (where I was told it was submitted); yet BEST is making a “pre-peer-review” media blitz.

One willing participant in this blitz, with whom I spent the last week corresponding, is James Astill of The Economist, who presumably wrote the article below, though we can’t be sure, since The Economist doesn’t have the integrity to put author names on its articles:

The full article is here. Apparently, Astill has never heard of the UAH and RSS Global Temperature records, nor does he apparently know that all the surface temperature records come from one source, NCDC.

Now compare that headline and subtitle to this line in the article:

It will be interesting to see whether this makes it past the review process.

And, The Economist still doesn’t get it. The issue of “the world is warming” is not one that climate skeptics question, it is the magnitude and causes.

I was given a pre-release draft copy of one of the papers as a courtesy, given its relation to my work. It contained several errors, some minor (such as getting the name of our paper wrong, “Fell et al,” in several places, plus a title that implied global rather than USA coverage) and some major enough to require revision (incorrect time period comparisons).

I made these errors known to all the players, including the journal editor, and the hapless Astill, who despite such concerns went ahead with BEST’s plan for a media blitz anyway. I was told by a BEST spokesperson that all of this was “coordinated to happen on October 20th”.

My response, penned days ago, went unheeded as far as I can tell, because I’ve received no response from Muller or the journal editor. Apparently, PR trumps the scientific process now: no need to do that pesky peer review, no need to address the errors with those you ask for comments prior to publication, just get it to press.

This is sad, because I had very high hopes for this project, as the methodology looked very promising for getting a better handle on station discontinuity issues with their “scalpel” method. Now it looks like just another rush to judgment, peer review be damned.

Below is my response, along with the draft paper from BEST; since the cat is publicly out of the bag now, I am not bound by any confidentiality requests. Readers should note I have not seen any other papers (there may be up to four; I don’t know, as the BEST website is down right now) except the one that concerns me.

My response as sent to all media outlets who sent requests for comment to me:

===========================================================

In contradiction to normal scientific method and protocol, I have been asked to provide public commentary to a mass media outlet (The Economist) on this new paper. The lead author, Dr. Richard Muller, has released me from a previous request of confidentiality on the matter, in an email of 10/15/2011 at 4:07 PM PST. The paper in question is:

Earth Atmospheric Land Surface Temperature and Station Quality [tentative title, may have changed], by Muller et al 2011, submitted to the AGU journal JGR Atmospheres, which apparently has neither completed peer review of the paper nor accepted it for publication.

Since the paper has not completed peer review, it would be inappropriate for me to publicly comment on the conclusions, especially in light of a basic procedural error that has been discovered in the methodology, one that will likely require a rework of the data and calculations, and thus the conclusions may also change. The methodology, however, does require comment.

The problem has to do with the time period of the data used, a time period inconsistent with that of two prior papers which Muller et al cite as being in agreement with their results. They are:

Fall et al (2011), Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends, J. Geophys. Res.

http://pielkeclimatesci.files.wordpress.com/2011/07/r-367.pdf

and

Menne et al (2010), On the reliability of the U.S. surface temperature record, J. Geophys. Res.

Both papers listed above (and cited by Muller et al) perform their analyses over a thirty-year period, while the Muller et al paper uses data from 1950–2010 for comparison, as stated on lines 142-143:

“We calculated the mean temperature from 1950 to the present for each of these sites, and subtracted the mean of the poor sites from the OK sites.”

I see this as a basic failure in understanding the limitations of the siting survey we conducted on the USHCN, rendering the Muller et al paper conclusions highly uncertain, if not erroneous.

There is simply no way siting quality can be established as static for that long. The USHCN survey was based on photographs and site surveys starting in 2007, plus historical metadata. Since the siting of COOP stations changes as volunteers move, die, or discontinue their service, we know the record of siting stability to be tenuous over time. This is why we tracked only from 1979 and excluded stations whose locations were unknown prior to 2002; 1979 represented the practical limit to which we assumed we could reasonably ascertain siting conditions by our survey.

We felt that the further back the station siting changes occurred, the more uncertainty was introduced into the analysis; thus we limited meaningful comparisons of temperature data against siting quality to thirty years, starting in 1979.

Our ratings from surfacestations.org are assumed to be valid for the 1979–2008 period, but with Muller et al doing analysis from 1950, the station survey data are rendered moot, since neither Menne et al nor Fall et al made any claim that the station survey data were representative prior to 1979. The comparisons made in Muller et al are inappropriate because they fall outside the bounds of our station siting quality data set.

Also, by using a 60-year period, Muller et al span two 30-year climate normals periods, further complicating the analysis. Both Menne et al and Fall et al spanned only one.

Because of the long time periods involved in the Muller et al analysis, and because both Menne et al and Fall et al made no claims of knowing anything about siting quality prior to 1979, I consider the paper fatally flawed as it now stands, and thus I recommend it be withdrawn from publication consideration by JGR until such time as it can be reworked.

For me to comment on the conclusions of Muller et al would be inappropriate until this time period error is corrected and the analysis reworked for time scale appropriate comparisons.

The Berkeley Earth Surface Temperature analysis methodology is new, and may yield some new and potentially important results on siting effects once the appropriate time period comparisons are made. I welcome the BEST effort provided that appropriate time periods are used that match our work. But, by using time period mismatched comparisons, it becomes clear that the Muller et al paper in its current form lost the opportunity for a meaningful comparison.

As I was invited by The Economist to comment publicly, I would recommend rejecting Muller et al in the current form and suggest that it be resubmitted with meaningful and appropriate 30 year comparisons for the same time periods used by the Menne et al and Fall et al cited papers. I would be happy to review the paper again at that time.

I also believe it would be premature and inappropriate to have a news article highlighting the conclusions of this paper until such time as meaningful data comparisons are produced and the paper passes peer review. Given the new techniques from BEST, there may be much to gain from a rework of the analysis limited to the identical thirty-year periods used in Menne et al and Fall et al.

Thank you for your consideration, I hope that the information I have provided will be helpful in determining the best course of action on this paper.

Best Regards,

Anthony Watts

cc list: James Astill, The Economist, Dr. Joost DeGouw, JGR Atmospheres editor, Richard A. Muller, Leslie Kaufman, Ian Sample

===========================================================

Despite my concerns, The Economist author James Astill told me that “the issue is important” and decided to forge ahead, and presumably produced the article above.

Here is the copy of the paper I was provided by Richard Muller. I don’t know if they have addressed my concerns or not, since I was not given any follow up drafts of the paper.

BEST_Station_Quality (PDF 1.2 MB)

I assume the journalists that are part of the media blitz have the same copy.

I urge readers to read it in its entirety and to comment on it, because, as Dr. Muller wrote to me:

I know that is prior to acceptance, but in the tradition that I grew up in (under Nobel Laureate Luis Alvarez) we always widely distributed “preprints” of papers prior to their publication or even submission.  That guaranteed a much wider peer review than we obtained from mere referees.

Please keep it confidential until we post it ourselves.

They want it widely reviewed. Now that The Economist has published on it, it is public knowledge.

There might be useful and interesting work here done by BEST, but I find it troubling that they can’t wait for science to do its work and run the peer review process first. Is their work so important, so earth-shattering, that they can’t be bothered to run the gauntlet like other scientists? This is post-normal science at its absolute worst.

In my opinion, this is a very, very, bad move by BEST. I look forward to seeing what changes might be made in peer review should these papers be accepted and published.

==============================================================

UPDATE: Judith Curry, a co-author on some of these papers, has a post on it here.

Also, I know that I’ll be criticized for my position on this, since I said back in March that I would accept their findings whatever they were, but that was when I expected them to do science per the scientific process.

When BEST approached me, I was told they were doing science by the regular process, and that would include peer review. Now it appears they have circumvented the scientific process in favor of PR.

For those wishing to criticize me on that point, please note this caveat in my response above:

The Berkeley Earth Surface Temperature analysis methodology is new, and may yield some new and potentially important results on siting effects once the appropriate time period comparisons are made. I welcome the BEST effort provided that appropriate time periods are used that match our work. But, by using time period mismatched comparisons, it becomes clear that the Muller et al paper in its current form lost the opportunity for a meaningful comparison.

Given the new techniques from BEST, there may be much to gain from a rework of the analysis limited to the identical thirty-year periods used in Menne et al and Fall et al.

My issue has to do with the lost opportunity of finding something new; the findings may agree, or they may differ, if run on the same time periods. I think it is a fair question to ask, since my peer-reviewed paper (Fall et al) and NOAA’s paper (Menne et al) both used 30-year periods.

If BEST can run their comparison on the 30-year period for which our data are valid, instead of 60 years, as stated before, I’ll be happy to accept the results, whatever they are. I’m only asking for the correct time period to be used. Normally, things like this are addressed in peer review, but BEST has blown that chance by taking it public before such things could be addressed.
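As a toy illustration of why the window matters, here is a minimal sketch (entirely synthetic numbers, not BEST’s data or code): a siting bias that only exists from 1979 onward is diluted by a 1950–2010 mean comparison, but measured directly by a 1979–2008 one.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2011)  # 1950..2010 inclusive

# Synthetic annual anomalies (degrees C), invented purely for illustration.
trend = 0.01 * (years - 1950)                 # shared background trend
good = trend + rng.normal(0.0, 0.1, years.size)

# "Poor" site: identical to the good site until 1979, then a siting
# change (e.g. pavement encroaching) adds a warm bias from 1979 onward.
poor = good + np.where(years >= 1979, 0.3, 0.0)

def mean_diff(lo, hi):
    """Mean poor-minus-good difference over the window [lo, hi]."""
    m = (years >= lo) & (years <= hi)
    return poor[m].mean() - good[m].mean()

print(f"1950-2010 poor-minus-good: {mean_diff(1950, 2010):+.2f} C")
print(f"1979-2008 poor-minus-good: {mean_diff(1979, 2008):+.2f} C")
```

The 60-year window recovers only a diluted fraction of the post-1979 bias, while the 30-year window recovers it in full; which answer is “right” depends entirely on the period over which the siting ratings are valid.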

As for the other papers supposedly being released today, I have not seen them, so I can’t comment on them. There may be good and useful work here, but it is a pity they could not wait for the scientific process to decide that.

================================================================

UPDATE 2 (12:08 PM): BEST has sent out their press release, below:

The Berkeley Earth team has completed the preliminary analysis of the land surface temperature records, and our findings are now available on the Berkeley Earth website, together with the data and our code at

www.BerkeleyEarth.org/resources.php.

Four scientific papers have been submitted to peer reviewed journals, covering the following topics:

1. Berkeley Earth Temperature Averaging Process

2. Influence of Urban Heating on the Global Temperature Land Average

3. Earth Atmospheric Land Surface Temperature and Station Quality in the United States

4. Decadal Variations in the Global Atmospheric Land Temperatures

By making our work accessible and transparent to both professional and amateur exploration, we hope to encourage feedback and further analysis of the data and our findings.  We encourage every substantive question and challenge to our work in order to enrich our understanding of global land temperature change, and we will attempt to address as many inquiries as possible.

If you have questions or reflections on this phase of our work, please contact, info@berkeleyearth.org.  We look forward to hearing from you.

All the best,

Elizabeth

Elizabeth Muller

Founder and Executive Director

Berkeley Earth Surface Temperature

www.berkeleyearth.org

=========================================================

I’m still happy to accept the results, whatever they might be; all I’m asking for is an “apples to apples” comparison of data on the 30-year time period.

They have a new technique, why not try it out on the correct time period?

UPDATE 4: Apparently BEST can’t be bothered to fix basic errors, even though I pointed them out. They can’t even get the name of our paper right:

http://berkeleyearth.org/Resources/Berkeley_Earth_Station_Quality

I sent an email over a week ago advising them of the error in names, and got a response, yet they still have not fixed it. What sort of quality is this? “Fell et al”, right under Figure 1.

And repeated six times in the document they released today.

Sheesh. Why can’t they be troubled to fix basic errors? This is what peer review is for. Here’s my email from October 6th:

—–Original Message—–
From: Anthony Watts- ItWorks
Date: Thursday, October 06, 2011 3:25 PM
To: Richard A Muller
Subject: Re: Our paper is attached
Dear Richard,
Thank you for the courtesy, correction:  Fell et al needs to be corrected to
Fall et al in several occurrences.
When we complete GHCN (which we are starting on now) we’ll have a greater
insight globally.
Best Regards,
Anthony Watts

Here is the reply I got from Dr. Muller

—–Original Message—–
From: Richard A Muller
Date: Friday, October 14, 2011 3:35 PM
To: Anthony Watts- ItWorks
Subject: Re: Our paper is attached
Anthony,
We sent a copy to only one media person, from The Economist, whom we trust to keep it confidential.  I sent a copy to you because I knew you would also keep it confidential.
I apologize for not having gotten back to you about your comments.  I particularly like your suggestion about the title; that is an improvement.
Rich
On Oct 14, 2011, at 3:04 PM, Anthony Watts- ItWorks wrote:
> Dear Richard,
>
> I sent a reply with some suggested corrections. But I have not heard back
> from you.
>
> Does the preprints peer review you speak of for this paper include sending
> copies to media?
>
> Best Regards,
>
>
> Anthony Watts

==========================================================

UPDATE 5: The Guardian writer Ian Sample writes in this article:

The Berkeley Earth project has been attacked by some climate bloggers, who point out that one of the funders is linked to Koch Industries, a company Greenpeace called a “financial kingpin of climate science denial“.

Reader AK writes at Judith Curry’s blog:

I’ve just taken a quick look at the funding information for the BEST team, which is:

Funded through Novim, a 501(c)(3) corporation, the Berkeley Earth Surface Temperature study has received a total of $623,087 in financial support.

Major Donors include:

– The Lee and Juliet Folger Fund ($20,000)

– William K. Bowes, Jr. Foundation ($100,000)

– Fund for Innovative Climate and Energy Research (created by Bill Gates) ($100,000)

– Charles G. Koch Charitable Foundation ($150,000)

– The Ann & Gordon Getty Foundation ($50,000)

We have also received funding from a number of private individuals, totaling $14,500 as of June 2011.

In addition to donations:

This work was supported in part by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 ($188,587)

So now (pending peer review and publication) we have the interesting situation of a Koch institution, a left-wing bogeyman, funding an unbiased study that confirms the previous temperature estimates, “consistent with global land-surface warming results previously reported, but with reduced uncertainty.”

The identities of the people involved with these two organizations can be found on their websites. Let the smirching begin.

JK
October 20, 2011 7:35 pm

Oh, this is rich!
Oh, boohoo, they tricked us! Oh, they released this before it was peer reviewed! Oh, they put everything out in public, which isn’t the scientific process! Oh, they didn’t pick the time period we did! Oh, we shouldn’t comment on anything not peer reviewed, Oh, we will NOT accept this duplicity when we liked him so much! Oh, they were in league with “the team” all the time, we know it!

Theo Goodwin
October 20, 2011 7:37 pm

FleshNotMachine says:
“…and will commenters and posters here accept the BEST results if the paper DOES pass peer-review?”
Of course not. Anthony has clearly stated the reason. His agreement with Muller was for work on the last 30 years but Muller switched to 60 years. Muller can redo his work for 60 years. However, this post is not about science but morality. Muller practices Bait and Switch.

FleshNotMachine
Reply to  Theo Goodwin
October 20, 2011 8:40 pm

Theo Goodwin commented
His agreement with Muller was for work on the last 30 years but Muller switched to 60 years. Muller can redo his work for 60 years.
Does Anthony accept the results for the 30-yr period? It’s BEST’s analysis, after all, and their publication. Hence their choice of what data to look at. It limits their results in one respect, but strengthens them in another. Scientists in a collaboration rarely agree on exactly how a project should proceed.
However, this post is not about science but morality. Muller practices Bait and Switch
Like all science, the results are about the real physical world and independent of anyone’s morality. It’s unfortunate that anyone’s feelings were hurt, but that’s for the collaboration to work out amongst themselves. If Anthony’s name is on the paper(s), he can request it be removed. He can submit a rebuttal paper, or a letter, to the journal. In any case, the world is going to focus on the study itself, as they should.

October 20, 2011 7:38 pm

Smokey, download the code and data from BEST and run it with this:
http://www.gnu.org/software/octave/
Knock your socks off!

October 20, 2011 7:39 pm

Very poor reply, Steve. Folland et al., 2001 never consider systematic sensor measurement error, and your argument from authoritative Nobelism carries no weight.
I’ve searched Muller et al’s unpublished paper (thanks for linking the pdf, Anthony). They don’t at all consider the systematic error that plagues even well-sited surface temperature sensors. This systematic error need not average away as 1/sqrt(N), and may even increase as new measurements are added into an average.
Unfortunately for Muller, and equally for Jones and Hansen, it’s impossible to recover the weather-induced systematic error of past measurements. The only way to approach it is by an empirical and widely distributed calibration experiment using homologous instruments (i.e., an array of LiG thermometer-CRS shield set-ups spread across a selection of varied climates), and then to apply that error estimate to past measurements. But no one has set up such a calibration experiment, and after-the-fact theory-based error models seem like little more than a hopeless exercise in false precision.
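A minimal numerical sketch of the bias-versus-noise distinction drawn in the comment above (hypothetical values, not anyone’s actual instrument data): averaging many readings shrinks independent random error roughly as 1/sqrt(N), but a systematic bias shared by every reading survives averaging untouched.

```python
import numpy as np

rng = np.random.default_rng(1)

true_temp = 15.0   # the "true" value being measured (made-up)
bias = 0.2         # systematic error common to every reading (made-up)
sigma = 0.5        # independent random error per reading (made-up)

for n in (10, 100, 10_000):
    readings = true_temp + bias + rng.normal(0.0, sigma, n)
    err = readings.mean() - true_temp
    # The random component of err shrinks roughly as sigma/sqrt(N);
    # the shared bias of 0.2 does not shrink at all.
    print(f"N={n:6d}  error of mean = {err:+.3f}")
```

However large N becomes, the error of the mean settles on the shared bias rather than on zero, which is the point being made against a blanket 1/sqrt(N) assumption.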

October 20, 2011 7:40 pm

We need a catchy name for this “accept their findings” course of events. Retract-gate. No. Renege-gate. Alliterative. Better, but no. Got it! Abro-gate. Yep, that’s it! Abro-gate!

October 20, 2011 7:44 pm

Holy cow, BioBob gets it, in spades. Great summary!

Maus
October 20, 2011 7:44 pm

Mosher:
“To show that the errors are normally distributed is pretty easy.”
One should certainly hope so. But the fact that you’re failing admirably at the task does give one pause. So here’s a lesson in the basics for you: to show that something is normally distributed requires that you already have the mean calculated from other sources and instruments. These can then be compared to each other for relevancy.
You with me so far, or are the basics beyond you?
The Urban Heat Island is defined as a temperature anomaly that has a positive delta with regards to the expected value. By definition it is not normally distributed about the mean. In case you were a complete neophyte to basic thinking I threw in the ‘Urban Cold Island’ so that the relevancy would be crystal clear to you.
However, you’re claiming that they *are* normally distributed about the global mean. This is a different matter and quite intriguing. So rather than having you continue to bluster and sputter over your demonstrated ignorance in calculating basic averages? Provide me a proper citation to a paper that demonstrates that the UHI and UCI are symmetrically distributed about the global mean. Certainly, as an established author you know how to cite a paper properly?
Or are those basics beyond you as well?
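As a sketch of the kind of shape check being argued about here (with entirely invented station deltas): a one-sided warm effect shows up as positive skewness about the mean, which a symmetric (normal) distribution would not have.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical station-minus-regional-expectation deltas (degrees C),
# invented purely to illustrate the shape argument. A warm-only urban
# effect produces a right-skewed distribution: most stations near zero,
# with a tail of urban stations reading warm.
deltas = np.concatenate([
    rng.normal(0.0, 0.2, 900),           # rural / well-sited stations
    np.abs(rng.normal(0.0, 0.8, 100)),   # urban stations: warm offsets only
])

def skewness(x):
    """Sample skewness: third standardized moment (0 for symmetric data)."""
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

print(f"mean delta = {deltas.mean():+.3f} C")
print(f"skewness   = {skewness(deltas):+.2f}")
```

A clearly positive skewness in such a sample is exactly the asymmetry-about-the-mean issue the comment raises; whether real station data behave this way is what a proper citation would have to settle.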

Theo Goodwin
October 20, 2011 7:50 pm

Error! Muller can redo his work for 30 years. The last 30 years.

October 20, 2011 7:51 pm

FnM, we’d need evidence of a falsifiable climate model with predictive resolution in the 1 W/m^2 range.
That’s the “evidence,” and the only evidence, of AGW that any science-minded person should require.
But we don’t have one. Therefore, most believer’s criterion of evidence for AGW is clearly unscientific.

FleshNotMachine
Reply to  Pat Frank
October 20, 2011 9:03 pm

Pat Frank commented
FnM, we’d need evidence of a falsifiable climate model with predictive resolution in the 1 W/m^2 range.
How can this depend on models? It seems to me it has to depend on observations. After all, no model can predict the future, because no one knows the exact path the world will take for CO2/CH4/NO2… emissions, land use changes, …

Theo Goodwin
October 20, 2011 7:55 pm

aceandgary says:
“so the article starts by complaining this paper hasn’t passed peer review? What happened to all the screaming that the peer review system was broken? How come I don’t hear that now?”
Let me do this for you in stark colors, black and white. The Warmista are those who trumpet the glory of peer reviewed work. Yet here is Muller choosing to violate the peer review system. Some terms that describe such behavior are Hypocrite, Back Slider, Two Faced, and so on.
We criticize Muller for violating peer review, and especially harshly, because he is one of those who constantly trumpets the glory of the peer review system. We are applying his standards to him and emphasizing that he is not applying them to himself.

Jerry
October 20, 2011 8:00 pm

Anthony, I’d like to ask you what I think is a highly relevant question: what is Mueller’s reaction to your study showing that there are significant temperature differences in Stevenson screens painted with latex paint compared to those painted with whitewash? Your study on that topic is simply irrefutable, and can be easily reproduced by anyone willing to take the time and effort to do so. It’s important because it clearly shows that a systematic upward bias in temperature measurements absolutely has been introduced into the historic temperature data. If the BEST method of analysis does not reveal such a trend, then something is very wrong with it.

Toto
October 20, 2011 8:03 pm

rw says:
October 20, 2011 at 11:35 am

Toto, peer review isn’t perfect, but in my experience it’s usually adequate. (Recall W. Churchill’s remark about democracy: that it’s the worst political system in the world, except for every other one that’s been tried). The trouble comes if an entire field has become too politicized and polarized; then all bets are off and we’re into post-normal (or is it post-modern?) science.

I get Churchill and W.C.Fields confused. Great sound-bite, but what does it really mean? The full quote and its context are interesting but OT.
Regarding peer review, I would say sometimes adequate, often dismally not. It is tradition bound; we could do better now. Getting wide input before publication is not a bad thing; the question is: what will BEST do with it? It’s quite the shotgun approach; will they even see or read most of the responses?

Richard "Heatwave" Berler, CBM
October 20, 2011 8:03 pm

Thanks Anthony…those 2 rooftop exposures are good examples. Glad they were done away with. When looking at Baltimore City (a gross example of a class #5 site!) vs. BWI, I note annual mean temperatures about +1.8C (+3.1F) for the city over BWI. The records suggest that individual days have wandered +5C (+9F) above BWI. Tmax averages +0.8C (+1.4F) above BWI; Tmins average +2.2C (+3.9F) above BWI. Tucson’s class #5 record of 117F is just 1.1C (2F) above the Tucson WSO 115F record, and 0.5C (1F) below the 118F at the Tucson Magnetic Observatory.
My MMTS would be described as class 5, and I find that its readings are not discernibly different from readings that I have made with traceable thermometers in a large adjacent field well away from the manmade surfaces. In the 25 years that I have operated the MMTS, my absolute Tmax is 114F. Surrounding towns within 75 miles have recorded 116F in the case of Encinal, Freer, and Zapata, and 114F in Hebbronville.
With the work that you have done with numerous class 4 & 5 sites, can you estimate the systematic error in Tmax/Tmin for these classes as a whole? (My experience in looking at the monthly climatological data from Texas coop stations, and my own measurements in the field and from my MMTS, suggests that the siting errors are a good fraction of an order of magnitude below the >5C (>9F) error that is attributed to class 5 stations in your paper.) I know that you were quoting that number from other papers…I’m wondering what your experience has been in this regard.

Jerry
October 20, 2011 8:04 pm

(Suggestion for future experiment: use that nifty IR imaging camera you used to refute the Climate 101 Gore experiment to show the differences between IR absorption/reflection on latex and whitewashed Stevenson screens. Also, please tell us the make and model of the tool. I wouldn’t mind buying one of them for similar studies of my own.)

Theo Goodwin
October 20, 2011 8:07 pm

Brian Angliss says:
October 20, 2011 at 5:23 pm
“Theo, Anthony has no leg to stand on to complain here, given he did the same thing in 2009 with Joe D’Aleo.”
Brian, you are one extended fallacious whine. What Anthony did with D’Aleo is irrelevant to this topic.
I have set forth some of the moral wrongs committed by Muller and BEST against Anthony. They are set forth right up above in plain English. Address them. Then you can have a debate. All you are doing at this time, bringing up entirely different matters such as D’Aleo, is flailing the air.

Reply to  Theo Goodwin
October 20, 2011 9:16 pm

Funny, Theo, but last I checked, the title of this post was “The Berkeley Earth Surface Temperature project puts PR before peer review.” Anthony did that very thing (by his own admission earlier in these comments) with the Heartland piece he wrote with D’Aleo regarding early results of the surfacestations project. Put simply, Anthony committed the same sin he now accuses Muller of committing, and it’s hypocritical of Anthony unless he’s asked that his original Heartland paper be taken down. This is especially true given that Anthony’s own paper showed that the original Heartland white paper was wrong.
If he’s asked Heartland to remove the paper (or, now that I think about it, attach a warning as to the wrongness of the paper) then Anthony isn’t a hypocrite. If he hasn’t, then he is a hypocrite. And if he’s a hypocrite on this issue, then that casts a pall over this entire post. I’d personally say that is a more fundamental issue than what Muller did or didn’t do and whether it was or was not moral/ethical. But that’s just me.

Theo Goodwin
October 20, 2011 8:12 pm

steven mosher says:
October 20, 2011 at 6:00 pm
“Theo they explain clearly why they picked 1950.
Anthony thinks 1979 is better”
There are two sides so you get to pick yours and that is the end of the matter. That is the road to moral idiocy. Anthony had an understanding with Muller. Muller betrayed Anthony’s trust. Can you even recognize the terminology of morality?

Theo Goodwin
October 20, 2011 8:14 pm

Anthony,
This post proves that you have no choice but to stand alone. Any contact with Warmista will be turned into an industry of hatred directed toward you. Clearly, it would be a horrendous mistake for you to accept Trenberth’s invitation.

Richard "Heatwave" Berler, CBM
October 20, 2011 8:15 pm

And as an addition, today is a nice example of my class #5 site vs. area temperatures within 75 miles:
Laredo MMTS 88F
Laredo AWIS 89F
Faith Ranch (SW Dimmit county) 87F
Cotulla 88F
Zapata 88F
Hebbronville 87F
This is not to say that class #5 sites should be utilized in climate change studies…I question the large systematic errors that are attributed to these sites in your source for your table of site class vs. expected error.
Richard “Heatwave” Berler, CBM

October 20, 2011 8:39 pm

“The issue of “the world is warming” is not one that climate skeptics question, it is the magnitude and causes.”
For myself I would add:
And the validity of the models, the anticipated impacts of warming, (including their projected economic effects) and the appropriate response if their forecasts were accurate.

October 20, 2011 8:49 pm

What this thread shows is that NO ONE really knows what is going on (except maybe the geologists) … there are folks with axes to grind on both sides while the chickens run around either saying the sky is falling or they are headless … but for most of us it will still be chicken soup and whether it is half a degree warmer or colder than usual won’t matter a whit. It just goes to show the whole IPCC/Global Warming thing is a huge scam with a lot of money for pretenders on both sides of the issue. Meanwhile, I shall go fill out my forms for my zero tillage carbon credits and my tree farm carbon credits and shoot a few wild beasties around the oil well for dinner and add a fish or two from my fish pond … /sarc off (sort off) regards, Wayne, from my oily Alberta horse powered ranch. ;-(

David Ball
October 20, 2011 8:50 pm

Anthony (and anyone else planning to attend), I am hoping you will go into the Trenberth arena with eyes wide open. The past actions of these fellows should not be discounted. You always say that “malice cannot be assumed where incompetence will suffice” (paraphrasing). Neither is acceptable IMO.

Theo Goodwin
October 20, 2011 9:00 pm

FleshNotMachine says:
October 20, 2011 at 8:40 pm
“Does Anthony accept the results for the 30-yr period? It’s BEST’s analysis, after all, and their publication. Hence their choice of what data to look at. It limits their results in one respect, but strengthens them in another. All scientists in a collaboration rarely agree on exactly how a project should proceed.”
Can you define one single moral concept? Can you apply it? I think not. It is really wonderful when people like you identify yourselves so clearly. I shall studiously avoid you.

FleshNotMachine
Reply to  Theo Goodwin
October 20, 2011 9:12 pm

Theo Goodwin commented
Can you define one single moral concept? Can you apply it? I think not. It is really wonderful when people like you identify yourselves so clearly. I shall studiously avoid you.
Science isn’t about morals — it’s about describing the world.

John Dodds
October 20, 2011 9:04 pm

About the BEST releases and the Peer Review
The 1896 Arrhenius paper that blamed CO2 for greenhouse-effect warming was apparently also peer reviewed. HOWEVER Mother Nature shows every NIGHT that it is the reduction in incoming energy photons, and NOT the CO2 in the greenhouse effect, that is responsible for the change in temperature: i.e. more photons means warming, fewer photons means cooling.
This means that EVERY SINGLE paper from 1896 through NOW that blames CO2 for warming, or uses this basis, is wrong, and the peer reviewers have missed it. Does that make peer review a reliable mechanism?

FleshNotMachine
Reply to  John Dodds
October 20, 2011 9:15 pm

John Dodds commented
HOWEVER Mother Nature shows every NIGHT that it is the reduction in incoming energy photons, and NOT the CO2 in the greenhouse effect, that is responsible for the change in temperature: i.e. more photons means warming, fewer photons means cooling.
This means that EVERY SINGLE paper from 1896 through NOW that blames CO2 for warming, or uses this basis, is wrong, and the peer reviewers have missed it. Does that make peer review a reliable mechanism?

AGWers blame CO2 for _additional_ warming, not _all_ warming. Let’s at least admit they’re not completely incompetent.

October 20, 2011 9:06 pm

FleshNotMachine says:
“Now, what does “per the scientific method” mean?”
It means the complete transparency of all data, metadata, methodologies, and everything else related to the author’s conclusions. How else can the claims of runaway global warming be replicated? Wake me when the alarmist contingent complies with transparency. So far, they have always stonewalled.
Picking and choosing which cherry-picked information to provide is just playing pseudo-science games. When Steve McIntyre states that all his requested information has been provided, I’ll be satisfied. Until then, your conniving and mendacious side is just being anti-science. As usual.

FleshNotMachine
Reply to  dbstealey
October 20, 2011 9:21 pm

Smokey commented
It means complete transparency of all data, metadata, methodologies, and everything else related to the author’s conclusions. Wake me when the alarmist contingent complies. So far, they have stonewalled.
BEST provides all this. What are they lacking?
And very, very few historical findings in science would fit this bill, either now or at the time they were published. Where is Kepler’s data that proved his third law? Or the data that BEST’s Perlmutter used to prove the accelerating expansion of the universe? Or the nightly raw data that showed a lack of sunspots during the Maunder Minimum?

FleshNotMachine
October 20, 2011 9:10 pm

Theo Goodwin says:
Error! Muller can redo his work for 30 years. The last 30 years.
It’s BEST’s project — they can do it for whatever period they want, and their results apply accordingly.
If anyone else, like you or Anthony, wants a different time period, they should do it themselves and publish their own paper. BEST already provides a framework to start from.
