Below is an article from TheStatsBlog that I found interesting. Here is what they say about the organization behind it.
Since its founding in 1994, the non-profit, non-partisan Statistical Assessment Service (STATS) has become a much-valued resource on the use and abuse of science and statistics in the media. Our goals are to correct scientific misinformation in the media and in public policy resulting from bad science, politics, or a simple lack of information or knowledge; and to act as a resource for journalists and policy makers on major scientific issues and controversies.
This is exactly what is needed: an arbiter of the validity of statistically presented information. The article below provides some insight into their mindset:
In a series of interviews with New York Times science writer Gary Taubes on scientificblogging, psychology professor Seth Roberts turns to the question of how you go about judging whether a scientist is trustworthy, especially when the topic is controversial. Taubes responds:
I’m a stickler about the use of words like “evidence” and “proof”. So if someone tells you there’s no evidence for some controversial belief, you can be fairly confident that they’re a bad scientist. There’s always evidence, or there wouldn’t be a controversy. If somebody says that “we proved that this was true” or “we set out to prove that this was true” that’s another bad sign. The point here, as [Karl] Popper noted, among others, is that you can never prove anything is true; you can only refute it. So researchers who talk about proving a hypothesis is true rather than testing it make me worried.
SETH: Yeah, I see what you’re saying. They overstate; they twist things around to make it come out the way they want. They are way too sure of what they…
TAUBES: Yes, and the really good scientists are the ones, almost by definition, who are most skeptical of evidence that seems to support their beliefs. They’re most aware of how they could have been fooled, how they could have screwed up, or how they might have missed artifacts in their experiment that could have explained what they observed. They’re very careful about what they say. If you ask them to play devil’s advocate, and tell you how they could have screwed up, then at the very least, they’ll say “Well, if I knew how I could have done it, I would have checked it before I made the claim”. So when I’m talking about discerning the difference between a good scientist and a bad scientist, I’m talking about how they speak about their research, the evidence itself, its presence or absence.
Worth bearing in mind when you hear something which appears to overturn consensus expressed in strident terms: Were all the other possible explanations for the phenomenon considered? How did the researchers test their theory and data against the best possible countervailing research? Why do their conclusions offer better explanatory power?
I would add that telling them apart also extends to being able to graciously admit mistakes. Recently in this blog, one chemist who goes by a lagomorphic pseudonym made a simple mistake when citing pH trends.
Decreasing pH is more acidic, increasing more basic. A solution with pH 8 is more acidic than a solution with pH 9.
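The trend follows directly from the definition pH = −log₁₀[H⁺]: a lower pH means a higher hydrogen ion concentration, hence a more acidic solution. A minimal Python sketch (the function name is my own, just for illustration):

```python
def hydrogen_ion_concentration(ph):
    """Return [H+] in mol/L from the definition pH = -log10[H+]."""
    return 10 ** (-ph)

# pH 8 has ten times the hydrogen ion concentration of pH 9,
# so the pH 8 solution is the more acidic of the two.
print(hydrogen_ion_concentration(8))  # 1e-08
print(hydrogen_ion_concentration(9))  # 1e-09
```

Because the scale is logarithmic, each unit of pH is a tenfold change in [H⁺], which is why a one-unit slip like this flips the comparison entirely.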
I pointed it out, as did others, and yet he carries on a few comments later as if nothing had happened. Some may argue “well, he didn’t see it”, and I would argue “always recheck your work”. Granted, it’s a small and simple mistake, but I can’t understand why he wouldn’t simply admit it, apologize, and move on. We’ve seen examples in comments where it is obvious that ego and vanity get in the way of clear thinking. I don’t think anyone is totally immune from that in the scientific process. Sometimes the need to be “right” exceeds accurate representation of results. Thus, the need for a statistical arbiter like the Statistical Assessment Service. Since they haven’t done any climate science work that I know of, perhaps MBH98 (Mann et al. and the “hockey stick”) would be a good place to start.
That being said, don’t be afraid to point out my mistakes.