Gatekeeping at Geophysical Research Letters

Dr. Judith Curry writes:

As the IPCC struggles with its inconvenient truth – the pause and the growing discrepancy between models and observations – the obvious question is: why is the IPCC just starting to grapple with this issue now, essentially two minutes before midnite of the release of the AR5?

My blog post on the Fyfe et al. paper triggered an email from Pat Michaels, who sent me a paper he submitted to Geophysical Research Letters in 2010 that did essentially the same analysis as Fyfe et al., albeit with the CMIP3 models.

Assessing the consistency between short-term global temperature trends in observations and climate model projections

Patrick J. Michaels, Paul C. Knappenberger, John R. Christy, Chad S. Herman, Lucia M. Liljegren, James D. Annan

Abstract.  Assessing the consistency between short-term global temperature trends in observations and climate model projections is a challenging problem. While climate models capture many processes governing short-term climate fluctuations, they are not expected to simulate the specific timing of these somewhat random phenomena—the occurrence of which may impact the realized trend. Therefore, to assess model performance, we develop distributions of projected temperature trends from a collection of climate models running the IPCC A1B emissions scenario. We evaluate where observed trends of length 5 to 15 years fall within the distribution of model trends of the same length. We find that current trends lie near the lower limits of the model distributions, with cumulative probability-of-occurrence values typically between 5% and 20%, and probabilities below 5% not uncommon. Our results indicate cause for concern regarding the consistency between climate model projections and observed climate behavior under conditions of increasing anthropogenic greenhouse-gas emissions.
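To make the abstract's procedure concrete, here is a minimal illustrative sketch in Python, with entirely synthetic data and invented variable names (it is not the authors' code): fit trends of each length from 5 to 15 years to every model run, then report the fraction of model trends that fall at or below the observed trend of the same length, i.e. the cumulative probability of occurrence.

# Illustrative only: synthetic "model" and "observed" series stand in for
# CMIP3 A1B runs and the observational record.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_runs = 30, 50

# Synthetic annual global-mean temperature anomalies (deg C):
# the model runs warm a little faster than the synthetic observations.
model_temps = 0.020 * np.arange(n_years) + rng.normal(0.0, 0.1, (n_runs, n_years))
obs_temps = 0.010 * np.arange(n_years) + rng.normal(0.0, 0.1, n_years)

def trailing_trend(series, length):
    """Least-squares trend (deg C per year) over the final `length` years."""
    years = np.arange(length)
    return np.polyfit(years, series[-length:], 1)[0]

for length in range(5, 16):
    model_trends = np.array([trailing_trend(run, length) for run in model_temps])
    obs_trend = trailing_trend(obs_temps, length)
    # Cumulative probability of occurrence: fraction of model trends
    # at or below the observed trend of the same length.
    p = np.mean(model_trends <= obs_trend)
    print(f"{length:2d}-yr trend: obs {obs_trend:+.3f} C/yr, model percentile {100 * p:.0f}%")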

The authors have graciously agreed to let me provide links to their manuscript [Michaels_etal_2010] and supplementary material [Michaels_etal_GRL10_SuppMat].

Drum roll . . . the paper was rejected. I read the paper (read it yourself), and I couldn’t see why it was rejected, particularly since it seems to be a pretty straightforward analysis that has been corroborated in subsequent published papers.

The rejection of this paper raised my watchdog hackles, and I asked to see the reviews.  I suspected gatekeeping by the editor and bias against the skeptical authors by the editor and reviewers.

Read more: Peer review: the skeptic filter



125 Comments
Mark Bofill
September 21, 2013 1:40 pm

Richard,
Thanks. I agree with your statement:

Whether or not Nick Stokes is right, there is a clear bias in the publishing criteria adopted by GRL. Clearly, WHO publishes, and not WHAT is submitted, is being adopted as a publication criterion.

I tried to make the point earlier that certain commenters seemed to be blindly making the same mistake as the reviewers with their various ‘shoot from the hip’ criticisms of the paper, but I probably expressed it poorly. Then again, what else should we expect from the reviewers, who may simply be a subset of the same population as the commenters and exhibit the same behavior? Still, it’s a little ironic.
Best regards, sir.

John Whitman
September 21, 2013 1:48 pm

Pamela Gray on September 21, 2013 at 1:23 pm
John, I like this one about the history of climate models, but there are others out there. You can imagine the back story related to this. It is a dog-eat-dog environment, as territorial as any pit bull. Too bad we don’t get that side of the story.
http://www.aip.org/history/climate/GCM.htm

– – – – – – – –
Pamela Gray,
Scanned through that link. Looks well referenced.
Thanks. I’ll go through it.
Looks like there were significant skeptics well before there was any inkling of a body like the IPCC.
John

John Whitman
September 21, 2013 2:38 pm

Judith Curry said,
[. . .]
Well, it seems like ‘skeptical’ papers require a larger number of reviewers (2-3 is typical), especially after one of the original reviewers ‘defects’ and ends up as a coauthor on the paper. I’ve gone through the reviews and discussed them with Michaels and Knappenberger, and we’ve agreed on the following summary of the second round of reviews:
[. . .]

And

JC message to James Annan: kudos, and thank you.

– – – – – – – – –
First, I found it very noteworthy that Judith Curry closed her post on GRL’s rejection of the Michaels et al. paper with a heartfelt thanks to Annan, across the perceived skeptic chasm. She shows that one can interact with those with whom one currently disagrees without the name-calling and rude behavior seen all too often. We can all learn from her example.
Second, the anonymity of reviewers invites the kind of gatekeeping (skeptic-filtering) behavior Judith investigates and that we have seen in CG1 & CG2. If reviewers knew their reviews and names would eventually be made public after the decision to publish or reject a paper, systematic long-term gatekeeping would be less likely.
John

Pamela Gray
September 21, 2013 2:40 pm

I have advocated a journal just for reviews, for the very purpose you mention. If reviews were public, papers would improve, the reviews would improve, and we would all benefit from transparent science.

Nick Stokes
September 21, 2013 2:52 pm

Mark Bofill says: September 21, 2013 at 12:54 pm
I think you’re missing my point here. It isn’t whether the statistical situation is correctly described; it’s whether the result is interesting enough to be publishable. If you can say that models and weather have definitely diverged, then fine. But if you can only say, we’re in the zone where maybe yes, maybe not, then it would be natural for an editor to say, come back when it’s clearer.
And if the weather has turned in a way to undermine the case for yes, it’s much harder again. Lucia effectively admitted this. The usual response to a rejection is to submit elsewhere. But she said that they should keep the ms until the evidence was more favorable, and it seems they did.

richardscourtney
September 21, 2013 3:03 pm

Nick Stokes:
Your argument at September 21, 2013 at 2:52 pm is spurious. Mark Bofill is right (and I suspect you know he is).
The matter is explained for you in my post at September 21, 2013 at 1:12 pm
http://wattsupwiththat.com/2013/09/20/gatekeeping-at-geophysical-research-letters/#comment-1422966
Richard

Pamela Gray
September 21, 2013 3:22 pm

While Nick and I have not seen eye to eye on many climate issues, on publishing your work he is pretty accurate. If it bleeds it leads, even in science. Fortunately, we are awash in journals. Keep submitting or hold on to it for a later submission. There isn’t anything particularly evil about it. It just is what it is. Journal employees have to eat. And they eat when people subscribe to the journal. Does it suck? Yep. So be prepared to be injured in climate science publications.

Nick Stokes
September 21, 2013 3:22 pm

richardscourtney says: September 21, 2013 at 3:03 pm
“The matter is explained for you in my post..”

But wrongly.
“The later paper by Fyfe et al.(2013) which GRL accepted and published used similar statistical analysis”
GRL did not publish that paper.
Further, the Fyfe et al. paper did include some similar statistical analysis, but did considerably more. It subdivided the analysis, looking for causative factors (e.g. ENSO, volcanoes, etc.).

richardscourtney
September 21, 2013 3:34 pm

Nick Stokes:
re your post at September 21, 2013 at 3:22 pm.
The two papers analysed the same data using similar methods and reached the same conclusions. The addition of a few guesses as to why the models are failing is a trivial extension in the later paper.
Richard

richardscourtney
September 21, 2013 3:37 pm

Pamela Gray:
re your post at September 21, 2013 at 3:22 pm.
As you often do, you completely miss the point under discussion. It is not relevant whether Michaels et al. could have submitted to another journal.
This thread is about the different treatment of two similar papers by THE SAME JOURNAL.
Richard

Mark Bofill
September 21, 2013 3:39 pm

Nick,

Nick Stokes says:
September 21, 2013 at 2:52 pm …

To cross the t’s and dot the i’s, I expect you are correct about the second submission. There is no question of gate keeping in the case of the second submission. I don’t think this is disputed by anyone, and I’m not disputing it here.
Talking about the first submission, Lucia says:
As for what reviewers seemed to not like: They seemed to not like the result. One reviewer suggested there must be some extra uncertainty we weren’t accounting for and that we could call in Zwiers to settle the argument about this extra uncertainty. Now in 2013, three years after we wrote our paper, this extra uncertainty happens not to be included in a paper whose co-author is Zwiers (as it should not be included, because including it would be double counting).
Furthermore, she says: Zwiers published almost essentially the same thing, but at a point in the cycle where the rejection is less obvious than at the time we were submitting.
The problem here had nothing to do with an editor saying come back when it’s clearer. It apparently was not clearer when the paper Z coauthored was published.
You’re correct that I got sidetracked off of your main point. This happened because it doesn’t bug me that we disagree about the gate keeping. It bugs me that you offer arguments about the statistics that you know perfectly well are so misleading that they might as well be false. I mean really Nick; the flicker between accept and reject involved is entirely due to our arbitrary convention about what level of confidence we call certainty. But knowing as you do how this stuff works, you pretend that hitting 95% confidence and then dropping back to 93.2% confidence (or whatever the numbers were) means something. It’s something warmists often complain about skeptics doing; people start saying things like ‘Merchants of Doubt’ when the skeptic side uses tactics like this for some reason. Yet here you are, using the same tactics. Yeah, it bugs me, but I guess that’s my problem. I just don’t understand why somebody with your obvious intelligence and education indulges in crap like that.
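Purely as an illustration of that “flicker” point (not a calculation from either paper; the series below is invented), a fixed-length trailing trend whose p-value sits near the 0.05 cutoff can flip between “significant” and “not significant” as a few months of data are added, even though the estimated slope barely moves:

# Synthetic monthly series with a weak trend chosen to sit near the
# detection threshold; nothing here comes from the papers under discussion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_months = 200
series = 0.0005 * np.arange(n_months) + rng.normal(0.0, 0.1, n_months)

window = 120  # trailing 10-year window
for end in range(window, n_months + 1, 6):
    segment = series[end - window:end]
    slope, _, _, p_value, _ = stats.linregress(np.arange(window), segment)
    verdict = "significant" if p_value < 0.05 else "not significant"
    print(f"month {end:3d}: slope {slope:+.5f}/mo, p = {p_value:.3f} ({verdict})")
# The verdict depends only on which side of the arbitrary 0.05 convention
# the p-value lands, not on any real change in the underlying process.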

Nick Stokes
September 21, 2013 3:51 pm

richardscourtney says: September 21, 2013 at 3:37 pm
“This thread is about the different treatment of two similar papers by THE SAME JOURNAL.”

Shouting doesn’t make it any more right. It wasn’t the same journal.

richardscourtney
September 21, 2013 4:06 pm

Nick Stokes:
Thank you for correcting me in your post at September 21, 2013 at 3:51 pm.
You are correct.
Michaels et al. was rejected by GRL.
Fyfe et al. was published by Nature Climate Change.
I misunderstood the above essay. My bad.
I owe Pamela Gray an apology and I freely offer it.
Richard

September 21, 2013 4:13 pm

richardscourtney is a class act.

Nick Stokes
September 21, 2013 6:27 pm

Mark Bofill says: September 21, 2013 at 3:39 pm
“It bugs me that you offer arguments about the statistics that you know perfectly well are so misleading that they might as well be false.”

Which are they? I simply said that a reviewer who sees something asserted as true within statistical significance when, at the time he looks at it, it is not true, might have justifiable doubts about whether it should be published.
But this is misleading:
“It apparently was not clearer when the paper Z coauthored was published.”
People have been asserting over and over that the Zwiers paper was just the same. But I can’t see any evidence that they have read it (there are no links). And it’s not at all true.
Firstly, Fyfe et al. were looking at CMIP5, not CMIP3. The trends, and the discrepancy, are greater. So yes, it’s clearer for that reason. But secondly, they don’t do the same analysis at all. Fyfe et al. is a much more sophisticated paper.
My main statistical objection to the original study was that it tested observed weather against bounds calculated from model results. This is an obvious failing, and I expect it was the basis for some of the referee comments. I raised it with James Annan, who said that they supposed the variability of models and weather would be the same, but gave no evidence that it was. In fact, CMIP3-generation models were well understood not to reproduce sources of variation like ENSO well.
Fyfe et al. tested against bounds derived from HadCRUT4 realizations, which is not subject to that objection.
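To make the distinction concrete, a toy sketch with synthetic data only (not either paper’s actual method): (a) score the observed trend against the spread of model trends, versus (b) score the model-mean trend against a spread of observational realizations, analogous to HadCRUT4 ensemble members. The two tests draw their bounds from different sources of variability and can give quite different answers.

# Toy illustration with made-up numbers; trends in deg C per year.
import numpy as np

rng = np.random.default_rng(2)
n_years = 15

def trend(series):
    """Least-squares slope of an annual series."""
    return np.polyfit(np.arange(len(series)), series, 1)[0]

# Synthetic model runs: faster warming, larger run-to-run spread.
model_runs = 0.025 * np.arange(n_years) + rng.normal(0.0, 0.08, (40, n_years))
# Synthetic observational realizations: slower warming, smaller spread.
obs_realizations = 0.010 * np.arange(n_years) + rng.normal(0.0, 0.04, (100, n_years))

model_trends = np.array([trend(run) for run in model_runs])
obs_trends = np.array([trend(real) for real in obs_realizations])

# (a) Observed best-estimate trend scored against the model distribution.
pct_obs_in_models = 100 * np.mean(model_trends <= obs_trends.mean())
# (b) Model-mean trend scored against the observational distribution.
pct_model_in_obs = 100 * np.mean(obs_trends <= model_trends.mean())

print(f"(a) obs trend percentile within model spread: {pct_obs_in_models:.0f}%")
print(f"(b) model-mean trend percentile within obs spread: {pct_model_in_obs:.0f}%")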

Mark Bofill
September 21, 2013 6:50 pm

Nick,

Which are they?

Here you say:

Now statistical significance is supposed to be what enables you to say something that won’t be invalidated by future chance happenings. So it’s a lot harder to defend if four months later, the picture does look a lot different.

And here:

I’m applying what you folks say you favor – the test of results. They have used statistical significance tests which are supposed to show results are robust, but four months later they are different.

Please note that I did not claim your statements were false, but that they were so misleading that they might as well be false.
We can dance to this tune all night Nick, but I’m sure it’s getting just as stale for everyone reading as it is for you and me. If you don’t want to admit that it’s misleading to offer stuff like this without explaining why this is the case, I can’t force you to do so.

People have been asserting over and over that the Zwiers paper was just the same. But I can’t see any evidence that they have read it (there are no links). And it’s not at all true.

I stated earlier that I was taking Lucia at her word in this regard. I haven’t read the Zwiers paper. On this and the subsequent point you raise I’m not competent to try to refute you; I can’t call, so I fold.

Mark Bofill
September 21, 2013 6:54 pm

Nick,
Let me add, I appreciate the time you spent discussing this with me. Thanks.

Nick Stokes
September 21, 2013 8:17 pm

Thanks Mark,
I’m sure we’ll be talking again.

John Whitman
September 21, 2013 11:42 pm

Judith Curry said,
“As the IPCC struggles with its inconvenient truth – the pause and the growing discrepancy between models and observations – the obvious question is: why is the IPCC just starting to grapple with this issue now, essentially two minutes before midnite of the release of the AR5?”
{bold emphasis by me-JW}

– – – – – – –
Why? My most favorite of all questions.
The crisis at the IPCC related to AR5, one sufficiently traumatic that they procrastinated until it was too late to deal with it, is not the problematic climate models. Their AR5 crisis is not the potential falsification of the models. Look at the history of climate models from the 1960s until now: modeling efforts are still viewed principally as a long-term work in progress, and a work in progress is not a crisis for them.
For the IPCC, the AR5 crisis lies in their own lack of confidence in continuing to sell fossil-fuel alarm, owing to their self-imposed isolation from independent, open and transparent dialog. Their crisis is that they do not know how to engage with the broader community that includes skeptics; skeptics who have succeeded spectacularly in engaging and communicating well with everyone.
The models only pose an interesting scientific dilemma, but the AR5 crisis is its inability to handle criticism because of the IPCC’s self-imposed isolationism.
John

rogerknights
September 22, 2013 7:25 am

… the AR5 crisis is its inability to handle criticism . . .

Its crisis is that it can’t handle the truth.

Crabby
September 22, 2013 10:52 am

When the money runs out or is worthless, they will be swinging from the windmills! Unfortunately it will be too late for us as well, because the world will be in anarchy thanks to these SOBs.

September 22, 2013 9:36 pm

Pamela Gray and richardscourtney:
Lacking the events that underlie it, a model is scientifically and logically nonsensical. Can you identify the events that underlie the climate models of IPCC AR4?
