A courtesy note ahead of publication for Risbey et al. 2014

People send me stuff. In this case I have received an embargoed Nature paper and press release from another member of the news media who wanted me to look at it.

The new paper is scheduled to be published in Nature and is embargoed until 10 AM PDT Sunday morning, July 20th. That said, Bob Tisdale and I have been examining the paper, which oddly includes co-authors Dr. Stephan Lewandowsky and Dr. Naomi Oreskes and is on the topic of ENSO and “the pause” in global warming. I say oddly because neither Lewandowsky nor Oreskes concentrates on physical science; they direct their work towards psychology and science history, respectively.

Tisdale found a glaring, potentially fatal oversight, which I verified, and as a professional courtesy I have notified two of the people listed as authors on the paper. It has been 24 hours, and I have had no response from either. Since it is possible that they have not received these emails, I thought it would be useful to post my emails to them here.

It is also possible they are simply ignoring the email. I just don’t know. As we’ve seen previously in attempts at communication with Dr. Lewandowsky, he often turns valid criticisms into puzzles and taunts, so anything could be happening behind the scenes here if they have read my email. It would seem to me that they’d be monitoring their emails ahead of publication to field questions from the many journalists who have been given this press release, so I find it puzzling there has been no response.

Note: for those who would criticize my action as “breaking the embargo”: I have not even named the paper’s title or DOI, nor used any language from the paper itself. If I were an author, and somebody spotted what could be a fatal blunder that made it past peer review, I’d certainly want to know about it before the press release goes out. It is about 24 hours to publication, so they still have time to respond, and hopefully this message on WUWT will make it to them.

Here is what I sent (email addresses have been link disabled to prevent them from being spambot harvested):

===============================================================

From: Anthony

Sent: Friday, July 18, 2014 9:01 AM

To: james.risbey at csiro.au

Subject: Fw: Questions on Risbey et al. (2014)

Hello Dr. Risbey,

At first I had trouble finding your email address, which is why I sent it to Ms. Oreskes first. I dare not send it to Professor Lewandowsky, since, as we have seen by example, all he does is taunt people who have legitimate questions.

Can you answer the question below?

Thank you for your consideration.

Anthony Watts

—–Original Message—–

From: Anthony

Sent: Friday, July 18, 2014 8:48 AM

To: oreskes at fas.harvard.edu

Subject: Questions on Risbey et al. (2014)

Dear Dr. Oreskes,

As a climate journalist running the most viewed blog on climate, I have been graciously provided an advance copy of the press release and paper Risbey et al. (2014), which is being held under embargo until Sunday, July 20th. I am in the process of helping to co-author a rebuttal to Risbey et al. (2014). I think we’ve spotted a major blunder, but I want to check with a team member first.

One of the key points of Risbey et al. is the claim that the selected 4 “best” climate models could simulate the spatial patterns of the warming and cooling trends in sea surface temperatures during the hiatus period.

But reading and re-reading the paper we cannot determine where it actually identifies the models selected as the “best” 4 and “worst” 4 climate models.

Risbey et al. identifies the 18 original models, but not which 8 of them are the 4 “best” and 4 “worst”.

Risbey et al. presented histograms of the modeled and observed trends for the 15-year warming period (1984-1998) before the 15-year hiatus period in cell b of their Figure 1. So, obviously, that period was important. Yet Risbey et al. did not present how well or poorly the 4 “best” models simulated the spatial trends in sea surface temperatures for the important period of 1984-1998.

Is there some identification of the “best” and “worst” referenced in the paper that we have overlooked, or is there a reason for this oversight?

Thank you for your consideration.

Anthony Watts

WUWT

============================================================

UPDATE: as of 10:15 AM PDT July 20th, the paper has been published online here:

http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate2310.html

Well-estimated global surface warming in climate projections selected for ENSO phase

Abstract

The question of how climate model projections have tracked the actual evolution of global mean surface air temperature is important in establishing the credibility of their projections. Some studies and the IPCC Fifth Assessment Report suggest that the recent 15-year period (1998–2012) provides evidence that models are overestimating current temperature evolution. Such comparisons are not evidence against model trends because they represent only one realization where the decadal natural variability component of the model climate is generally not in phase with observations. We present a more appropriate test of models where only those models with natural variability (represented by El Niño/Southern Oscillation) largely in phase with observations are selected from multi-model ensembles for comparison with observations. These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.

Of interest is this:

Contributions

J.S.R. and S.L. conceived the study and initial experimental design. All authors contributed to experiment design and interpretation. S.L. provided analysis of models and observations. C.L. and D.P.M. analysed Niño3.4 in models. J.S.R. wrote the paper and all authors edited the text.

The rebuttal will be posted here shortly.

UPDATE2: rebuttal has been posted

http://wattsupwiththat.com/2014/07/20/lewandowsky-and-oreskes-are-co-authors-of-a-paper-about-enso-climate-models-and-sea-surface-temperature-trends-go-figure/

Brad

Anthony,
Very well written!! Nothing “extra” added, simply asking a question.
It will be interesting to see if you get a response, or the release gets pushed back.

Jeff D.

Friends helping friends.

Bloke down the pub

Always with the negative waves. I’m sure that it couldn’t possibly make any difference to the results.

Jimmy Haigh.

Best? Worst? They’re all as bad as each other, so does it really matter? With climate models it’s more about “artistic impression” than reality. Think ice skating versus ice hockey…

MattN

I have 4 best guesses as to the response.

Crispin in Waterloo

It will of course be of interest to me to know how close the best and worst are to the actual temperatures as far as they are known.
Nothing could be better for us all than a validated model in the field of climate science.

Anthony,
I was going to ask if you were sent the supplementary data that so often accompanies papers published in Nature, but it is usual for papers relying on separate supplements to refer the reader to them, and since this one apparently does not, I am supposing this is not an oversight of the sender in this case. Very well handled.
REPLY: I asked the journalist if an SI was included, and none was listed. Still, such an important labeling of the best and worst models, central to the claim of the paper, surely would not be relegated to the depths of an SI. – Anthony

Justthinkin

So we have a shrink and a history teacher pretending to be climate “scientists”? Just how does one get in on this scam?

Mark Bofill

Lew again huh. He’s probably only doing this so he can write some stupid study about the reception the paper receives.

john robertson

I guess Loo is out of paper again, perhaps he could be deterred by using high gloss instead of newsprint.

Jon

According to
http://www.mdpi.com/2076-0787/3/3/299/pdf
this is actually a debate around the politically/ideologically motivated use of science as a tool to promote political ideology and solutions, and science resisting this:
“2. ‘The Plan’
For more than 25 years the conventional view has been that an international political solution to climate change can be negotiated if driven by the engine of science. That is, if a strong enough scientific consensus on the causes and consequences of anthropogenic climate change could be forged and sustained, then the compelling force of such rationality would over-ride the differences in worldviews, beliefs, values and ideologies which characterise the human world. Such a scientific consensus would bring about the needed policy solutions. This is the “If-then” logic of computer programming, the conviction that the right way to tackle climate change is through what Dan Sarewitz at Arizona State University has called “The Plan” [8]. And there are those who still believe in this project. They excoriate others who obstruct and obscure this pure guiding light of rationality—a position adopted, for example, by Naomi Oreskes and Erik Conway in their recent book Merchants of Doubt [9].”

Eliza

WUWT mistake again..Throwing your pearls to the pigs. I would have kept the info, allowed it to be published and THEN only then nailed both the authors and The Journal. Huge mistake once again by WUWT re SG
REPLY: your opinion is being given all the due consideration it deserves, thank you – Anthony

Non Nomen

You caught them napping, I suppose.
It might be helpful to find out the names of the peer reviewers…

Steven Mosher

Omission is a better word than blunder.

Eliza

If it has been published I retract above LOL

pokerguy

“It is also possible they are simply ignoring the email.”
Well let’s put it this way. Had your email contained effusive praise for their brilliant work, they’d have answered you in a New York minute.

wobble

Would it make sense to also send your questions to your contacts at Nature who wanted you to look at it? Or were they simply attempting to generate media interest in the article rather than trying to improve its quality?
REPLY: to be clear, this was sent to me from another journalist, not the Nature editors or PR department – Anthony

George Steiner

Eliza says:
July 19, 2014 at 10:32 am
Mr. Watts is interested in collecting more brownie points towards sainthood. He is not interested in effective opposition to the CO2 scam.
REPLY: your ridiculous opinion is noted, and wrong – just watch and see what happens. – Anthony

DontGetOutMuch

Anthony, the best models are secret, as you would only try to poke holes in them. We should just take Lewandowsky’s word for it, after all he is a doctor.
PS. I hope you did not rupture anything important snickering at my obvious sarcasm…
Hmmm… Snicker Snark beware the Jabberydork!
Oh looky, time for me meds!

G. E. Pease

Anthony,
My guess is that your notifications went into the two individuals’ junk/spam mail, and they do not check this daily (or ever?).
REPLY: I check my spam folders daily, but noting it here almost certainly ensures they will see it, even if my emails are relegated to spam. – Anthony

M Courtney

It would appear I am susceptible to conspiracy theories as I can’t help wondering what contribution Oreskes and Lew could have made to this paper.
Is it possible that the choice of “best” and “worst” is not calculated by comparison with the real world but rather with socially constructed viewpoints? They could contribute to a subjective choice of models.
In which case, the whole thing becomes as circular as the flight of the oozlum bird.
But I might be a conspiracy theorist…

Peter Miller

Lew must suffer from that embarrassing syndrome where individuals suffer an overwhelming urge to have their opinions shot down in flames.
I think psychologists call it ROOFOFF – Recursive Overwhelming Obsessive Fury Over Fanciful Facts.

M Courtney

By the way:
New Scientist reported on Lewandowsky’s Recursive Fury paper in its Feedback section this week.
New Scientist found no fault in the paper and reported that it proved sceptics are all nutters and the complaints could be ignored as it was proven that sceptics are all nutters and that the complaints are actually more proof that sceptics are all nutters…
They didn’t mention that the paper was debunked.
Presumably next week Feedback will include “Buzz Aldrin believes the Moon Landings were faked” as apparently they believe he does.

Joe G

“Pause? Dat ain’t no steenkin’ pause! Dat is the climate engine getting a tune up and revving its freakin’ motor to run right over you steenkin’ denialists!”
Remember- The cold Antarctic glacial runoff is feeding the expanding Antarctic sea ice extent. The oceans are reaching their max capacity for storing CO2 without causing mass extinctions. We can’t predict the weather for 10 days out yet we sure as heck can model the climate for decades in the future because hey climate is not weather. 🙂

Cheshirered

Yet *another* explanation for the Pause – is that 14 now? Amazing, considering the science was ‘settled’.

David L. Hagen

Excellent questions that the reviewers should have caught.

bernie1815

Is this the same James Risbey who wrote this paper in 2011: http://www.marine.csiro.au/~ris009/pubfiles/cc_know_ign_clires.pdf ? If so, it is hard for me to square what seems to be the thrust of the current paper with “The ability of CGCMs to simulate changes in the 3d flow in the atmosphere is severely hampered by the lack of resolution in the ocean component of current CGCMs. The ocean models in CGCMs used for climate projections do not resolve mesoscale eddies. This means that they don’t resolve the main source of dynamic instability of the flow in these models and only very crudely parameterize some of the components of that instability (Section 3.2).” If it is the same author, did he make a breakthrough or do CGCMs at the global level not suffer from these same limitations?

the selected 4 “best” climate models
============
the obvious mechanism is that they checked all the models and cherry picked the 4 that accidentally happened to have the best fit with observations.
as has been shown repeatedly, when you cherry pick a sample from a larger population because they happen to match observations, this does not demonstrate the sample has any skill at predicting the observations. the laws of probability tell us that some members of a population will match the observations simply by chance.
thus, for example, the hockey stick, and similar results. selection on the dependent variable leads to spurious correlations.
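To see the point in miniature, here is a quick sketch (a toy simulation with invented numbers, not the paper’s models or data): eighteen skill-free “models” made of pure noise are ranked by how well their 15-year trend matches an equally noisy “observation” series; the four that fit best over the first window do no better than the rest over the next window.

```python
# Toy illustration only: every series is random noise, so no model has skill.
import numpy as np

rng = np.random.default_rng(42)
n_models, n_years = 18, 30
obs = rng.normal(0.0, 0.1, n_years)                 # "observations": pure noise
models = rng.normal(0.0, 0.1, (n_models, n_years))  # 18 skill-free "models"

def trend(series):
    # least-squares slope per time step
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0]

# Rank models by trend error vs. observations over window 1 (years 0-14).
err1 = np.abs([trend(m[:15]) - trend(obs[:15]) for m in models])
best4 = np.argsort(err1)[:4]          # the 4 "best" by in-sample fit

# Score the same models over window 2 (years 15-29).
err2 = np.abs([trend(m[15:]) - trend(obs[15:]) for m in models])
print("best-4 mean trend error, window 2:", err2[best4].mean())
print("all-18 mean trend error, window 2:", err2.mean())
# Averaged over many seeds the two numbers agree: selecting on past fit
# buys no out-of-sample skill when there is no skill to find.
```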

What a discouraging start to a lovely summer weekend; an invitation to review the latest weeping excrescence from the anti-science Púca worshippers. Needless to say, given the authors, it’s a very dubious trap.
Now because I commented that, I must be a conspiracy nut. Not that Oreskes and Lewnydowsky are capable of truly being conspiracists, because they’re blind, oblivious parasitic fleas (Xenopsylla cheopis) chock full of enterobacteria (Yersinia pestis) infesting the most disgusting rats. A proper conspiracist must be capable of maintaining multiple layers of deceit, whereas the CAGW believers tend to stick with outbursts of opprobrium and weird application of statistics to poorly kept data.
Speaking of poorly kept data. Anyone else suspect that the tango twins mentioned above are actually waiting for skeptics to thresh the models looking for a so-called best four?
What factors truly make any model best? Just because one accidentally seems to replicate a chosen period? Running what entry positions? Does the model return the same results every time?
Will all data be posted?
Will all model code be published?
Anthony: Your response is well made and spoken. You are treating them as proper scientists. As you’ve demonstrated so often, you are responding as a gentleman would respond to gentlefolk.
Be careful with any next steps. Remember a previous Lewnydoodoo involved a deliberate deception on who sent what. The lack of a response is anomalous or perhaps intentional.
Good Luck!

Chris B

Eliza says:
July 19, 2014 at 10:32 am
WUWT mistake again..Throwing your pearls to the pigs. I would have kept the info, allowed it to be published and THEN only then nailed both the authors and The Journal. Huge mistake once again by WUWT re SG
————————————————
Perhaps that’s the difference between an honest skeptic and dishonest ideologues.

Mick

You do realize the presence of non physical scientists Oreskes and Lewandowsky on the list of authors is probably so the BBC can treat them as valid “climate experts”. I guess we have to look forward to a period of their “views” being paraded by the Beeb as consensus climate science.

Gunga Din

Eliza says:
July 19, 2014 at 10:32 am
WUWT mistake again..Throwing your pearls to the pigs. I would have kept the info, allowed it to be published and THEN only then nailed both the authors and The Journal. Huge mistake once again by WUWT re SG

=================================================================
A mistake? Not if the goal is accuracy and honesty in the field.

Björn from Sweden

Oreskes on climate science???
Small world, not many rats left onboard the sinking AGW-vessel.
This can only be an act of desperation.
Anyway, don’t expect a more helpful response than:
“Why should I make the data available to you, when your aim is to try and find something wrong with it…”

One of the key points of Risbey et al. is the claim that the selected 4 “best” climate models could simulate the spatial patterns of the warming and cooling trends in sea surface temperatures during the hiatus period.
Well, I suppose I will have to wait for the paper, but the obvious follow-up question would be how well did they simulate the spatial patterns prior to the hiatus period? Further, how well did they simulate spatial patterns other than sea surface both before and after the hiatus period? Four models getting one part of the problem right for one part of the time = FAIL.
What might be equally interesting is if this provokes a couple of other possible reactions:
1. The modelling groups fingered as “the worst” defending their position and in so doing, attacking this paper’s credibility.
2. If the paper holds up, and the four worst are really that much different and that bad, then what is the excuse for continuing to use them as part of the ensemble mean? If the paper holds up, these models should be dropped from the ensemble mean for their inaccuracy, the side effect of which would be to lower the sensitivity calculation of the ensemble.

Let’s see.
We know there are 4 best and 4 worst.
It might not be an oversight to not name them.
Hint. Modelers and those who evaluate models generally don’t identify the best versus the worst.
Leads to model wars.
Mine is bigger. No mine is.. Blah blah blah
There are some authors who do name names however.

Steven Mosher

Hoffer gets at one reason for not naming names.
But one can not simply throw out the worst
The issue is the four worst on this test will be the best on
Some other test

Anthony, I think your approach was the best of all the options, and I would also agree with Mr. Mosher that omission is the better perspective, but the problem with Mr. Mosher’s comment was that you used it in regard to one of your papers and not that of Risbey. You wrote “oversight” to that effect.

Björn from Sweden

” Mick says:
July 19, 2014 at 11:17 am
You do realize the presence of non physical scientists Oreskes and Lewandowsky on the list of authors is probably so the BBC can treat them as valid “climate experts”. I guess we have to look forward to a period of their “views” being paraded by the Beeb as consensus climate science.”
Now that’s a brilliant observation, very good Mick!
You nailed it!

I seem to recall during the climategate thing that there was some controversy over which data were used for a particular analysis; the selection of stations (several out of very many, if I recall correctly) was neither published nor furnished upon request. The reply was “we provided you with all the data”.
We could be seeing the early days of a similar kind of reply here.

A Generalist

Hmm. I’ve got an undergraduate degree in political theory, so it seems that qualifies me to co-author a paper on climate change? I can certainly pitch in lessons learned from Machiavelli. Whoops! Seems they’ve already read the Cliff Notes! Anthony, I hope they don’t pursue legal action against you regarding the embargo. But I wouldn’t be at all surprised if they did.

Steven Mosher;
Steven Mosher says:
July 19, 2014 at 11:31 am
Hoffer gets at one reason for not naming names.
But one can not simply throw out the worst
The issue is the four worst on this test will be the best on
Some other test
>>>>>>>>>>>>>>>>>>>>
Ah yes. If the model got something right, we should keep its results across the board and average the parts known to be wrong into the ensemble anyway. Pffft.

Stephen Richards

Steven Mosher says:
July 19, 2014 at 11:31 am
Hoffer gets at one reason for not naming names.
But one can not simply throw out the worst
The issue is the four worst on this test will be the best on
Some other test
Steven, you been eating your weetabix again. That is a really important point you make. Good on yer. Only thing I would change is “will” to might or could or maybe. 🙂

ferdberple

But one can not simply throw out the worst
The issue is the four worst on this test will be the best on
Some other test
================
thus demonstrating that it is chance, not skill that is determining the model results.
think of it this way. you are back in school. every day there is a math quiz. the worst 4 students one day are not going to be the 4 best students another day, no matter how many times you give a quiz. that is because the scores on the math quiz reflect skill.
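The quiz analogy is easy to check with a small simulation (hypothetical numbers throughout): when scores are pure chance, quiz one’s top 4 overlap quiz two’s top 4 only about as much as random draws would; when scores reflect skill, the same names keep turning up.

```python
# Toy quiz simulation: does the top-4 ranking persist from one quiz to the next?
import numpy as np

rng = np.random.default_rng(0)
n_students, n_trials = 18, 2000

def mean_top4_overlap(skill_weight):
    """Average count of quiz-1 top-4 students who are also top-4 on quiz 2."""
    ability = rng.normal(0.0, 1.0, n_students)  # fixed per-student skill
    overlaps = []
    for _ in range(n_trials):
        quiz1 = skill_weight * ability + rng.normal(0.0, 1.0, n_students)
        quiz2 = skill_weight * ability + rng.normal(0.0, 1.0, n_students)
        top1 = set(np.argsort(quiz1)[-4:])
        top2 = set(np.argsort(quiz2)[-4:])
        overlaps.append(len(top1 & top2))
    return np.mean(overlaps)

print("pure chance  (skill weight 0):", mean_top4_overlap(0.0))  # ~0.9 of 4
print("mostly skill (skill weight 3):", mean_top4_overlap(3.0))  # close to 4
```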

richard verney

It is very difficult to properly understand what is going on without sight of the paper.
It does surprise me that, if reliance is being placed upon the “best” 4 models, full details of those models (how they are tested and validated, and what they say over their entire range) are not set out in the paper, along with the reason for selecting those particular models. Are they superior in some way, or is it just by chance that their outputs better reflect observational data?
Whilst I can see both the pros and the cons of contacting the authors with your enquiry prior to publication of the paper, and I can therefore see why you considered that to contact them is the best approach (although others may disagree, I myself consider it is the best approach), I am not sure why you would wish to share this with us, the readers of your blog, prior to the publication of the paper.
When the paper is published, you could have provided a copy of the paper on this blog, and at the same time set out your (and Bob’s) comments, and explain that you had contacted the authors in advance of publication but they had not responded. That might have been the most ‘saintly’ approach, since it is possible that people will not like the fact that you have referred to an embargoed paper, in advance of publication, and in future you may not be given copies of such papers in advance. Not a criticism, just a thought.
Also, I am unsure, from a tactical perspective, why in your message you would say “.. I am in the process of helping to co-author a rebuttal…” since this may cause the shutters to go up, whereas a more neutral response not mentioning this fact, but merely raising your enquiry regarding the models might be more likely to elicit a constructive response from the authors. As soon as you mention rebuttal, the authors no doubt jump into defence mode. That could explain their lack of response.
Again, not a criticism per se, just my thoughts.

“ferdberple says:
July 19, 2014 at 12:17 pm
But one can not simply throw out the worst
The issue is the four worst on this test will be the best on
Some other test
================
thus demonstrating that it is chance, not skill that is determining the model results.
################
Not really. Clearly you haven’t looked at the matter.

“Steven, you been eating your weetabix again. That is a really important point you make. Good on yer. Only thing I would change is “will” to might or could or maybe. 🙂”
Gavin and others have made the same point. It’s a known problem because of the democracy of the models.
Not really headline news.

Hoffer:
“Ah yes. If the model got something right, we should keep its results across the board and average the parts known to be wrong into the ensemble anyway. Pffft.”
I wish that Willis were here to tell you to quote my words.
Simple fact is that the average of models is a better tool than any given one.
Deal with it.
Same with hurricane prediction in some cases.
Does it make sense to average models? Probably not. But you get a better answer that way.
So until someone devises a test to score models… that is what you have:
pragmatics.
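The averaging claim does hold under one strong assumption, namely that each model’s error is independent noise around the truth; a bias shared by all the models would survive the averaging untouched. A minimal sketch of the uncorrelated case:

```python
# Toy ensemble: independent errors mostly cancel in the multi-model mean.
import numpy as np

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0.0, 4.0, 60))        # stand-in "climate" signal
models = truth + rng.normal(0.0, 0.5, (18, 60))  # 18 noisy, unbiased "models"

def rmse(x):
    return np.sqrt(np.mean((x - truth) ** 2))

individual = [rmse(m) for m in models]
ensemble = rmse(models.mean(axis=0))

print("typical individual RMSE:", np.mean(individual))  # about 0.5
print("ensemble-mean RMSE:     ", ensemble)             # about 0.5/sqrt(18)
```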

I agree with others that it will be useful to know which the best are for this particular feature of climate, because they can then be tested against other features. If they aren’t the best for other features then it would seem more likely they match by chance than by skill. Surely the AGW community has somehow to narrow down the vast range of models to a handful that they consider to be most skillful? Discarding the obvious outliers would demonstrate progress.

They’ve colluded before, so here is some context for the trio. Note the ages and probable experience.
Whack javascript off first:
http://www.scientificamerican.com/author/stephan-lewandowsky-james-risbey-and-naomi-oreskes/

so until someone devises a test to score models…
We have a test score for models: accurate, repeated predictions.
Reality = 1
Models = 0