Gaming the peer review system: IPCC scientists behaving badly


How IPCC scientists interfere with publication of inconvenient scientific results

By David H. Douglass, Professor of Physics, University of Rochester, New York, and John R. Christy, Distinguished Professor, Atmospheric Science, University of Alabama at Huntsville

  • In this article, reprinted from The American Thinker, two eminent Professors reveal just one of the many seamy stories that emerge from the Climategate emails. A prejudiced journal editor conspires with senior IPCC scientists to delay and discredit a paper by four distinguished scientists demonstrating that a central part of the IPCC’s scientific argument is erroneous.

The Climategate emails from the Climatic Research Unit at the University of East Anglia in England have revealed how the normal conventions of the peer-review process appear to have been compromised by a Team of “global warming” scientists, with the willing cooperation of the editor of the International Journal of Climatology, Glenn McGregor.

The Team spent nearly a year preparing and publishing a paper that attempted to rebut a previously published paper in that journal by Douglass, Christy, Pearson and Singer. Our paper, reviewed and accepted in the traditional manner, had shown that the IPCC models that predicted significant “global warming” in fact largely disagreed with the observational data.

We will let the reader judge whether this team effort, revealed in dozens of emails and taking nearly a year, involves inappropriate behavior including (a) unusual cooperation between authors and editor, (b) misstatement of known facts, (c) character assassination, (d) avoidance of traditional scientific give-and-take, (e) using confidential information, (f) misrepresentation (or misunderstanding) of the scientific question posed by us in our paper, (g) withholding data, and more.

The Team is a group of climate scientists who frequently collaborate and publish papers that often support the hypothesis of human-caused global warming. For present purposes, leading members of the Team include Ben Santer, Phil Jones, Timothy Osborn, and Tom Wigley, with lesser roles for several others.

Introduction

We submitted our paper to the International Journal of Climatology on 31 May 2007. The paper was accepted four and a half months later, on 11 October. The page-proofs were accepted on 1 November. The paper was published online on 5 December. However, we had to wait very nearly a year after online publication, until 15 November 2008, for publication of the print version of the paper.

Ben Santer and 17 members of the Team subsequently published a paper intended to refute ours. It was submitted to the International Journal of Climatology on 25 March 2008. It was revised on 18 July, accepted two days later, published online on 10 October, and published in print on 15 November, little more than a month after online publication.

This story draws on various Climategate emails and on our own personal knowledge of events and issues. References are made to items in an appendix, arranged chronologically. Each of the emails has an index number taken from the compilation at http://www.eastangliaemails.com/index.php

The story

Our record of this story begins when Andrew Revkin, a reporter for the New York Times, sent three Team members an email dated 30 November 2007, to which he attached the page-proofs of our paper, which we had not sent to him. His email to the Team is dated just one week before the online publication of our paper. The subject line of Revkin’s email,

“Sorry to take your time up, but really do need a scrub of Singer/Christy/etc effort”,

implies that there had been prior correspondence between Revkin and the Team.

Carl Mears, a Team member, quickly responded with an email dated 4 December 2007 to fellow Team members Jones, Santer, Thorne, Sherwood, Lanzante, Taylor, Seidel, Free and Wentz. Santer replied to all of them:

“I’m forwarding this to you in confidence. We all knew that some journal, somewhere, would eventually publish this stuff. Turns out that it was the International Journal of Climatology.”

Santer knew this because he had reviewed and rejected our paper when it had been previously submitted to another journal. Phil Jones, then director of the Climatic Research Unit at East Anglia, and now stood down pending an investigation of the Climategate affair, responded to Santer:

“It sure does! Have read briefly – the surface arguments are wrong. I know editors have difficulty finding reviewers, but letting this one pass is awful – and the International Journal of Climatology was improving.”

This exchange provides the first reference to the International Journal of Climatology.

The next day, 5 December 2007, the day on which our paper appeared online, Santer sent an email to Peter Thorne with copies to Carl Mears, Leopold Haimberger, Karl Taylor, Tom Wigley, Phil Jones, Steve Sherwood, John Lanzante, Dian Seidel, Melissa Free, Frank Wentz, and Steve Klein. Santer says:

“Peter, I think you’ve done a nice job in capturing some of my concerns about the Douglass et al. paper… I don’t think it’s a good strategy to submit a response to the Douglass et al. paper to the International Journal of Climatology. As Phil [Jones] pointed out, the Journal has a large backlog, so it might take some time to get a response published. Furthermore, Douglass et al. probably would be given the final word.”

The most critical point throughout these emails is the goal of preventing us from providing what is considered normal in the peer-reviewed literature: an opportunity to respond to their critique, or as they put it, “be given the final word.” One wonders if there is ever a “final word” in science, as the authors here seem to imply.

The next day, 6 December 2007, Melissa Free responded with a cautious note, evidently because she had presented a paper with Lanzante and Seidel at the American Meteorological Society’s 18th Conference on Climate Variability and Change acknowledging the existence of the discrepancy between observations and models – the basic conclusion of our paper:

“What about the implications of a real model-observation difference for upper-air trends? Is this really so dire?”

Santer responded on 6 December 2007 with his key reason for attacking our paper:

“What is dire is Douglass et al.’s wilful neglect of any observational datasets that do not support their arguments.”

This “wilful neglect” of “observational datasets” refers to the absence of two balloon datasets, RAOBCORE v1.3 and v1.4. We explained in an addendum to our paper (discussed below) that these datasets were faulty.

A further email from Jones, dated 6 December 2007, discusses options for beating us into print. Wigley, a former head of the Climatic Research Unit, enters the story on 10 December 2007 to accuse us of “fraud”, adding that under “normal circumstances” this would “cause him [Professor David Douglass] to lose his job”.

We remind the reader that our paper went through traditional, anonymous peer-review with several revisions to satisfy the reviewers and without communicating outside proper channels with the editor and reviewers.

Tim Osborn, a colleague of Jones at the Climatic Research Unit and a member of the editorial board of the International Journal of Climatology, then inserted himself into the process, declaring a bias on the issue. He said that Professor Douglass’ previous papers “appear to have serious problems”.

Santer responded on 12 December 2007 with gratitude for the “heads-up”, again making the claim that our paper had ignored certain balloon datasets, when in fact our paper had not used these datasets because they were known to be faulty.

The same day, an unsigned report appeared on the Team’s propaganda website, RealClimate.org, attacking us in particular for not using the RAOBCORE 1.4 balloon dataset.

This prompted us to submit a one-page Addendum to the International Journal of Climatology on 3 January 2008 to explain two issues: first, the reason for not using RAOBCORE 1.4 and, second, the experimental design showing why using the full spread of model results to compare with observations (as Santer et al. would do) would lead to wrong conclusions about the relationship between trends in the upper-air temperature and the surface. A copy of the Addendum may be found at http://www.pas.rochester.edu/~douglass/.

Osborn wrote to Santer and Jones on 10 January 2008 to discuss the “downside” of the normal comment-reply process, in which we would be given an “opportunity to have a response.” He explained that he had contacted the editor of the International Journal of Climatology, Glenn McGregor, to “see what he can do”. According to Osborn, McGregor “promises to do everything he can to achieve a quick turn-around.” He also wrote:

“… (and please treat this in confidence, which is why I emailed to you and Phil only) that he [McGregor] may be able to hold back the hardcopy (i.e. the print/paper version) appearance of Douglass et al., possibly so that any accepted Santer et al. comment could appear alongside it.”

According to Osborn, McGregor also intended to “correct the scientific record” and to identify “in advance reviewers who are both suitable and available”, perhaps including “someone on the email list you’ve been using”. Given the bias of Osborn and McGregor as expressed in the emails, one could wonder what it means to be a “suitable” reviewer of the Santer paper.

Santer responded with his conditions, highlighting his intent to have the “last word”:

“1. Our paper should be regarded as an independent contribution, not as a comment on Douglass et al. … 2. If the International Journal of Climatology agrees to 1, then Douglass et al. should have the opportunity to respond to our contribution, and we should be given the chance to reply. Any response and reply should be published side-by-side, in the same issue of the Journal. I’d be grateful if you and Phil could provide me with some guidance on 1 and 2, and on whether you think we should submit to the Journal. Feel free to forward my email to Glenn McGregor.”

This Osborn email and the response by Santer essentially lay out the publication strategy apparently agreed to by Santer, Jones, Osborn and editor McGregor. Santer accepts Osborn as a conduit and defines the conditions (having the “last word”). This is exactly what he seeks to deny us, even though it was we who had published the original paper in this sequence in the Journal and should, under customary academic procedures, have been entitled to the last word alongside any rebuttal of our paper that the Journal published.

We were never informed of this process, even though it specifically addressed our paper, nor were we contacted for an explanation on any point raised in these negotiations. Santer’s allegations regarding our paper and his conditions for publication of his response to it were simply accepted by the Journal’s editor. If our results had indeed been so obviously and demonstrably in error, why would anyone have feared a response by us?

The same day, 10 January 2008, Jones told the Team (Wigley, K. Taylor, Lanzante, Mears, Bader, Zwiers, Wentz, Haimberger, Free, MacCracken, Jones, Sherwood, Klein, Solomon, Thorne, Osborn, Schmidt, and Hack) a “secret” he had learned from Osborn: that one of the recipients on the Santer email list was one of the original reviewers of our paper – a reviewer who had not rejected it:

“The problem! The person who said they would leave it to the editor’s discretion is on your email list! I don’t know who it is – Tim does – maybe they have told you? I don’t want to put pressure on Tim. He doesn’t know I’m sending this. It isn’t me by the way – nor Tim! Tim said it was someone who hasn’t contributed to the discussion – which does narrow the possibilities down!”

Does Santer start wondering who the original reviewer is?  Does Osborn reveal this part of McGregor’s secret?

Then, on the matter of paying for expensive color plots, Jones adds, “I’m sure I can lean on Glenn [McGregor] to evidently deal with the costs.” Obviously, no such assistance had been offered to us when we had published our original paper.

The final approval of the strategy (Santer’s conditions) to deny us an opportunity to respond in the normal way is acknowledged by Osborn to Santer and Jones on 11 January 2008. Osborn writes that McGregor, as editor, is “prepared to treat it as a new submission rather than a comment on Douglass et al.” and that “my [McGregor’s] offer of a quick turnaround time etc. still stands.” Osborn also reminds Santer and Jones of the potential impropriety of this situation:

“… the only thing I didn’t want to make more generally known was the suggestion that print publication of Douglass et al. might be delayed… all other aspects of this discussion are unrestricted.”

Santer now informed the Team that the strategy had been agreed to. We were never notified of these machinations, and it is clear that Santer’s story of the situation was never investigated independently. In this long email, the issue of radiosonde errors is discussed, together with the fact that one dataset, RAOBCORE v1.4, is missing from our paper.

To explain briefly: Sakamoto and Christy (accepted in 2008 and published in 2009) looked closely at the ERA-40 Reanalyses on which RAOBCORE v1.3 and v1.4 were based, and demonstrated that a spurious warming shift occurred in 1991 (a problem with a satellite channel, HIRS 11), which was then assimilated into RAOBCORE, producing spurious positive trends in the upper troposphere and lower stratosphere.

Sakamoto and Christy had been working on this since 2006 when they had first met, and so were aware of the problems at that time. Later, on 27 May 2008, Sherwood – a member of the Team – comments on this evidence during the deliberations on Santer’s publication, so the Team was aware of the problem too.  Even though McGregor had sent Santer our Addendum explaining the RAOBCORE problems as early as 10 April 2008, their published paper contains the statement:

“Although DCPS07 had access to all three RAOBCORE versions, they presented results from v1.2 only.”

Another interesting comment here is that Santer does “not” want to “show the most recent radiosonde [balloon] results” from the Hadley Centre and Sherwood’s IUK. In short, he was withholding data that did not support his view, probably because these two datasets, extended out in time, provide even stronger evidence in favor of our conclusion. The final version of Santer’s paper cuts off these datasets in 1999.

Professor Douglass became concerned that McGregor had not responded after receiving the Addendum sent on 3 January 2008. The Professor wrote on 1 April 2008 to ask about the status of the Addendum. On 10 April 2008 McGregor responded that he had had “great difficulty locating your Addendum”; Douglass replied with the International Journal of Climatology’s file number acknowledging receipt of the Addendum on 3 January, and attached the Addendum again. That very day, McGregor sent the Addendum to Santer to “learn your views.” Santer was thus afforded the opportunity to comment on our Addendum, but we never heard about it from McGregor again.

On 24 April 2008 McGregor informed Santer that he had received one set of comments and,  though he “… would normally wait for all comments to come in before providing them to you, I thought in this case I would give you a head start in your preparation of revisions”.

That day, Santer informed the Team of the situation. Was there ever any possibility that Santer’s paper could have been rejected, given the many favors already extended to this submission? McGregor now knew, because he had the Addendum, what the main point of our response to Santer et al. might be, yet evidently dropped the Addendum from consideration. At this point, we were unaware of any response by Santer to our Addendum, as we were dealing with the RealClimate.org blog on this matter.

Santer was worried about the lack of “urgency” in receiving the remaining reviews and, on 5 May 2008, complained to McGregor.  He reminded McGregor that Osborn had agreed to the strategy that the “process would be handled as expeditiously as possible”. McGregor replied that he hoped that the further comments would come within “2 weeks”.  The following day, Osborn wrote to McGregor that Santer’s 90-page article was much more than anticipated, implying that Santer was being rather demanding considering how much had been done to aid him.  One wonders why it should take 10 months and 90 pages to show that any paper contained a “serious flaw”, and why Santer et al. needed to be protected from a response by us.

A paper by Thorne now appeared in Nature Geoscience which referenced the as-yet-unpublished paper by Santer et al. (whose authors included Thorne). On 26 May 2008, Professor Douglass wrote to Thorne asking for a copy and was told the following day that Thorne could not supply the paper because Santer was the lead author.

Professor Douglass replied that day, repeating his request for a copy of the paper and reminding Thorne of Nature’s publication-ethics policy on the availability of data and materials:

“An inherent principle of publication is that others should be able to replicate and build upon the authors’ published claims. Therefore, a condition of publication in a Nature journal is that authors are required to make materials, data and associated protocols available …”

At the same time Professor Douglass asked Santer for a copy of the paper. Santer responded by saying, “I see no conceivable reason why I should now send you an advance copy of my International Journal of Climatology paper.” From the emails, we now know that the Santer et al. manuscript had not been accepted at this point, even though it had been cited in a Nature Geoscience article. What is very curious is that in the email Santer claims Professor Douglass “… did not have the professional courtesy to provide me with any advance information about your 2007 International Journal of Climate paper …”.

In fact, Santer had been a reviewer of this paper when it had been submitted earlier, so he had been in possession of the material (only slightly changed) for at least a year. Additionally, Santer received a copy of the page-proofs of our paper about a week before it even appeared online.

In further email exchanges the following day, 28 May 2008, Santer and his co-authors discussed the uncomfortable situation of having a citation in Nature Geoscience and being unable to provide the paper to the public before “a final decision on the paper has been reached”. Santer stated that they should “resubmit our revised manuscript to the Journal as soon as possible”, implying that Professor Douglass’ point about the ethics policies of Nature, which require cited literature to be made available, might put Santer et al. in jeopardy.

On 10 July 2008, Santer wrote to Jones that the two subsequent reviews were in but that reviewer 2 was “somewhat crankier”. Santer indicated that McGregor had told him that he would not resend the coming revised manuscript to the “crankier” reviewer. This was another apparent effort by McGregor to accommodate Santer.

Conclusion

On 21 July 2008, Santer heard that his paper had been formally accepted and expressed his sincere gratitude to Osborn for “all your help with the tricky job of brokering the submission of the paper to the International Journal of Climatology”. Osborn responded, “I’m not sure that I did all that much.”

On 10 October 2008, Santer et al.’s paper was published online. Thirty-six days later, it appeared in print immediately following our own paper, even though we had waited more than 11 months for our paper to appear in print. The strategy of delaying our paper and denying us a simultaneous published response to Santer et al. had been achieved.



182 Comments
Joel Shore
December 22, 2009 6:23 pm

Eric (skeptic) says:

For those interested in how minor an issue the calculation of the variance of the model ensembles is, here is more background (arxiv.org/pdf/0905.0445). Joel Shore’s argument is simply chaff to distract from numerous important differences in the approaches.

I love how the sort of people who have been making mountains out of molehills for years (e.g., the Y2K bug that had essentially zero effect on the global temperature trend) are willing to dismiss complete and significant mathematical errors that in fact cause considerable change in the results when they are made by people they like.
I am personally rather agnostic as to whether differences in the tropical tropospheric amplification between models and data are statistically significant. There are simply too many problems with the various data sets to make a rigorous determination…or at least one that allows you to conclude anything about where the fault lies. The original work by Santer et al. back in 2005 (http://www.sciencemag.org/cgi/content/abstract/sci;309/5740/1551), which I personally found more compelling than their more recent paper (and much better than the hopelessly flawed work of Douglass et al.), concluded at that time that there was a difference between the models and experimental data…and gave good reasons to suspect that the problem lay with residual errors in the data. (And, in fact, since that paper was published, an error in the UAH data was discovered and corrected that, while still not resolving the conflict with the models or with the RSS data set, certainly makes a significant step in that direction…See “Update 7 Aug 2005” in this file http://vortex.nsstc.uah.edu/data/msu/t2lt/readme.18Jul2009 for mention of the correction of this error which, as they note, most strongly affected the tropical trends.)
However, this still doesn’t excuse all the flaws with the Douglass et al. paper, which vastly underestimated the uncertainty in the modeling results and also sidestepped the whole issue of problems with the data, even though those problems were well-known and indeed had led to a myriad of different data sets with various corrections applied. It was an extremely poor and biased piece of work and the press release and public pronouncements by the authors were a downright embarrassment.

savethesharks
December 22, 2009 8:33 pm

Joel Shore: “It was an extremely poor and biased piece of work and the press release and public pronouncements by the authors were a downright embarrassment.”
Wow. You have certainly outed yourself.
You could have walked away from this on the high road, and if you had, all of us who are diametrically opposed to your own form of CAGW Newspeak bias would say: Well, I don’t agree with Joel, but I respect his fervor and scientific mind.
But you blew it this time.
Instead of commending Christy and Douglass for their meticulous approach [AND IT WAS….even if you did not agree with them]….you resort to THIS???
How very unscientific of you.
Chris
Norfolk, VA, USA

P Wilson
December 22, 2009 9:47 pm

Joel Shore (18:23:49)
“which I personally found more compelling than their more recent paper (and much better than the hopelessly flawed work of Douglass et al.) … I am personally rather agnostic as to whether differences in the tropical tropospheric amplification between models and data are statistically significant. There are simply too many problems with the various data sets to make a rigorous determination”
Observation: Pure personal rhetoric written in the most galling style. Besides which, personal feelings are sentiments, and you can’t extrapolate them to the level of truths. It’s something of a doctrinaire mishmash, which is worse than chaff.
I don’t find Christy and Douglass’s work personally more compelling; it is more objective. (P.S. I gather that you’re not in a field related to climatology or temperature data analysis.)

Eric (skeptic)
December 23, 2009 3:54 am

Joel Shore states “I love how the sort of people who have been making mountains out of molehills for years (e.g., the Y2K bug that had essentially zero effect on the global temperature trend) are willing to dismiss complete and significant mathematical errors that in fact cause considerable change in the results when it is made by people who they like.”
Looks like you have caught the RC disease, since you can’t say the name “Steve McIntyre”. But you are wrong: SM did not dismiss the error. He noted it, and he said Santer was correct to use std dev and Douglass was not. However, SM also noted that the other differences in methodology and use of data were much more important. At the time, the RC spin (see the Santer email above) was that RAOB 1.4 was the most important difference between the two methodologies, not std dev vs. std err. Why? Because they figured that people wouldn’t notice that RAOB 1.4 used models to correct the radiosonde data, which Santer then used to validate the models. A perfect example of their reliance on circular reasoning (they’ve done the same with the satellite data, as we’ve discussed here before).
The problems with RAOB 1.4 had already been noticed by SM in his diamonds thread (link above). So step back a moment and take a look at the hypotheses of Douglass and Santer which were to test how well the instrumental record matches the mean of the models. Is model variance a more important component than the shape of the instrumental record curves in the upper atmosphere? Of course not. That is the whole reason that the original RC spin was curve shape not the model variance.
Then you say: “I am personally rather agnostic as to whether differences in the tropical tropospheric amplification between models and data are statistically significant. There are simply too many problems with the various data sets to make a rigorous determination…or at least one that allows you to conclude anything about where the fault lies.”
Of course you are not interested in the differences. You are here simply to poke random holes in Douglass et al, toss invective at Christy, and throw up whatever other chaff you can. You offer no science of your own, only Gavin’s latest spin and an unrelated analogy. But the real reason you are not interested in the differences is that the model hypothesis has failed.
It is ironic that your quibble with Douglass is underestimated model variance. That is one of the main reasons that the climate models are so worthless. Your essential conclusion is that the models might show tropical tropo warming or they might not and thus will always match the instrumental record whether it shows warming or not.

foinavon
December 23, 2009 9:01 am

Eric (03:54:08)

It is ironic that your quibble with Douglass is underestimated model variance. That is one of the main reasons that the climate models are so worthless.

I’d say that the opposite is true, Eric. Climate models have successfully predicted many outcomes that were subsequently identified in the real world…however it would be an extraordinary model that predicted everything correctly, and an extraordinary set of models that produced tight agreement on everything; we should recognise variability where this exists (and quantitate it correctly if we wish to attempt robust interpretations!) and be rather happy about it in fact, since this provides a research focus. That’s why models are so extraordinarily useful (in all scientific endeavours).
So it’s valuable to assess developments of the last 12-18 months in this field. Note that the apparent disparities between empirical (satellite/radiosonde) tropical trends and models have potential sources in both the models and the empirical data. The early large discrepancies, resulting from the horribly flawed UAH product and large radiosonde biases, were largely eliminated in favour of the models by the time of the 2008 Santer et al. paper (everywhere but the tropics, largely). Recent work is progressing towards resolving some of the remaining discrepancy:
1. Continuing analyses of radiosonde biases have identified and corrected warm biases in the historical record, due amongst other things to early deficiencies in the shielding of radiosondes from solar radiation, and so radiosonde tropospheric trends are now closer to modelled trends.
L. Haimberger (2008) Toward Elimination of the Warm Bias in Historic Radiosonde Temperature Records—Some New Results from a Comprehensive Intercomparison of Upper-Air Data Journal of Climate 21, 4587–4606
2. And the radiosonde data in the tropical troposphere are still currently biased cold, although these biases aren’t yet robustly quantified. Note that the improved radiosonde homogenizations have been made independently of climate models (they use neighbour-based iterative breakpoint analysis, or the background forecast (BG) temperature time series produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-year series, or take into account wind-shear measurements):
H. A. Titchner et al. (2009) Critically Reassessing Tropospheric Temperature Trends from Radiosondes Using Realistic Validation Experiments Journal of Climate 22, 465–485
Sherwood SC et al. (2008) Robust Tropospheric Warming Revealed by Iteratively Homogenized Radiosonde Data J. Climate 21, 5336-5350
3. In the meantime, analysis of satellite MSU temperature data with a data set filtered to match the coverage of the radiosonde data finds close agreement between the RSS and the radiosonde data.
Mears CA, Wentz FJ (2009) Construction of the RSS V3.2 Lower-Tropospheric Temperature Dataset from the MSU and AMSU Microwave Sounders J Atmospheric Oceanic Technology 26, 1493-1509
4. And analyses of new tropospheric (and stratospheric) temperature measures (thermal wind; GPS radio occultation) indicate tropospheric warming/stratospheric cooling, and warming/cooling across the tropopause, broadly consistent with the newly homogenized radiosonde data and with climate models.
A. K. Steiner et al. (2009) Atmospheric temperature change detection with GPS radio occultation 1995 to 2008 Geophys. Res. Lett. 36, L18702
P. W. Thorne (2008) Atmospheric science: The answer is blowing in the wind Nature Geoscience 1, 347 – 348
5. And with respect to models, an analysis of the effects of ozone depletion on tropical upper tropospheric temperatures indicates that those models that take into account the small cooling contribution in the upper troposphere will likely produce a closer match to (improving!) tropical upper tropospheric temperature measures…
P. M. Forster et al. (2007) Effects of ozone cooling in the tropical lower stratosphere and upper troposphere Geophys. Res. Lett., 34, L23813
Shindell, D. (2008) Cool ozone Nat. Geosci., 1, 85–86
Likely Douglass et al. were premature in ascribing apparent discrepancies between tropical upper tropospheric temperature measures and models to statistically robust differences between model predictions and real-world measurements. They’ve done the same in the past. As is often the case in research areas where novel experimental approaches with large potential errors are used to measure potentially small effects in the real world (see tropospheric water vapour content; ocean heat content measurement; etc.), it’s best to wait just a little while to see how things develop.

JChristy
December 23, 2009 9:21 am

Odds and ends:
To Joel Shore: Think of it this way. If several models all had a surface trend of +0.125 C/decade (as the real world had), what would the upper air trends of that set of models look like? The answer is they would have a very tight set of increasing trends as one ascended in the atmosphere, because the surface-troposphere relationship in models is quite rigid (has to do with radiative-convective schemes).
Your suggestion on the amplification factor (AR, i.e. a normalized approach of calculating the ratio of upper-air trends to surface trends) is a good one, and has been done (see Christy et al. 2007 JGR). Here we see model ARs are much higher than observed ARs – the same result as DCPS. Note, however, that by taking the mean of the model runs in DCPS, we in effect performed the normalization, thanks to the fact that the mean of the model surface trends was indeed close to the observed surface trend. Latest results show model ARs for LT and MT are 1.4 for both, while those of the observations are 0.8 (LT) and 0.5 (MT) – significantly different from the models. (A small worked illustration of the AR arithmetic follows this comment.)
This is of serious import because the lapse-rate feedback and associated atmospheric warming are crucial to depict properly for model integrations. Apparently, the real atmosphere finds ways (negative feedbacks) to expel the heat that models retain.
There is quite a bit of literature on the precision of upper-air datasets in which specific problems have been identified. Let me quote the most recent result from Bengtsson and Hodges (ECMWF) on MSU data: “We have found that the data from UAH is more credible than the data from RSS.” Their paper will be published in Climate Dynamics soon.
Revkin’s email was our starting point for the American Thinker piece. He did not do anything unusual here. He often forwards me (a relative skeptic) embargoed papers for comment when he is developing a story (see our notes on this in Appendix A). We began the story with his email because it was the first in the set of emails which mentioned the DCPS paper, and it helps us establish a timeline. Revkin did not participate in the events following this early exchange.
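
A minimal worked illustration of the amplification-ratio (AR) arithmetic Dr Christy describes above, sketched in Python with synthetic data; the 0.8 LT ratio he quotes is simply built in, and none of this is the actual DCPS or Christy et al. (2007) code.

```python
import numpy as np

def decadal_trend(series, years):
    """Least-squares linear trend, converted to degrees C per decade."""
    slope = np.polyfit(years, series, 1)[0]   # slope in C per year
    return slope * 10.0

# Synthetic illustration only: a surface series warming at ~0.125 C/decade
# (the observed value quoted above) and an upper-air series built to warm
# at 0.8 times that rate (the observed LT amplification ratio quoted above).
rng = np.random.default_rng(0)
years = np.arange(1979, 2009, 1.0 / 12.0)      # monthly time axis, 30 years
surface = 0.0125 * (years - 1979) + rng.normal(0.0, 0.05, years.size)
upper = 0.8 * 0.0125 * (years - 1979) + rng.normal(0.0, 0.05, years.size)

ar = decadal_trend(upper, years) / decadal_trend(surface, years)
print(f"amplification ratio ~ {ar:.2f}")       # close to the built-in 0.8
```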

Vincent
December 23, 2009 10:53 am

foinavon,
“Climate models have successfully predicted many outcomes that were subsequently identified in the real world…”
Some examples would be nice.

Eric (skeptic)
December 23, 2009 1:29 pm

foinavon (09:01:46)
Are you saying that an instrument measurement that is below the model mean but falls within some variance is a successful match? What are you trying to compare?
Second, it’s nice that you implicitly acknowledge the fallacy of adjusting an instrument (satellite or radiosonde) using a model, then using the adjusted instrument to validate (another) model. Your point 2 does that, so why not come out and state that RAOB 1.4 should not be used to validate models the way Santer did (and Douglass refused to do)? IMO, that is the main problem with Santer and the original team defense of Santer until they changed it.
As pointed out in the Diamonds thread (link above), the Sherwood nearest-neighbor adjustments tend to spread bad data from one sensor to others. SM notes that the approach of taking all the data, with no consideration of physics or metadata, and putting it through a homogenization meat grinder is common in climate studies but may create as many problems as it solves.
The work by Mears looks reasonable, at least for satellites: http://www.remss.com/msu/msu_browse.html It uses physical principles rather than a model for adjustments. I don’t have any information on his radiosonde work; hopefully he did the same and eschewed the use of a model.
On your point 4, “broad consistency” seems like a vague term to describe a model-to-instrument match. Thorne has tried for years to produce data consistent with the modeled tropical tropospheric hot spot. It is no surprise that his wind-shear theory matches the model, but it is just a theory with no way to independently validate it. In general, I am not impressed by the number of peer-reviewed analyses that confirm the tropospheric hot spot after seeing the way peer review has been (not) working.
My opinion matches yours: “Likely Douglass et al. were premature in ascribing apparent discrepancies between tropical upper tropospheric temperature measures and models to statistically robust differences between model predictions and real-world measurements. They’ve done the same in the past. …it’s best to wait just a little while to see how things develop.”
Except you apparently don’t want to wait a while for better support for the hot spot, since you seem to have no problems with anything on the list you gave. One of the hallmarks of unbiased science is to describe both strengths and weaknesses of approaches even in short blurbs like yours and I see no evidence of that. It’s all good, it all supports the models, it proves robustness, etc. But (for example) one can more easily say Thorne is premature than Douglass is premature.

Joel Shore
December 23, 2009 3:14 pm

Dr. Christy,
I appreciate your response to my posts. Here are my replies:
(1) I do appreciate your point about the amplification factor being what is most important. And I understand that both the way things were done in DCPS (i.e., using the standard error instead of the standard deviation) and the way things would be done if one normalized by the surface (or near-surface) temperature trend would tend to take things in the same direction (i.e., to provide a tighter uncertainty on the model results). However, I think that the way you did it in DCPS was incorrect and uncontrolled…and if it gets a result anywhere close to what one gets by doing it the other way, it is only by sheer luck. (If there were a thousand different models rather than ~20, this would be even more obvious!) At any rate, my impression (without having done the calculation) is that your method turns out to estimate the uncertainty in the models to be smaller than the correct method does, and hence exaggerates any discrepancy between models and data. (The sketch after this comment illustrates the standard-error-versus-standard-deviation distinction at issue.)
(2) When you say, “Latest results show model ARs for LT and MT are 1.4 for both while that of observations is 0.8 (LT) and 0.5 (MT)”, how are you dealing with the issue of contamination from the stratospheric trend, which I think everyone agrees is a potential problem for the MT and some (Fu et al) argued might still be a problem for your LT results, although I know that that is controversial?
(3) I understand that you favor your own UAH results over RSS regarding the remaining discrepancy and that some others have agreed with you. However, given all the issues that have plagued these satellite data analyses over the years, it seems to me that caution is warranted in concluding which data set has the more accurate tropical tropospheric trend.
(4) I don’t really understand your statement, “This is of serious import because the lapse-rate feedback and associated atmospheric warming are crucial to depict properly for model integrations. Apparently, the real atmosphere finds ways (negative feedbacks) to expel the heat that models retain.” As you know, the lapse-rate feedback is a negative feedback in the models and the most direct result of any claim that the tropical tropospheric amplification is absent would be that this negative feedback is being overestimated by the models…i.e., that the surface would have to warm more than the models think it has to warm in order to warm the effective emitting layer enough to restore radiative balance in response to a radiative forcing. Admittedly, since a lot of the same physics controls the lapse rate feedback and the water vapor feedback, the lack of amplification could indicate something wrong with our understanding of the latter too, although the train to that conclusion is more indirect…and also has to be considered in light of the other evidence showing the upper troposphere tending to moisten as predicted (most definitively for temperature fluctuations on the monthly to yearly scales and with the usual caveat concerning data analysis on the multidecadal timescales).
(5) I also think that one should be cautious in drawing conclusions about the models based on these discrepancies between model (or surface temp record, depending on your point of view) and satellite and radiosonde analyses. As foinavon has pointed out, the history of your analysis and the subsequent corrections to it along with a longer data record has been for most of the discrepancies to be resolved in favor of the models / surface record. I find it disappointing that no matter how much of the discrepancy gets resolved in favor of the models, the rhetoric about how the models must be seriously wrong because of any remaining discrepancy doesn’t seem to get tempered by any humility based on this past history. I find it even more frustrating when comments are made (which I am glad that you have not made here) that imply that the tropospheric amplification is some sort of “fingerprint” of the warming being due to greenhouse gases and hence that any lack of amplification suggests the warming seen is due to some other mechanism. (I do agree that lack of such amplification, if real, might have important implications for feedback processes.)
(6) I think an important point from the Santer et al. (2005) paper does not get sufficient emphasis, which is that the observational data from both satellites and radiosondes do in fact show the expected tropical tropospheric amplification for the fluctuations on timescales of months to a few years…and this data is robust to the sort of issues that plague the data for the multidecadal trends. It is only for the multidecadal trends, where the observational data is most susceptible to artifacts, that there is a deviation of that observation data (at least in some analyses) from the predictions. This point suggests the problem could very well be in the data and also severely constrains the ways in which the models could potentially be wrong if indeed the observational data is saying that they are wrong on those multidecadal timescales. I’d be curious whether you even have any hypotheses of how the models could be wrong that would result in such behavior? Most ways in which one might imagine the models not handling convection correctly, for example, would seem to imply the discrepancies should show up on the shorter timescales too.
Again, I appreciate your taking the time to respond to my questions and comments!
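
Since the standard-error-versus-standard-deviation question recurs throughout this thread, here is a minimal numerical sketch of the distinction, with invented trend values. It reproduces neither paper’s actual test; it only shows why the two uncertainty bands differ by a factor of sqrt(N).

```python
import numpy as np

# Invented ensemble of 22 model tropical upper-air trends (C/decade), purely
# to illustrate the statistics; neither these values nor this test reproduce
# Douglass et al. or Santer et al.
rng = np.random.default_rng(1)
model_trends = rng.normal(0.25, 0.10, 22)
obs_trend = 0.06                            # hypothetical observed trend

mean = model_trends.mean()
sd = model_trends.std(ddof=1)               # spread of individual model runs
se = sd / np.sqrt(model_trends.size)        # uncertainty of the ensemble mean

print(f"observed trend = {obs_trend:+.3f} C/decade")
print(f"ensemble mean  = {mean:+.3f} C/decade")
print(f"2*SD band: {mean - 2*sd:+.3f} .. {mean + 2*sd:+.3f}")
print(f"2*SE band: {mean - 2*se:+.3f} .. {mean + 2*se:+.3f}")
# The SE band is sqrt(22) ~ 4.7 times narrower than the SD band, so an
# observed trend can fall outside the SE band (a test of the ensemble mean,
# as in DCPS) while remaining inside the SD band (a test of the spread of
# individual realizations, as in Santer et al.).
```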

foinavon
December 23, 2009 3:54 pm

Eric (skeptic) (13:29:40) :
Not really, Eric. I’m suggesting (i) that we should really wait until the problems with upper tropospheric temperature measures over the tropics have been sorted out before attempting categorical conclusions about the relationship between model predictions and real-world measurements in this specific instance (tropical upper tropospheric temperature trends), and (ii) that even in the 18 months since the Douglass and Santer papers were published, significant progress has been made in this respect (see the papers cited in my post just above; not just in relation to measurements, but also in the refinement of models to incorporate a likely small cooling effect of ozone depletion in the upper troposphere).
An earlier study along these lines illustrates the problem of jumping to erroneous conclusions based on premature acceptance of the apparent reliability of observations. Some of the same group of authors made similar grand pronouncements a few years ago on the relationship between empirical and modeled tropospheric temperatures [*]:

“From the general agreement in amplitude and phase of these three data sets we infer that the methodologies of all are essentially correct and free from harmful errors”

However we know that two of the data sets (the UAH tropospheric temperature data and the radiosonde data), were actually incorrect and plagued by “harmful errors”. They then concluded, based on their false confidence in the reliability of their data sets:

“The implication of this conclusion is that the temperature of the surface and the temperature of the air above the surface are changing at different rates due to some unknown mechanism.”

However, now that the biases and errors in the radiosonde and UAH tropospheric temperature measures have been identified, we know that the temperature of the air above the surface is changing pretty much as expected. So we don’t, in fact, have to postulate any “unknown mechanisms” after all.
This illustrates a fundamental aspect of the relationship between models and observations in science, namely that one can’t establish the reliability of a model with respect to empirical observation until one has established that the empirical observations are robust. Far better to use any apparent disparity to develop some insightful experiments or analyses (such as those I cited above) to address the issue. That’s where models (which are essentially a formalised encapsulation of current understanding of a phenomenon) are so useful in science…
[*] Douglass et al (2004) Disparity of tropospheric and surface temperature trends: New evidence, Geophys. Res. Lett. 31, L13207

foinavon
December 23, 2009 4:51 pm

Vincent (10:53:06) :
There are lots of examples Vincent. [1] It’s particularly impressive that early global circulation models from 2 decades ago predicted the broad elements of heat distribution in a greenhouse-forced warming. The focus of warming in the northern latitudes, and the absence of heating in the Antarctic circumpolar region in the early periods of a greenhouse forced warming was predicted in simulations run in the late 1980’s/early 1990’s.
Bryan K et al. (1988) Interhemispheric asymmetry in the transient-response of a coupled ocean atmosphere model to a CO2 forcing. J. Phys. Oceanography 18, 851-867.
Manabe S et al. (1992) Transient responses of a coupled ocean atmosphere model to gradual changes of atmospheric CO2 .2. Seasonal response J. Climate 5, 105-126
[2] The general response of the global surface temperature to enhanced greenhouse forcing was correctly predicted (somewhat fortuitously, possibly, given the very early days!) as early as the mid 1970’s, and formalised GCM models set up in the mid 1980’s and projected into the following several decades have done a pretty good job in predicting the subsequent temperature evolution:
Broecker, WS (1975) Climate Change: Are we on the brink of a pronounced global warming? Science 189, 460-463
J. Hansen et al. (1988) Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model J. Geophys. Res. 93, 9341-9364
see for an update: J. Hansen et al. (2006) Global temperature change Proc. Natl. Acad. Sci. 103, 14288-14293
[3] The effect of greenhouse warming in enhancing the water vapour concentration in the upper troposphere was long predicted by models, before this was conclusively demonstrated around 2005 and in the following years, even while one rather prominent scientist was asserting that greenhouse warming would cause the upper troposphere to dry!
I. Held and B. Soden (2000) Water vapor feedback and global warming Annu. Rev. Energy Environ. 25, 441
[4] As we’ve already seen (see posts above on this thread), models long predicted that greenhouse warming would cause a warming of the upper troposphere, even during a period of more than a decade when erroneous satellite measures apparently showed little upper tropospheric warming and even cooling.
[5] And models predicted stratospheric cooling in a greenhouse-warming world…they predicted that the night-time warming would be larger than the day-time warming….they predicted that tropospheric water vapour concentrations would vary so as to maintain roughly constant relative humidity (real-world data are consistent with this expectation, but it’s probably too early to say that this is yet a robust conclusion)….and models did a rather good job of predicting the magnitude and temporal evolution of the response to the Pinatubo eruption.
[6] Climate models have done a pretty good job of predicting the warming response of atmospheric circulation; e.g.:
Vecchi GA (2006) Weakening of tropical Pacific atmospheric circulation due to anthropogenic forcing Nature 441, 73-76
And the latitudinal distribution of changes in precipitation trends in a greenhouse-warmed world:
Zhang XB et al. (2008) Detection of human influence on twentieth-century precipitation trends Nature 448, 461-464
…etc. etc.

Eric (skeptic)
December 23, 2009 5:33 pm

foinavon, you state: “Some of the same group of authors made similar grand pronouncements a few years ago on the relationship between empirical and modeled tropospheric temperatures”. But I can’t help noticing other groups of authors making grand pronouncements about model validation and thus catastrophic AGW. One specific example is the Santer et al. 2003 paper on rising tropopause height, with the announcement “A team of scientists, including several from the National Center for Atmospheric Research (NCAR), has determined that human-related emissions are largely responsible for an increase in the height of the tropopause–…” comparing their models to the measurements. Apples to apples, same amount of grandness as Douglass.
As I said above, I’ll let the experts adjust the radiosonde and satellites using physical principles. I’m not as interested in statistics using the full data set without metadata. I am not at all interested in Leopold-style model-based adjustments which are then used to validate (other) models. After the adjustments we can finally get a paper from Douglass comparing model means to the measurements provided that the team doesn’t block publication. I assume you’ve given up on the idea that measurements lying within model standard deviations are ok even when they don’t match the mean.
As for your final statement, “That’s where models (which are essentially a formalised encapsulation of current understanding of a phenomenon) are so useful in science”: the models are not at all an “encapsulation” of climate-inducing phenomena. The biggest failure is their inability to simulate weather due to their insufficient granularity. In fact this very topic, tropospheric warming, depends on convection, which must be modeled at the mesoscale to have a shred of a hope of determining what kind of water vapor feedback will exist in the tropics. Then they need to do the same for the temperate zones, since those have negative water vapor feedback. Then the models must cover the planet as a whole to determine the boundaries of the tropical and temperate zones. Without all of those results the models are not useful at all.

Eric (skeptic)
December 23, 2009 6:25 pm

Vincent, I haven’t studied the rest of the papers, but the Soden paper uses a climate model to determine the UT water vapor, because it uses satellite measurements of temperature, not water vapor. This was done because the longer satellite records are needed to look for multi-decade trends. Newer satellites now measure water vapor directly with active soundings.
So the Soden paper was a clear case of using a model to adjust (or in this case determine) an atmospheric parameter, and then using that parameter to confirm the model. Circular reasoning.

Vincent
December 24, 2009 4:27 am

foinavon,
thank you for taking the time to compile a list of model successes. I would caution, however, against falling into the “prosecutor’s fallacy” – assuming that because A causes B, observing B proves A was the cause. For example:
1) The prediction that CO2 leads to higher warming in high latitudes, which has been observed, does not prove that CO2 caused this warming. This can equally be accounted for by PDO cycles and other long-term climatic phenomena.
2) “GCM models set up in the mid 1980’s and projected into the following several decades have done a pretty good job in predicting the subsequent temperature evolution.”
Only up until 1998; they then fail to predict. This is evidence against said models.
3) “The effect of greenhouse warming in enhancing the water vapour concentration in the upper troposphere was long predicted by models.”
Prosecutor’s fallacy again. That water vapour increases with warming is a known physical law (Clausius-Clapeyron). Therefore to argue that model predictions of increased water vapour when the temperature rises are evidence that the warming is caused by CO2 is illogical.
4) You say the upper troposphere warmed but that the satellites that showed this not to be happening were erroneous. I am not sure about this. From what I read of the scientific literature, there seems to be a lot of disagreement in this area, depending on who you talk to.
5) “Models predicted stratospheric cooling.” True, but according to Lindzen and Choi, the stratosphere does not cool when the surface warms. Their paper concludes that the radiation escapes into space, contrary to the model predictions.
5b) “They predicted that tropospheric water vapour concentrations would vary so as to maintain roughly constant relative humidity.”
This is a repeat of point 3) above.
6) “Climate models have done a pretty good job of predicting the warming response of atmospheric circulation; e.g.:
Vecchi GA (2006) Weakening of tropical Pacific atmospheric circulation due to anthropogenic forcing”
Again, we see the assumption that the weakening of atmospheric circulation is due to anthropogenic forcing, when in reality there is no way this conclusion can be reached.
I leave you with a quote from Roger Pielke Sr., who states that models do not have predictive skill.

December 24, 2009 4:37 am

Vincent (04:27:03),
Now that you’ve corrected foinavon, don’t expect him to accept reality. He will simply move the goal posts. I’ve seen it happen time and again when he shows up here. When someone is afflicted with cognitive dissonance, it doesn’t matter that the flying saucers didn’t arrive as predicted. Rather than accept the possibility that there never were any flying saucers, alarmists like foinavon simply re-schedule the arrival time.

phlogiston
December 24, 2009 2:24 pm

Eric (skeptic), Vincent
“The models are not at all an “encapsulation” of climate-inducing phenomena. The biggest failure is their inability to simulate weather due to their insufficient granularity.”
I’m way out of my own field in looking at atmospheric thermodynamics in the context of climate. However, a question occurred to me – does climate modeling regard the inversion boundary between troposphere and stratosphere as flat and stable? Would it make any difference if one allowed it to be complex, showing turbulence and a fractal surface? I remember reading about some work on simulating a supernova, in particular the wave of fusion and heavy-element creation expanding outwards from the supernova core. Attempts to make a computer simulation of this proved intractable until the nature of the advancing wave was allowed to be turbulent and fractal rather than an onion-like set of simple concentric rings. If atmospheric boundaries such as the troposphere/stratosphere boundary were turbulent and fractal rather than flat, might this not, for instance, increase the radiative/convective surface and influence the amount of heat transfer? (Or not?) Just a thought.

Eric (skeptic)
December 24, 2009 7:54 pm

Phlog, you are way beyond my expertise. My information is that most models have a dozen or fewer vertical layers, so the tropo-strato boundary is going to be modeled with a very broad brush. That doesn’t preclude a submodel or a fancy parameterization that uses something like what you describe. But finite-element or grid-box models aren’t really compatible with equations for spatial boundaries (fractal or not), because the former are discrete and the latter are continuous.
There is a tendency for modelers, particularly physicists, to develop elegant models like what you describe, but which don’t adequately address the other parts of the problem, like how the heat gets up to that boundary by radiation (easier, and potentially elegant) and convection (physically messy, hard to model elegantly, etc.).

John Howard
December 25, 2009 3:47 pm

I haven’t read all of the comments, but so far one point missing is that these dishonest whores are all feeding at the tax trough. Thus, for those who understand that stealing is evil, it is no surprise that thieves will also lie, especially when their lies are an excuse to engage in more stealing (cap’n trade).
What is most amazing is all the folks who are most amazed that government whores will tell lies. To repeat the point: anyone who would live by stealing from you would not hesitate to lie to you. Ten-year-olds understand this. It takes years of higher education to get them to forget it. It takes a graduate degree to get them to believe that a “consensus” of tax-suckers is a guide to scientific truth.
The big surprise here is that this scandal is a big surprise.

phlogiston
December 26, 2009 9:33 am

Eric (skeptic) (19:54:45)
According to your helpful characterisation of the model situation, we would indeed be looking at parameterisation of heat transfer at the boundary. Complexity or turbulence at the boundary, if found to be present, could lead, via simulation in a submodel, to revised parameters for tropo-strato heat transfer. (One could even – shock horror! – make experimental measurements to find out the nature of the boundary.) It would certainly not be appropriate to add such a level of detail to the whole model.
According to Dr Christy, “the real atmosphere finds ways (negative feedbacks) to expel the heat that models retain.” A tropo-strato boundary assumed to be flat and placid but in fact turbulent and quasi-fractal with a much larger surface area for exchange, might (speculatively) address such a discrepancy.
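For what the speculation is worth, the area effect is easy to put rough numbers on. A rough surface of fractal dimension D (between 2 and 3), resolved down to a scale eps over a region of extent L, has an effective area of very roughly A_flat * (L/eps)**(D - 2). A quick Python back-of-envelope, with every number invented:

D = 2.2            # hypothetical fractal dimension of the boundary surface
L_over_eps = 1e4   # hypothetical ratio of regional extent to resolution scale
amplification = L_over_eps ** (D - 2.0)
print(f"effective-area amplification ~ {amplification:.1f}x")   # ~6.3x for these numbers

Even a modest roughness exponent multiplies the exchange surface severalfold, which is why the flat-boundary assumption is not obviously innocent; whether the real tropopause behaves this way is, of course, an open question.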

JChristy
December 26, 2009 9:44 am

Final Post (Response to Joel Shore and others).
(1) Our question was simple: do trends of observations agree with the models’ best estimate (i.e. the model average, following the IPCC analogy)? The best estimate requires an SE – a consequence of the central limit theorem. The best estimate is a single value (for each level) accompanied by an error estimate of its magnitude (SE); it is not a spread of individual realizations. If the sample size is large (thousands) there will be virtually no error in the value of the best estimate. We purposefully did not test the spread, for the simple reason of the inconsistency of surface trends between most models and observations (though we did test the spread for amplification ratios in another exercise and found models inconsistent with observations). Perhaps this is all just a matter of terminology – DCPS tested the best estimate for absolute trend magnitudes. And, as we’ve always stated, it was fortunate that the average (or best estimate) of the models’ surface trend matched the observed surface trend. Without this outcome, we could not have directly compared upper-air trends in an apples-to-apples comparison (though we could have selected a subset of models with the correct average surface trend – but the result would be the same). There are a couple of models that are close to observations, and we hope to report on those next year. [See the sketch after this list.]
(2) MT has about 10 percent stratospheric weighting in the observations and in models. Again, apples to apples. Some like to use MT for the ratio, which is their prerogative, and the numbers are even worse for models when using MT.
(3) Only a few studies have actually dug into the errors and their causes. Other studies, often cited, do not – they prefer democracy rather than employing known information about the problems – see details in Christy and Norris 2006, Christy et al. 2007, Randall and Herman 2008, Bengtsson and Hodges 2009, Sakamoto and Christy 2009, and Christy and Norris 2009. RAOBCORE and RSS have problems that can be demonstrated and explained (see those papers).
(4) I didn’t explain it well. Feedback includes more than the lapse rate and water vapor; both need to be correct in their own right, but they are convolved with the following. Cloud-distribution responses have been shown to provide a dominant source of negative feedback in the tropics (see Spencer and Braswell’s work). Lindzen shows a bit of this in his GRL paper this year. I would encourage you to contact Roy Spencer for the latest in this area. Models show a positive feedback here (i.e. in models, cloud coverage shrinks with warming, allowing the sun to heat the earth/troposphere; observations show cloud coverage expands with warming episodes and acts as a thermostat to cool the earth – see Spencer’s results).
(5) See (3). As errors are resolved (which for a number of datasets creates cooler trends), models are not looking any better, but worse, in our analyses. They also look worse as time goes on, with only modest warming trends in the observed data updated to the present. More will be coming out next year on this. Please note that no upper-air dataset has been scrutinized more than UAH’s, especially by those who find its result objectionable – which is good for everyone in the long run. We’ve generated new versions when we and others (in two cases) discovered various problems. As time goes on, the confidence intervals shrink. It is interesting that Bengtsson and Hodges of the ECMWF concluded that UAH was more credible than RSS for temperature trends.
(6) The observed atmosphere reveals an amplification factor on shorter time scales that is larger than seen on longer time scales, according to several of the latest observational studies (see strong evidence in Klotzbach et al. 2009). Models are simpler in this regard, due, I suspect, to simplistic thermodynamics (convective adjustment) and cloud physics, and show consistent amplification factors across time scales.
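To illustrate point (1), here is a minimal Python sketch of the best-estimate test, with synthetic numbers standing in for the model trends (illustrative only; nothing here is the DCPS data):

import numpy as np

rng = np.random.default_rng(0)
model_trends = rng.normal(0.25, 0.10, size=49)   # hypothetical model trends, K/decade
obs_trend = 0.06                                 # hypothetical observed trend

best_estimate = model_trends.mean()              # the ensemble "best estimate"
se = model_trends.std(ddof=1) / np.sqrt(len(model_trends))   # SE via the central limit theorem

# With many realizations the SE is small, so the best estimate is sharply
# defined; the test asks whether the observation lies within ~2 SE of it.
print(f"best estimate = {best_estimate:.3f} +/- {2 * se:.3f} (2 SE)")
print("consistent" if abs(obs_trend - best_estimate) < 2 * se else "inconsistent")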
Merry Christmas 2009

Joel Shore
December 27, 2009 11:28 am

Dr. Christy,
Thank you for your response.

(1) Our question was simple: do trends of observations agree with the models’ best estimate (i.e. the model average, following the IPCC analogy)? The best estimate requires an SE – a consequence of the central limit theorem. The best estimate is a single value (for each level) accompanied by an error estimate of its magnitude (SE); it is not a spread of individual realizations. If the sample size is large (thousands) there will be virtually no error in the value of the best estimate.

I appreciate the notion of trying to address simple questions, but I usually find it most useful to address simple questions for which there is actual disagreement on what the answer is likely to be. If you can find anyone who believes that the observations should agree with the model results to within the SE, then please let me know. The IPCC doesn’t believe this ***even for the forced component of the climate response*** (as is very clear from their estimated likely range of the climate sensitivity in comparison to the models’ range of climate sensitivities).
And, of course, that leaves aside the issue of the difference between only the forced component and the entire climate response, which also includes the internal variability. That is, to find someone who believes the SE is the relevant thing to compare to the real climate system, you will presumably have to find someone who believes that a single flip of a die is going to give a value within, say, 0.001 of 3.5, because a simple numerical simulation of die flipping can easily determine the best estimate for the average value to at least that small an SE. My prediction is that such a person will not be easy to find.
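The die analogy is easy to check numerically; a quick Python sketch (illustrative only):

import numpy as np

rng = np.random.default_rng(1)
rolls = rng.integers(1, 7, size=1_000_000)       # a large ensemble of simulated rolls

mean = rolls.mean()
se = rolls.std(ddof=1) / np.sqrt(rolls.size)
print(f"best estimate of a roll: {mean:.4f} +/- {se:.4f} (SE)")   # ~3.500 +/- 0.0017

# A single roll -- like the single realization the real climate gives us --
# still scatters over the whole 1..6 range, nowhere near 3.5 +/- SE.
print(f"one roll: {rng.integers(1, 7)}")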
I’ll leave it there, as the rest of our discussion basically revolves around issues of data precision and analysis that will presumably become clearer over time. I think we have both stated our personal opinions on how it is likely to be resolved, but ultimately nature will be the arbiter.
Happy holidays to you and all!

phlogiston
December 27, 2009 3:34 pm

Joel Shore (11:28:35)
Just how far wrong do you see the models as being? If it is far enough, then we can all agree and be a happy family!
Happy Christmas and New Year indeed!

Eric (skeptic)
December 28, 2009 2:58 pm

Phlog, I would be happy enough if the models would show flat temperatures for the next decade or so in their range of estimates. Perhaps Joel can point out a study showing model results that contain flat or declining temperatures in the next decade within their range of potential scenarios. If not, then either we are guaranteed a warmer climate or the models or their estimated variances are wrong!

Joel Shore
December 28, 2009 6:03 pm

The models do in general predict that it should not be particularly unusual to find periods of 10, or even 15, years with little trend or even a small negative trend. See, for example, this paper: http://www.esrl.noaa.gov/psd/csi/images/GRL2009_ClimateWarming.pdf Of course, predicting when these periods will occur would mean predicting the specific trajectory that the climate system takes, including the internal variability, which is very sensitive to the initial conditions. So the models give us guidance on the general statistics of the variability, but not on exactly when a particular short-term trend will occur.
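The statistics are easy to illustrate with a toy Python simulation (invented numbers, not the method of the paper linked above): superpose AR(1) “internal variability” on a steady warming trend and count the 10-year windows that show no warming.

import numpy as np

rng = np.random.default_rng(2)
years, trend = 100, 0.02                 # 0.02 K/yr underlying warming (invented)
noise = np.zeros(years)
for t in range(1, years):                # AR(1) noise; rho and sigma are invented
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0.0, 0.1)
temp = trend * np.arange(years) + noise

flat = 0
for start in range(years - 10):
    # least-squares slope over each 10-year window
    slope = np.polyfit(np.arange(10), temp[start:start + 10], 1)[0]
    flat += slope <= 0
print(f"{flat} of {years - 10} ten-year windows show zero or negative trend")

Even with a steady underlying trend, a nontrivial fraction of the decade-long windows comes out flat or cooling, which is the general point at issue.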

Martin Lewitt
December 29, 2009 12:04 am

Joel Shore,
The model statistics on variability don’t qualify as guidance on the climate’s variability. The climate has ENSO and multidecadal oscillations which the models cannot yet reproduce. This shows that the models have more problems in matching observed climate trajectories than just poor data on initial conditions. Authors and peer reviewers are rather cavalier in allowing conclusions that the model results relate to the climate: model results are simply assumed to relate to the climate. Apparently no amount of correlated error from diagnostic studies, or failures to reproduce important climate phenomena and modes, will temper their hubris. Models don’t match the climate, or each other, well enough to shed quantitative insight on a radiative imbalance this small, where we don’t even know whether the net feedback to CO2 forcing is positive or negative, and where the climate sensitivity to CO2 may be as low as 0.5 to 1.5 degrees C rather than in the models’ range of 2 to 4+ degrees C.