Throwing down the gauntlet on reproducibility in Climate Science – Forest et al. (2006)

After spending a year trying, without success, to get the data from the author, Nic Lewis has sent a letter to the editor of Geophysical Research Letters (GRL) and has written to me to ask that I bring attention to his letter published at Judith Curry’s website, and I am happy to do so.  He writes:

I would much appreciate it if you could post a link at WUWT to an article of mine (as attached) that has just been published at Climate Etc. It concerns the alteration of data used in an important climate sensitivity study, Forest 2006, with a radical effect on the resulting estimated climate sensitivity PDF.

I’m including the foreword here (bolding mine) and there is a link to the entire letter to the editor of GRL.

Questioning the Forest et al. (2006) sensitivity study

By Nicholas Lewis

Re: Data inconsistencies in Forest, Stone and Sokolov (2006) GRL paper 2005GL023977, ‘Estimated PDFs of climate system properties including natural and anthropogenic forcings’

In recent years one of the most important methods of estimating probability distributions for key properties of the climate system has been comparison of observations with multiple model simulations, run at varying settings for climate parameters.  Usually such studies are formulated in Bayesian terms and involve ‘optimal fingerprints’. In particular, equilibrium climate sensitivity (S), effective vertical deep ocean diffusivity (Kv) and total aerosol forcing (Faer) have been estimated in this way. Although such methods estimate climate system properties indirectly, the models concerned, unlike AOGCMs, have adjustable parameters controlling those properties that, at least in principle, are calibrated in terms of those properties and which enable the entire parameter space to be explored.

In the IPCC’s Fourth Assessment Report (AR4), an appendix to WGI Chapter 9, ‘Understanding and attributing climate change’[i], was devoted to these methods, which provided six of the chapter’s eight estimated probability density functions (PDFs) for S inferred from observed changes in climate. Estimates of climate properties derived from those studies have been widely cited and used as an input to other climate science work. The PDFs for S were set out in Figure 9.20 of AR4 WG1, reproduced below.

The results of Forest 2006 and its predecessor study Forest 2002 are particularly important since, unlike all other studies utilising model simulations, they were based on direct comparisons thereof with a wide range of instrumental data observations – surface, upper air and deep-ocean temperature changes – and they provided simultaneous estimates for Kv and Faer as well as S. Jointly estimating Kv and Faer together with S is important, as it avoids dependence on existing very uncertain estimates of those parameters. Reflecting their importance, the IPCC featured both Forest studies in Figure 9.20. The Forest 2006 PDF has a strong peak which is in line with the IPCC’s central estimate of S = 3, but the PDF is poorly constrained at high S.

I have been trying for over a year, without success, to obtain from Dr Forest the data used in Forest 2006. However, I have been able to obtain without any difficulty the data used in two related studies that were stated to be based on the Forest 2006 data. It appears that Dr Forest only provided pre-processed data for use in those studies, which is understandable as the raw model dataset is very large.

Unfortunately, Dr Forest reports that the raw model data is now lost. Worse, the sets of pre-processed model data that he provided for use in the two related studies, while both apparently deriving from the same set of model simulation runs, were very different. One dataset appears to correspond to what was actually used in Forest 2006, although I have only been able to approximate the Forest 2006 results using it. In the absence of computer code and related ancillary data, replication of the Forest 2006 results is problematical. However, that dataset is compatible, when using the surface, upper air and deep-ocean data in combination, with a central estimate for climate sensitivity close to S = 3, in line with the Forest 2006 results.

The other set of data, however, supports a central estimate of S = 1, with a well constrained PDF.

I have written the below letter to the editor-in-chief of the journal in which Forest 2006 was published, seeking his assistance in resolving this mystery. Until and unless Dr Forest demonstrates that the model data used in Forest 2006 was correctly processed from the raw model simulation run data, I cannot see that much confidence can be placed in the validity of the Forest 2006 results. The difficulty is that, with the raw model data lost, there is no simple way of proving which version of the processed model data, if either, is correct. However, so far as I can see, the evidence points to the CSF 2005 version of the key surface temperature model data, at least, being the correct one. If I am right, then correct processing of the data used in Forest 2006 would lead to the conclusion that equilibrium climate sensitivity (to a doubling of CO2 in the atmosphere) is close to 1°C, not 3°C, implying that likely future warming has been grossly overestimated by the IPCC.

This sad state of affairs would not have arisen if Dr Forest had been required to place all the data and computer code used for the study in a public archive at the time of publication. Imposition by journals of such a requirement, and its enforcement, is in my view an important step in restoring trust in climate science amongst people who base their beliefs on empirical, verifiable, evidence.

Nic Lewis

==============================================================

Just let me say that there’s movement afoot to address the issues brought up about reproducibility in journal publications in the last paragraph. I’ll have more on this at a future date.

Here’s the foreword and letter to the GRL editor in PDF form:  Post on Forest 2006 GRL letter final

This figure from that letter by Lewis suggests a lower climate sensitivity to a doubling of CO2 than the original:

-Anthony


82 Responses to Throwing down the gauntlet on reproducibility in Climate Science – Forest et al. (2006)

  1. Robert Brown says:

    I’ve been doing my best to help this along. This should be brought to the attention of Tom Hammond on the House Science Committee. We had a long discussion about exactly this sort of thing, and since the US government almost always pays or helps pay for the work, it isn’t crazy to insist on it.

    rgb

  2. Doug says:

    The Journal of Irreprocible Results was always one of my favorites!

  3. Bill Tuttle says:

    Does this mean GRL paper 2005GL023977 should now be considered “grey literature”?

  4. Doug says:

    Make that Irreproducible. It still exists!
    http://www.jir.com/

  5. Kaboom says:

    You can’t show the data means you don’t have a paper.

  6. Interstellar Bill says:

    As if there was such a thing as global climate sensitivity,
    calculated by a mere spatial average of local sensitivities,
    but valid for predicting ‘global average temperature’ (whatever that is)
    under future emission scenarios.
    Talk about far-fetched.
    Worse yet, they average various bogus ‘sensitivites’ to get a ‘likely’ sensitivity.
    This garbage is as totally removed from climatic reality
    as Keynsian economics is from economic reality.

  7. timetochooseagain says:

    Hehe, looks like the high sensitivity “fat tail” is a phantom.

    Keep in mind that the scariest climate scenarios are dependent on the “fat tail” for their plausibility, that is, they require a non-negligible probability of sensitivities greater than 4 K per doubling of CO2.

  8. johnmcguire says:

    I no longer bother to read or try to understand the information published in these journals, as you simply can’t believe anything they tell you anymore. I make it a point to tell this to everyone I know, and as it is known that I’m always interested in the science end of things, people I know are also doubtful of things science published in the msm. So I do get some revenge on the liars after all. But it is a sad and bad time for scientific advancement now.

  9. Kaboom

    You can’t show the data means you don’t have a paper.
    … as a last resort – the paper challenged use the inside of the toilet roll!

  10. Louis Hooffstetter says:

    “Dr Forest reports that the raw model data is now lost.”

    The dog ate my homework defense! – A classic ‘Team’ response!:
    http://rogerpielkejr.blogspot.com/2009/08/we-lost-original-data.html

    Dr. Forest is obviously a ‘Team’ player!

  11. Baa Humbug says:

    Unfortunately, Dr Forest reports that the raw model data is now lost.

    Quite so.

  12. Taphonomic says:

    “Unfortunately, Dr Forest reports that the raw model data is now lost.”

    Makes me wonder if the “dog ate my homework” excuse worked for Forest in grammar school, or if it now comes down to “…the accumulation of the raw model data is left as an exercise for the reader.”.

  13. Alex Heyworth says:

    Lost in the Forest?

  14. timetochooseagain says:

    If I understand the IPCC’s chart, the “lines” at the bottom are meant to be the PDFs collapsed into central estimates with confidence intervals. Notice how the central estimates are pretty much all to the right (higher sensitivity) than the peaks of the PDFs. It seems to me that they may have used inappropriate methods for determining central estimates, given “fat tailed” (ie skewed) distributions.

  15. Kaboom says:

    I’d feel for those who used Forest’s work as a key ingredient for their own studies as this pretty much invalidates anything they’ve come up with. But alas they’re likely to be cut from the same cloth of preconcept-dictates-research academics and thus have not added to the body of science with their papers anyway.

  16. Pamela Gray says:

    Once again I don’t get it with the lost data. I am just a podunk nothin’ in terms of research and have only a decades-old master’s level research endeavor archived at Oregon State University, plus an article on that research in a major journal. I have no Ph.D. attached to my name, just a bachelor’s and two master’s degrees, and my resume does not come with a vitae. Yet I still have my raw data. I have kept it to this day. I still have a drawing of the electrical components used to generate the stimulus I used. I still have a Polaroid of the stimulus captured on a spectrum analyser. I still have the master’s article, originally typed on a Wang computer, that was then copied into the archive volume at OSU. And I no longer practice in that field.

    Did this guy skip research 101 class?

  17. Dodgy Geezer says:

    What happens in the academic world if you accuse someone of altering data?

  18. Geoff says:

    I note in passing that Cris Forest is now a colleague of Michael Mann at Penn State. Maybe they can undertake a joint project on data management.

  19. Nic Lewis says:

    timetochooseagain:

    The filled circles on the 5-95% ranges (which aren’t true confidence intervals) in the bottom section of the IPCC figure are the medians, which as you say are to the right of the peaks of the PDFs. That is actually to be expected, because errors in estimating changes in forcings and ocean heat uptake greatly exceed errors in temperature data. But almost all the distributions are more skewed than that effect would account for: their tails are indeed too fat. In the case of Forster/Gregory 06, that is because the IPCC changed the distribution onto an incorrect basis. And the very bumpy and other strange shaped distributions are self-evidently flawed.

  20. Jason Calley says:

    Losing your data is simply the most extreme version of altering your data.

  21. RobW says:

    “Team Dictionary”

    Lost- under no circumstances release raw data to anyone not on the team. All they want to do is find something wrong with it.

    And real science takes another hit from the team.

  22. Luther Wu says:

    What, no trolls?

  23. more soylent green! says:

    Reproducibility? Every time we run the same model (ie, computer program) with the same starting points and same data, we reproduce the same results.

    /sarc

  24. Hot under the collar says:

    At least climategate didn’t show anyone losing raw data eh!

    If one of Dr Forest’s students lost their coursework data, would it be a fail?

  25. Alex says:

    It just amazes me that “scientists” don’t use any source control programs; try that in any software company nowadays and you would be viewed as a clueless n00b.

  26. tonyb says:

    Pamela Gray

    Phil Jones also seemed to have lost his data, so Forest is in good company.

    A journalist described Hansen’s office as ‘comically cluttered’ and he was concerned enough to email her saying it was much better than it used to be.
    It seems the higher up the food chain, the more haphazard the treatment of the data. Personally I’m not sure I could rely on a paper produced by someone who tries to work in a ‘comically cluttered’ office.
    tonyb

  27. G. Karst says:

    “Dr Forest reports that the raw model data is now lost.”

    Someone has been reading Climategate E-mails, as an instruction manual. GK

  28. Tom Murphy says:

    I continue to be amazed that neither the journals nor their peers mandate the release of the data used to support a climate researcher’s or group’s resulting paper. The golden rule of auditing (in any discipline) is that if you don’t write it down, it never happened. In today’s over-hyped Information Age, this presumption that the journal or peer should trust the researcher or group is antiquated to the point of naiveté.

    I’m reminded of what Stephen Jay Gould stated in his own (controversial) book “The Mismeasure of Man”: “Phony psychics like Uri Geller have had particular success in bamboozling scientists with ordinary stage magic, because only scientists are arrogant enough to think that they always observe with rigorous and objective scrutiny, and therefore could never be so fooled – while ordinary mortals know perfectly well that good performers can always find a way to trick people.” The same, I think, could be said of some climate scientists.

    They seemingly and desperately want to believe that humankind is solely responsible for this “catastrophe,” which has been predicted by the models. At day’s end, it’s really rather sad to witness such educated persons failing so publicly, while they remain “eyes wide shut” to the last.

  29. E. Z. Duzzit says:

    Data that have been lost, destroyed, secreted or otherwise made unavailable are no different from data that never existed. Conclusions based on non-existent data are useless.

  30. atheok says:

    What a surprise! How utterly original!

    Phil can’t figure out what he did with the original data.

    UVA claims they lost the emails, oops, unfortunately (for UVA) they were found. It appears that delete key doesn’t always work.

    Now the trees (data) can’t be seen and the forest got lost. One does wonder if the data ever really existed. If it did exist, one then wonders if the original data suffered from a reaction to delete key pressing or if it got lost in the recycling (strictly paper bound).

    The trouble with the data going the recycle route: just when was it electronic data, such that computer manipulation was possible? I bet those darn backup servers still have copies.

  31. RHS says:

    I don’t think the dog ate the homework, I think his virus ate the data…

  32. kakatoa says:

    I assume that Dr. Forest, et al. will be using their models to predict, make that simulate via a few scenarios, the effect of CO2 levels for AR5. I hope that more robust means of data management will be followed this time around.

    I can’t imagine Mr. Putin agreeing to modify his country’s behavior in regard to CO2 if the scientific experts in his government can’t review the details……..

  33. timetochooseagain says:

    Nic Lewis- Is there a reason to prefer the median of these distributions as a “central estimate” to the mode?

    Well, could be worse, they could have gone with the mean, which would really skew right.

  34. björn says:

    You do not lose your data, period!
    It is impossible to tell when and if or why you want to use them again!
    Besides, it feels really good to have a huge stack of data, makes you proud of your efforts.
    I would feel terrible losing all that work, even if only for sentimental reasons.

  35. Nic Lewis says:

    timetochooseagain: ‘Is there a reason to prefer the median of these distributions as a “central estimate” to the mode?’

    I suppose that the median reflects the full distribution to a greater extent than the mode does. But I’m not sure that any ‘central estimate’ is that useful with wide, skewed distributions like these. I prefer to see the full PDF. That has an added advantage: if its shape is peculiar, it warns you to regard the study involved with some suspicion.
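A quick sketch of the skew point being discussed here: for any right-skewed distribution, the median lies to the right of the mode, which is why central estimates marked as medians sit right of the PDF peaks. The lognormal parameters, random seed and bin count below are purely illustrative assumptions, not anything from Forest 2006 or the IPCC figure:

```python
# Illustrative only: draw from an assumed right-skewed (lognormal)
# distribution and compare a crude histogram mode with the sample median.
import math
import random

random.seed(1)
samples = sorted(math.exp(random.gauss(1.0, 0.4)) for _ in range(100_000))
median = samples[len(samples) // 2]

# Crude mode estimate: centre of the fullest histogram bin.
nbins = 60
lo, hi = samples[0], samples[-1]
width = (hi - lo) / nbins
counts = [0] * nbins
for x in samples:
    counts[min(int((x - lo) / width), nbins - 1)] += 1
mode = lo + (counts.index(max(counts)) + 0.5) * width

print(mode, median)  # for a right-skewed PDF the mode falls below the median
```

Under these assumptions the histogram peak lands below the median, as expected for any distribution skewed to the right.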

  36. Follow the Money says:

    Don’t be so harsh, people. They lose and forget lots of things at Penn State.

  37. So Dr Forrest faffs for a year… and after a year of faffing, admits claims he’s lost the data…

    What did he gain by his paper? Quoting in IPCC and consequent kudos…

    Dog-Ate-My-DataGate

  38. Mike Jowsey says:

    Excellent research Mr. Lewis. Clearly you have put an enormous amount of (unpaid) work into this study. We await with interest a response from the GRL editor.

  39. timetochooseagain says:

    Nic Lewis says: “I prefer to see the full PDF. That has an added advantage: if its shape is peculiar, it warns you to regard the study involved with some suspicion.”

    The shape of the distributions is not surprising, or suspicious, in and of itself, I think. Sensitivity scales with the feedback factor as 1/(1-f), so if the estimate of f is normally distributed and its mean is greater than zero, you inevitably get a fat tail in the distribution for estimated sensitivity. But even if we aren’t suspicious, we should check whether the distributions that can be derived from the data really meet the conditions that would lead to a fat tail (a mean estimated f greater than zero is, I think, the crucial condition, or close enough, though it also depends on the variance of the estimates of f), and if they don’t, the fat tail should disappear.

  40. Latimer Alder says:

    Its probably just slipped down the back of the sofa and will turn up soon. Or maybe Forest put it somewhere safe and forgot where it was.

    After all he’s not expected to be a well-organised and analytical professional scientist or anything is he? Anybody can lose the data associated with the most important paper they’ll ever write. It’s just so forgettable. And you only remember you’ve lost it when somebody asks to see it…….

  41. from Judith Curry’s comment:
    Nic Lewis’ academic background is mathematics, with a minor in physics, at Cambridge University (UK). His career has been outside academia. Two or three years ago, he returned to his original scientific and mathematical interests and, being interested in the controversy surrounding AGW, started to learn about climate science. He is co-author of the paper that rebutted Steig et al. Antarctic temperature reconstruction (Ryan O’Donnell, Nicholas Lewis, Steve McIntyre and Jeff Condon, 2011)…

    I have been discussing this issue with Nic over the past two weeks. Particularly based upon his past track record of careful investigation, I take seriously any such issue that Nic raises. Forest et al. (2006) has been an important paper, cited over 100 times and included in the IPCC AR4...

    This particular situation raises some thorny issues, that are of particular interest especially in light of the recent report on Open Science from the Royal Society:

    .. assuming for the sake of argument that there is a serious error in the paper: should a paper be withdrawn from a journal, after it has already been heavily cited?..

  42. Nic Lewis says:

    timetochooseagain: “The shape of the distributions is not surprising, or suspicious, in and of itself”

    I agree that a fat tail distribution is to be expected. I don’t regard a fat tail in itself as a peculiarity, but I do regard multiple peaks and strange bumps and shoulders in the PDF as being peculiar.

    Only one of the distributions in the IPCC figure, Gregory 02, is genuinely consistent with a normally distributed estimate of f – and the Gregory 02 PDF is missing nearly half of its probability mass, due to being cut off at f=1. The Forster/Gregory 06 PDF represents a normally distributed estimate for f, but the IPCC experts decided to multiply the resulting climate sensitivity PDF by sensitivity squared – supposedly to make it comparable to the other PDFs!
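The multiply-by-sensitivity-squared point can be sketched as a change of variables. If an estimate of f is normal and S = S0/(1-f), the implied PDF for S carries a Jacobian factor S0/S²; multiplying that PDF by S² cancels the Jacobian and pushes probability mass toward high S. The numbers below are illustrative assumptions only, not values from Forster/Gregory 06 or the IPCC report:

```python
# Illustrative sketch: compare the tail weight of the PDF for S implied
# by a normal estimate of f with the same PDF multiplied by S**2.
import math

S0 = 1.2                 # assumed no-feedback sensitivity (K), illustrative
mu, sigma = 0.5, 0.15    # assumed normal estimate of the feedback factor f

def pdf_f(f):
    """Normal density of the feedback-factor estimate."""
    return math.exp(-0.5 * ((f - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pdf_S(S):
    """PDF of S = S0/(1-f) implied by the normal estimate of f (Jacobian included)."""
    return pdf_f(1 - S0 / S) * S0 / S ** 2

grid = [1.0 + 0.01 * i for i in range(1, 1900)]   # S from ~1 to ~20 K
orig = [pdf_S(S) for S in grid]                   # Jacobian-consistent PDF
mult = [p * S ** 2 for p, S in zip(orig, grid)]   # PDF multiplied by S**2

def tail_weight(w):
    """Fraction of (renormalised) probability mass above 4.5 K on the grid."""
    return sum(x for S, x in zip(grid, w) if S > 4.5) / sum(w)

t_orig, t_mult = tail_weight(orig), tail_weight(mult)
print(t_orig, t_mult)  # the multiplied PDF puts more weight on high S
```

With these assumed numbers, the S²-multiplied PDF carries noticeably more mass above 4.5 K than the Jacobian-consistent one, illustrating how such a rescaling fattens the high-sensitivity tail.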

  43. Berényi Péter says:

    “Unfortunately, Dr Forest reports that the raw model data is now lost.”

    Unfortunate indeed. For Dr. Forest the honest course of action to follow at this point is

    1. withdraw the paper from GRL immediately, as results described in it are irreproducible
    2. remove all references to it from the IPCC AR4 report retroactively
    3. have all other researchers withdraw their papers, who have relied on it
    4. pay back all grant money gained for this and subsequent research
    5. serve proper jail term for animal abuse, letting the dog eat raw data instead of cooked ones

  44. Stacey says:

    Is it worth trying his co-authors, Messrs Stone and Sokolov? Surely they must have a copy of the data?

  45. HankH says:

    I work with volumes of clinical research data all the time. In the course of a research project I might have several subsets of the original data as queries of the original data produce output that looks at how the experimental variable(s) affect different stratifications of the sample group. Each dataset must be properly validated, versioned, systematically stored according to “best practices.” Further, all data is mirrored and stored in two data centers and warehoused with a data vault company. Such data is considered so precious that such controls are an absolute requirement.

    Anyone who outright loses the original data has such bad organization and such a lack of controls in place that any results of their work must be called into question. I continue to be astounded at the shoddy research practices of these climatologists, and even more astounded that their work is not thrown in the waste bin by the publishing journal when such gross negligence is discovered.

  46. Manfred says:

    Small planet

    Dr. Forest is now with the Department of Meteorology at the Pennsylvania State University.

    Before that, he was with MIT, his thesis advisors were Kerry A. Emanuel and Peter Molnar.

    http://ploneprod.met.psu.edu/people/cef13/

  47. Nic Lewis says:

    Stacey: “Is it worth trying his co authors Messrs Stone and Sokolov surely they must have a copy of the data?”

    I have tried. I understand Dr Stone was seriously ill when I emailed last year, so I have let him be, poor chap.

    I have failed to obtain any response from Dr Sokolov, who is the expert on the MIT 2D climate model. Maybe he thinks that it is entirely Dr Forest’s responsibility to respond, or perhaps he doesn’t like a non-academic poking his nose in.

  48. timetochooseagain says:

    Nic Lewis – Yes, you are correct; I should have said I don’t regard the fat tail itself as suspicious, but like you I do find the odd shoulders or extra local maxima (secondary “modes”) curious and suspicious. In this regard the worst offender appears to be “Knutti 02”, which gives the most outrageous estimate for sensitivity of all of them, surely!

  49. Hot under the collar says:

    @Stacey says,
    I suspect the dog’s paw accidentally hit the delete button on the co-author’s computer.

  50. Gail Combs says:

    Pamela Gray says:
    June 25, 2012 at 10:40 am

    Once again I don’t get it with the lost data…..
    ___________________________
    I am with Pamela on this. I did not even do a Master’s thesis only a couple of Senior Year Topics papers, one in geology and one in Chemistry. I still have all the information and even gave a copy of the Geo research to a geologist who asked me for the information at a National Speleological Convention of all places.

    If you can not come up with the raw data and have it validated and verified by others, it is not science, PERIOD! Think cold fusion, and the more recent “CERN finds faster-than-light particles?” and the update “CERN researchers find flaw in faster-than-light measurement”.

  51. Jean Parisot says:

    I wonder if it would be reasonable to try to reproduce the top 15 climate papers, in terms of cites?

  52. Mariana Britez says:

    surely this should be a sticky top post?

  53. Nic Lewis says:

    timetochooseagain: ‘ the worst offender appears to be “Knutti 02″ ‘
    Indeed – not a very useful estimate of sensitivity IMO, since it reflects an unweighted average of 5 different ocean models, and uses a weak yes/no rejection test for the simulation-observation mismatch. The Knutti paper didn’t actually include either a sensitivity PDF or any distribution from which one could be calculated, except for a 5 bar histogram (2 K per bar) in the SI. I presume that the IPCC ran this through some sort of smoothing algorithm.

  54. Jimbo says:

    Silly old me, thinking science was about reproducibility from the data. Now if you can’t get the data, then it’s not science, in my opinion.

  55. Gail Combs says:

    Here is the great irony of the “Lost Data”/record-keeping mystery.

    It seems that while CRU/IPCC scientists can get away with “the dog ate my homework”, the United Nations (OIE) 2005 Draft Guide To Good Farming Practices wanted to hold farmers around the world to much higher standards.

    Record keeping
    [from Section a) buildings and other facilities: surroundings and environmental control) – so as to make access difficult for unauthorised persons or vehicles (barriers, fences, signs)]

    Keep a record of all persons entering the farm: visitors, service staff and farm professionals (veterinarian, milk tester, inseminator, feed deliverer, carcass disposal agent, etc.)

    keep the medical certificates of persons working in contact with animals and any document certifying their qualifications and training

    keep, for each animal or group of animals, all documents relating to the treatment and veterinary actions

    keep all laboratory reports, including bacteriological tests and sensitivity tests (data to be placed at the disposal of the veterinarian responsible for treating the animals)

    keep all documents proving that the bacteriological and physico-chemical quality of the water given to the animals is regularly tested

    keep all records of all feed manufacture procedures and manufacturing records for each batch of feed

    keep detailed records of any application of chemical products to fields, pastures and grain silos, as well as the dates that animals are put out to grass and on which plots of land

    keep all the records relating to the cleaning and disinfection procedures used in the farm (including data sheets for each detergent or disinfectant used) as well as all the records showing that these procedures have effectively been implemented (job sheets, self-inspection checks on the effectiveness of the operations) and animal products

    keep documents relating to the pest control plan (including the data sheets for each raticide and insecticide used) as well as all the records showing that the control plan has effectively been implemented (plan showing the location of baits and insecticide diffusers, self-inspection checks on the effectiveness of the plan)

    keep all the documents relating to self-inspections (by the livestock producer) and controls (by the authorities and other official bodies) relating to the proper management of the farm and the sanitary and hygienic quality of the animal products leaving it

    keep all documents sent by the official inspection services (distributors or the quality control departments of food-processing firms) relating to anomalies detected at the abattoir, dairy, processing plant or during the distribution of products (meat, eggs, milk, fish, etc.) derived from the farm’s animals

    ensure that all these documents are kept long enough to enable any subsequent investigations to be carried out to determine whether contamination of food products detected at the secondary production or distribution stage was due to a dysfunction at the primary production level

    place all these documents and records at the disposal of the competent authority (Veterinary Services) when it conducts farm visits.
    Source: http://www.oie.int/eng/publicat/rt/2502/review25-2BR/25-berlingueri823-836.pdf [from 2009 may be dead or have changed greatly]
    Gissela saved a copy (pdf). The link is at http://xstatic99645.tripod.com/naisinfocentral/id121.html

    If the UN expects FARMERS to keep and have available those type of very specific records, how come PhD scientists are not held to the same standard as a lowly farmer? Especially when the scientist will have a much larger effect than any one farmer – Double Standards anyone?

    In light of the above UN document, the “dog ate my homework” excuse can not be justified. The IPCC should be throwing out ALL scientific work that is not as precisely documented as the UN was demanding of farmers who are NOT even receiving public research grants to cover the costs.

  56. Manfred says:

    Nic Lewis says:
    June 25, 2012 at 3:24 pm
    timetochooseagain: ‘ the worst offender appears to be “Knutti 02″ ‘
    Indeed – not a very useful estimate of sensitivity IMO, since it reflects an unweighted average of 5 different ocean models, and uses a weak yes/no rejection test for the simulation-observation mismatch.

    ———————————————————-

    Such papers obviously do not harm an author’s career in this branch of science. Co-author Thomas A Stocker is now co-chair of the coming IPCC report.

  57. Jimmy Haigh says:

    Lost the data? Good enough for “climate science”.

  58. Alex Heyworth says:

    Don’t blame Dr Forest. I’m sure he had funds set aside to pay a grad student to do some filing, but his boss insisted that the money be spent instead on a junket to an exotic location a well earned holiday a boring conference in some godforsaken third world hellhole.

  59. ferd berple says:

    Unfortunately, Dr Forest reports that the raw model data is now lost.
    ===================
    Can’t prove scientific fraud if the data is lost. It simply means that Forest et al. (2006) is scientific nonsense. It cannot be reproduced, even by the author. It has the same value as used toilet paper. It is paper, but it is covered in crap.

  60. jorgekafkazar says:

    No replicability ≡ No science.
    The paper must be withdrawn, invalidating all references to it.

  61. theduke says:

    Why should Dr. Forest make his data available to Nic when Nic’s aim is to find something wrong with it? /sarc

  62. ferd berple says:

    How to make Money in Climate Science

    1. find a major unanswered question.
    2. question top scientists to see what they will accept as an answer
    3. cherry pick data and methods to arrive at that answer
    4. re-label this technique “training” – it makes it sound intelligent.
    5. publish the result.

    The results will seem correct to fellow scientists, especially those at the top, so they won’t bother to check the math. Everyone will be impressed that you have answered the hard question. More so because you will have proven their best guess correct and made them look good in the process. You will advance in your career in science. Fame and fortune will follow.

    If anyone does question the results:

    6. lose the data and methods

  63. Geoff Sherrington says:

    Lucy Skywalker says (June 25, 2012 at 1:33 pm), quoting from the Royal Society: “should a paper be withdrawn from a journal, after it has already been heavily cited?..”

    Lucy, should a mineral discovery be withdrawn from a prospectus after it has been found that the assays were badly wrong, or lost, or both? People can go to prison for this.

    Is it not self-evident that there should be an active, continuing process of demotion into separate storage of unacceptable papers, and correction of acceptable papers when, for example, better data become available, like new temperature series used in proxy calibration? Although Science is seldom expressed in absolutes, should not there be a classification like G for general exhibition, M+ for mature audiences, R for dubious and X for fail? Why, there could even be a ratings journal that logs papers and shows events that overtake them.

    Have addendum, corrigendum, erratum, etc. been wiped from the repertoires of authors?

    There is evidence of a quite bad structural/organisational failure in the halls of academia when the emphasis is on floating new ideas, with an almost 100% failure to wipe the slate clean of the baddies. As noted ad nauseam, much of this starts with the gross and frequent failure of peer review.

  64. mikemUK says:

    The paper in question, I believe, is Forest, Stone and Sokolov (2006)
    “Estimated PDFs of climate system properties including natural and anthropogenic forcings”

    Yesterday, out of interest, I googled ‘Dr C E Forest PSU’ to find out more about the man.

    Unless I am mistaken, no such paper now exists in his bio of ‘Selected Publications’, which, bearing in mind its obvious importance, seems a little odd to say the least.

  65. Jessie says:

    Nic Lewis,
    Congratulations on your endeavour and clearly sustained work focus. Thank you also for the article and subsequent explanations. Most interesting, I am still working my way through reading this post.

    However, I still have a problem understanding the term (and focus on) sensitivity. I was taught about sensitivity and specificity, and Type I & II errors.
    I do not understand why there continues to be a focus on sensitivity when, to my understanding, the question should be about specificity: an incorrectly formulated hypothesis.

    Would you have an explanation to this question please?

  66. Peter Lang says:

    Nic Lewis,

    Thank you for responding here. While most commenters are most concerned about the fact that the data is “lost” and the code withheld, I am more interested in the policy implications of a possible reduction in the climate sensitivity (central estimate and ‘fat tail’). Therefore, could you please advise what has happened in response to your finding, reported on Judith Curry’s site in July 2011, that the IPCC AR4 had replotted (apparently incorrectly, and without fully explaining why) the climate sensitivity results of Forster and Gregory (2006)?

    For the benefit of other readers I’ll provide some more background, links and more questions below.

    In July 2011, you suggested that IPCC had wrongly replotted the Forster/Gregory06 paper http://judithcurry.com/2011/07/05/the-ipccs-alteration-of-forster-gregorys-model-independent-climate-sensitivity-results/ . IPCC’s replot moved the median climate sensitivity from about 1.6C to 2.3C and gave a much fatter tail.

    The IPCC curve is skewed substantially to higher climate sensitivities and has a much fatter tail than the original results curve. … and the central (median) estimate is increased from 1.6°C to 2.3°C.

    In a comment upthread (@ June 25, 2012 at 10:56 am) you said:

    In the case of Forster/Gregory 06, that is because the IPCC changed the distribution onto an incorrect basis. And the very bumpy and other strange shaped distributions are self-evidently flawed.

    Is that generally accepted? (This is an innocent question; I have no idea what has happened since with that.) Does your comment mean that it is now generally accepted that the IPCC replot of Forster/Gregory06 was wrong? Does this mean that the now accepted interpretation of that paper is that it suggests the median climate sensitivity is 1.6C? What have the authors, the IPCC and the journal that published the paper said and done?

    Could you please elaborate (for a non specialist) on the significance of this.

  67. Andrew says:

    Hm, correct me if I am wrong, but doesn’t much of the work that finds that eventually there could be net economic harm from AGW depend crucially on the PDF of climate sensitivity, and require the fat tails for its conclusions? Such studies already find that the “ideal” policy is very close to doing nothing, even under the assumption of many negative “impacts” which, frankly, do not exist. So without the fat tail, Nordhaus can kiss his Carbon Tax (already small) goodbye, no?
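    [Andrew’s point about fat tails can be illustrated with a toy Monte Carlo sketch. All numbers below are made up for illustration: two sensitivity PDFs with the same median but different upper-tail weight, pushed through an assumed quadratic damage function. The heavier tail dominates the expected damage even though the “most likely” sensitivity is unchanged.]

```python
import math
import random

random.seed(0)

# Assumed, purely illustrative damage function: damage fraction = 0.002 * S^2
def damage(s):
    return 0.002 * s * s

def expected_damage(draws):
    return sum(damage(s) for s in draws) / len(draws)

n = 100_000
median = 2.3  # both PDFs share this median sensitivity (deg C)

# Thin-tailed sensitivity PDF: lognormal with a small spread.
thin = [random.lognormvariate(math.log(median), 0.2) for _ in range(n)]

# Fat-tailed sensitivity PDF: same median, much heavier upper tail.
fat = [random.lognormvariate(math.log(median), 0.7) for _ in range(n)]

print(f"thin-tail expected damage: {expected_damage(thin):.4f}")
print(f"fat-tail  expected damage: {expected_damage(fat):.4f}")
```

    [With these assumed parameters the fat-tailed PDF more than doubles the expected damage, despite the identical median; that is the sense in which the cost–benefit case hinges on the tail rather than the central estimate.]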

  68. Leo Morgan says:

    I hate acknowledging my ignorance, but honesty compels me to.
    I can’t make head nor tail of these graphs.

    What are the abbreviations, why do they have the axes they do, what do you think they prove and why do you think so?
    I’ve an IQ that’s acceptable, a physics and chemistry education to year 10, Advanced Math for year 11, some statistics at tertiary level, a lifetime science enthusiasm including most of Sagan, Asimov, Gould et al’s popularised science- in short, I’m vastly better educated than the average member of the population(*)- and I still have no clear conception of what these graphs are supposed to prove or why you think they do so. My point being that if I can’t follow them, neither can many others of your readers. Can the OP or commentators please help me?

    (*) I said I was honest, not humble ;)

  69. Gail Combs says:

    Geoff Sherrington says:
    June 26, 2012 at 4:16 am
    …..Is it not self-evident that there should be an active, continuing process of demotion into separate storage of unacceptable papers and correction of acceptable papers when, for example, better data become available, like new temperature series used in proxy calibration? Although Science is seldom expressed in absolutes, should not there be a classification like G for general exhibition, M+ for mature audiences, R for dubious and X for fail? Why, there could even be a ratings journal that logs papers and shows events that overtake them…..
    _______________________________________
    Yes, it is self-evident, and it has already started with Retraction Watch

    The latest headline is “Following investigation, Erasmus social psychology professor retracts two studies, resigns”

    The social psychology community, already rocked last year by the Diederik Stapel scandal, now has another set of allegations to dissect. Dirk Smeesters, a professor of consumer behavior and society at the Rotterdam School of Management, part of Erasmus University, has resigned amid serious questions about his work.

    According to an Erasmus press release, a scientific integrity committee found that the results in two of Smeesters’ papers were statistically highly unlikely. Smeesters could not produce the raw data behind the findings, and told the committee that he cherry-picked the data to produce a statistically significant result. Those two papers are being retracted, and the university accepted Smeesters’ resignation on June 21…. http://retractionwatch.wordpress.com/2012/06/25/following-investigation-erasmus-social-psychology-professor-retracts-two-studies-resigns/#more-8354

    WOW, could not produce the raw data… cherry-picked the data…. statistically highly unlikely…. GEE where have we heard that before?

    Other branches of science are starting to clean up their act, isn’t it about time Climatology was held to the same classic science standard?

  70. ferd berple says:

    Kaboom says:
    June 25, 2012 at 10:36 am
    I’d feel for those who used Forest’s work as a key ingredient for their own studies as this pretty much invalidates anything they’ve come up with. But alas they’re likely to be cut from the same cloth of preconcept-dictates-research academics and thus have not added to the body of science with their papers anyway.
    =======================
    The last thing in the world that the 100+ scientists who have cited Forest et al. will want is for the paper to be withdrawn. This would call into question their own papers. They have a very strong vested interest in supporting Forest et al., regardless of quality.

  71. Mr Lynn says:

    Robert Brown says:
    June 25, 2012 at 8:39 am
    I’ve been doing my best to help this along. This should be brought to the attention of Tom Hammond on the House Science Committee. We had a long discussion about exactly this sort of thing, and since the US government almost always pays or helps pay for the work, it isn’t crazy to insist on it.

    Would it not also make sense to file a FOIA request for the raw data, etc. to the granting agencies and academic institutions involved?

    /Mr Lynn

  72. Peter Lang says:

    Andrew,

    So without the fat tail, Nordhaus can kiss his Carbon Tax (already small) goodbye, no?

    Yes. Nordhaus’s conclusions are headed “Not so dismal conclusions” (no evidence of strong tail dominance).

    Nordhaus (2012), “Economic policy in the face of severe tail events”
    http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9779.2011.01544.x/full

  73. Nic Lewis says:

    Peter Lang
    I have answered your question, and supplementary question, on the thread at Climate Etc

  74. Nic Lewis says:

    Leo Morgan
    May I suggest that you read Chapter 9 of the IPCC AR4 WG1 report, from which the first graph comes, and the WG1 Glossary? Available at http://www.ipcc.ch/publications_and_data/ar4/wg1/en/contents.html

    Roughly speaking, climate sensitivity is defined as the rise in global temperature resulting from a doubling of the carbon dioxide concentration in the atmosphere, after the climate system reaches a new equilibrium, with other external factors held constant.

  75. Resourceguy says:

    Remember that the leaps in theoretical physics by Einstein were accompanied quite passionately by the call to verify them with empirical testing. Today the method is hide, profit, and move on before anyone bothers to check. But then who would want to check if the consequences amount to professional tire slashings from the new blacklist enforcers of the climate change ecosystem?

  76. Peter Lang says:

    Nic Lewis,

    Thank you for your replies to my question. In case others are interested, Nic’s replies to my initial question and follow up question are here:
    http://judithcurry.com/2012/06/25/questioning-the-forest-et-al-2006-sensitivity-study/#comment-212919

  77. timetochooseagain says:

    Peter Lang-Thank you for linking to those comments! I especially liked this:

    “Note that, as I understand it, many members of the ‘subjective Bayesian’ school of statisticians would think it OK to use whatever prior they thought fit, notwithstanding that it did not result in objective probabilistic inference. IMO, and I hope in that of the vast majority of scientists, such an approach has no place in science.”
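    [The point Nic is making can be seen numerically. Here is a minimal sketch, with entirely assumed numbers, of how the prior choice shifts a sensitivity posterior: a Gaussian likelihood is assumed for the feedback parameter lam = F2x/S (roughly what energy-budget constraints deliver), and the posterior median for S is computed under a prior uniform in S versus a prior uniform in lam. The uniform-in-S prior fattens the upper tail and raises the median.]

```python
import math

F2X = 3.7                     # assumed forcing for doubled CO2, W/m^2
LAM_HAT, LAM_SD = 1.6, 0.5    # assumed feedback estimate and its sd, W/m^2/K

def likelihood(s):
    # Gaussian likelihood in the feedback parameter lam = F2X / S
    lam = F2X / s
    return math.exp(-0.5 * ((lam - LAM_HAT) / LAM_SD) ** 2)

grid = [0.1 + 0.01 * i for i in range(2000)]  # S from 0.1 to ~20 K

def posterior_median(prior):
    # Grid-based posterior median of S for a given prior density on S
    w = [likelihood(s) * prior(s) for s in grid]
    total, acc = sum(w), 0.0
    for s, wi in zip(grid, w):
        acc += wi
        if acc >= total / 2:
            return s

m_uniform_s = posterior_median(lambda s: 1.0)           # uniform in S
m_uniform_lam = posterior_median(lambda s: 1.0 / s**2)  # uniform in lam

print(f"median, uniform-in-S prior:   {m_uniform_s:.2f} K")
print(f"median, uniform-in-lam prior: {m_uniform_lam:.2f} K")
```

    [Same data, same likelihood; only the prior differs, yet the median shifts by the better part of a degree. That is the mechanism behind disputes over “objective” versus “subjective” priors in sensitivity studies.]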

  78. Peter Lang says:

    This discussion is fascinating. It seems the estimates of climate sensitivity may be coming down (but that may be my bias).

    Follow the comments from Professor Forest’s first comment here:
    http://judithcurry.com/2012/06/25/questioning-the-forest-et-al-2006-sensitivity-study/#comment-212944
    He has said he will get back later with more responses to Nic’s follow up questions and others.

    Here is another interesting comment:
    http://judithcurry.com/2012/06/25/questioning-the-forest-et-al-2006-sensitivity-study/#comment-212952
    This gives information on other work that suggests climate sensitivity may be significantly lower than the IPCC AR4 consensus estimate.

    And read my question to Nic Lewis and his response to this and a follow up question starting here:
    http://judithcurry.com/2012/06/25/questioning-the-forest-et-al-2006-sensitivity-study/#comment-212911
    This is about what progress has been made regarding his paper of a year ago, which pointed out that the IPCC had replotted the Forster and Gregory (2006) results to make the climate sensitivity much higher and the tail of high consequence much thicker (see figure 4 here: http://judithcurry.com/2011/07/05/the-ipccs-alteration-of-forster-gregorys-model-independent-climate-sensitivity-results/ ).

    If we cut to the basics, these are the parameters that are really important for estimating the consequences of man’s GHG emissions, and therefore for informing optimal policy:

    • What the climate will do in the absence of man-made GHG emissions (it will cool, as we are past the interglacial maximum and on the cooling part of the glacial–interglacial cycle)
    • Climate sensitivity
    • Damage function (damage costs per degree of climate change, up and down)
    • Rate at which we will convert to low-emissions energy in the absence of high-cost mitigation policies

    It seems to me there is strong and growing evidence that the damages are not potentially catastrophic, not dangerous, and not high cost.

    Therefore, adaptation is the best strategy, IMO.

  79. Spector says:

    Based on the MODTRAN utility provided by the University of Chicago, the *raw* sensitivity (no feedback) for CO2 is about 0.9 deg K per doubling in clear tropical air with a nominal energy flow of 292.993 W/m² (picked from one of the standard program output values) at current CO2 concentrations. I understand MODTRAN to be a program developed by the Air Force for instrument calibration; it is a computer calculation based on the measured line-by-line absorbance parameters of the gases in the atmosphere. The surface temperatures were found by a hunt-and-pick process for each CO2 level to achieve the standard energy flow number at 70 km up.

    Ref: http://forecast.uchicago.edu/Projects/modtran.html

    I understand that this web-tool is hosted courtesy of Dr. David Archer, a non-skeptic.

    Ref: http://forecast.uchicago.edu/Projects/modtran.doc.html

    Another example of the minimal forcing change with a doubling (300:600 PPM) of CO2 is provided by a plot from the Wikipedia article on “Radiative Forcing.” The blue curve for 600 PPM CO2 almost completely covers the green curve for 300 PPM.

    Ref: http://en.wikipedia.org/wiki/File:ModtranRadiativeForcingDoubleCO2.png

    It is my understanding that the higher values posited by the IPCC are based on an assumed, dangerously high, positive feedback factor. Although interactions with the water-vapor absorption spectra are sometimes said to be the cause of this, I see no water-vapor holes in the forcing spectrum. Perhaps convection makes water vapor a leaky greenhouse gas.

    It is interesting to note that a MODTRAN analysis of radiative transfer by altitude seems to indicate that most of the energy leaving the Earth is actually radiated directly from the atmosphere–specifically the troposphere. Most of the energy radiated directly from the surface (396 W/m²) is returned by back-radiation (333 W/m².) That is clearly shown in the standard IPCC heat transfer diagram.

    Ref: http://climateknowledge.org/figures/WuGblog_figures/RBRWuG0086_Trenberth_Radiative_Balance_BAMS_2008.GIF
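    [Spector’s ~0.9 K no-feedback figure can be roughly cross-checked with a Stefan–Boltzmann back-of-envelope calculation. The forcing formula and the 255 K effective radiating temperature below are standard textbook assumptions, not outputs of MODTRAN itself.]

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
T_EFF = 255.0      # assumed effective radiating temperature, K

# Assumed Myhre et al.-style forcing for a CO2 doubling
dF = 5.35 * math.log(2.0)            # ~3.7 W/m^2 per doubling

# Planck (no-feedback) response: derivative of sigma*T^4 at T_EFF
planck = 4 * SIGMA * T_EFF ** 3      # ~3.8 W/m^2 per K

dT = dF / planck                     # no-feedback warming per doubling
print(f"forcing per doubling: {dF:.2f} W/m^2")
print(f"Planck response:      {planck:.2f} W/m^2/K")
print(f"no-feedback warming:  {dT:.2f} K")
```

    [This crude estimate lands near 1 K per doubling, in the same ballpark as the MODTRAN-derived 0.9 K; everything above that in the IPCC range comes from the assumed feedbacks.]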

  80. Andrew says:

    Spector says: “most of the energy leaving the Earth is actually radiated directly from the atmosphere–specifically the troposphere.”

    Yes. I and another blogger:

    http://troyca.wordpress.com/

    Had a paper written up for GRL, I think it was, that made the point, partially based on this fact, that the TOA flux changes are poorly correlated with the surface temperature variations, in part because the bulk of the radiated energy comes from the atmosphere and is thus determined by atmospheric temperatures. Also, clouds don’t magically know what the sea surface is doing instantaneously; they react to the temperatures of their ambient environment, which lags the sea surface temperatures, in terms of anomalies, significantly. We found that if you use atmospheric temps (say UAH or RSS LT) the strength of the correlations improves and you should get a better estimate of climate feedback, and lower sensitivity, as it happened. Sadly the journal rejected our paper, even though our sensitivity estimate was not that low. I thought it was biased high, personally.

  81. Brian H says:

    HankH says:
    June 25, 2012 at 2:26 pm

    Further, all data is mirrored and stored in two data centers and warehoused with a data vault company. Such data is considered so precious that such controls are an absolute requirement.

    Anyone who outright loses the original data has such bad organization and lack of controls in place that any results of their work must be called into question. I continue to be astounded at the shoddy research practices of these climatologists, and even more astounded that their work is not thrown in the waste bin by the publishing journal when such gross negligence is discovered.

    The pattern of brazen disregard for standards and even strict legal requirements seems to be a core strategy and tactic of the B3 crowd (B.S. Baffles Brains.) See the US Administration for a close parallel.

Comments are closed.