Quote of the week #16

It has been a couple of weeks since I posted a QOTW, mainly because I’ve been somewhat disengaged from the normal blogging pace while traveling and working on my upcoming papers. – Anthony

[Image: qotw_cropped]

“If Michael Mann did not exist, the skeptics would have to invent him.”

This is from Roger Pielke Jr’s post on Mann’s new paper on hurricane frequency over the last 1000 years determined by proxy study of “overwash” of sand/silt deposits.

Pielke Jr. says (and have a look at the graph afterward):

I still find this hard to believe. Is it possible that Mann has mislabeled his data files such that the smoothed data appears in the annual predictions column in his data file, rather than the raw counts? I find it hard to believe that it is otherwise the case.


I was curious how the curve shown in Mann et al. discussed earlier today would look using adjusted data, and thanks to Michael Mann the data is up online, allowing a comparison with data adjusted according to work in 2007 by Landsea (i.e., it doesn’t include the analysis from Landsea et al. released this week).

I graphed (above) the adjusted data (red curve) along with Mann et al.’s “predicted” historical data (blue curve, based on the Landsea data) both unsmoothed, just to see what it looks like — using information from these files at Mann’s directory:

Statistical Model Predictions of TC Past Activity
Alternative Case (uses Landsea ‘07 adjustment of historical TC
series) (AD 500-1850)
http://holocene.meteo.psu.edu/~mann/Nature09/TCStatModelReconLandsea.dat

Historical Tropical Cyclone (TC) counts
Alternative case (Landsea ’07 adjustments) (1870-2006)
http://holocene.meteo.psu.edu/~mann/Nature09/TCLandsea.dat

I now see why Mann claims that the Landsea adjustment does not matter. And he is right, it does not matter.

The Mann et al. historical predictions range from a minimum of 9 to a maximum of 14 storms in any given year (rounding to the nearest integer), with an average of 11.6 storms and a standard deviation of 1.0 storms (!). The Landsea observational record has a minimum of 4 storms and a maximum of 28, with an average of 11.7 and a standard deviation of 3.75. I suspected that a random number generator for hurricane counts since 1870 would produce the same bottom-line results, and when I appended a series of random numbers constrained between 9 and 14 from 1870-2006 to the “predicted” values, lo and behold — 20th century values exceed every other point except about 1,000 years ago.

Mann et al.’s bottom-line results say nothing about climate or hurricanes, only about what happens when you connect two time series with dramatically different statistical properties. If Michael Mann did not exist, the skeptics would have to invent him.
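Pielke’s random-number check is easy to reproduce in spirit. Here is a minimal sketch using synthetic stand-ins for the two series (the exact values live in Mann’s data files; the uniform and Poisson draws below are assumptions chosen only to match the summary statistics he quotes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins, NOT Mann's actual files:
# the reconstruction stays between 9 and 14 storms/yr (sd ~1.0-1.7),
# while the Landsea-adjusted observations range from 4 to 28 (sd ~3.75).
recon = rng.integers(9, 15, size=1351)    # "predicted" counts, AD 500-1850
observed = rng.poisson(11.7, size=137)    # "observed" counts, 1870-2006

print(f"reconstruction: mean {recon.mean():.1f}, sd {recon.std():.2f}")
print(f"observations:   mean {observed.mean():.1f}, sd {observed.std():.2f}")

# Splicing the wide-variance modern record onto the narrow-variance
# reconstruction makes recent extremes tower over the flat early series,
# regardless of any climate signal.
spliced = np.concatenate([recon, observed])
print("modern max exceeds reconstruction max:", observed.max() > recon.max())
```

The point of the sketch is that the "modern values exceed the past" result falls out of the mismatch in variance alone, with no hurricane data involved.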

50 thoughts on “Quote of the week #16”

  1. (1) Mann admits that he “is not a statistician,” and yet he keeps trying to play one. *sigh*
    (2) What is it with Mann having extreme trouble with the accuracy of his data files, in the rare instances where he actually does archive them? As Pielke wonders, is it “possible that Mann has mislabeled his data files such that the smoothed data appears in the annual predictions column in his data file, rather than the raw counts”?

  2. If Michael Mann didn’t get so much coverage, he would have to kidnap a bunch of actor-looking journalists and lock them up in a very hot steam room until someone took notice of him.

    But that’s another reality.

  3. I am under the impression that the current generation of climatologists were programming games on their C-64 when they were kids, and have not changed a bit since.

  4. So in Mann’s world, every year in the past was very much alike and everything was good. No ups, no downs.

    It sounds a lot like the plot of the movie “Pleasantville” where every day is the same and nothing bad ever happens.

  5. Mann is a typical example of a scientist who has sold out his scientific integrity to serve a political cause.

  6. Mann might wish to be on the same level as Michael Moore.
    If wishes were dishes, they’d all be broken, like GCMs breaking all climate records.

  7. Have any of these Climate Scientologists done even a single introductory course in Statistics? Or even a course in “use your bloody head”? Again and again they produce stuff that would not be acceptable in a final-year undergraduate research project.

  8. “Alexej Buergin (11:30:22) :
    I am under the impression that the current generation of climatologists were programming games on their C-64 when they were kids, and have not changed a bit since.”

    C64 games were good, so things have changed.

  9. That’s a hairy blade on that Hockey Stick! Are they sure it’s Mann’s and not Gavin’s? He’s much less clean shaven. ;)

  10. RunFromMadness (12:58:53) :

    “Alexej Buergin (11:30:22) :
    I am under the impression that the current generation of climatologists were programming games on their C-64 when they were kids, and have not changed a bit since.”

    C64 games were good, so things have changed.

    You mean Atari? :)

    Seriously, Mann is using a proper proxy; it is his work that raises doubts. Overwash sand and silt layers are suitable proxies for determining the frequency of hurricanes striking the coastline in the past. Nevertheless, the researcher must be extremely careful when sampling by borehole:

    We could not obtain a good record from boreholes taken at only four locations.

    Oblique borehole samples are defective because we could miss one or more overwash silt or sand layers, or take one layer as two different layers. The sampling must be entirely vertical.

    We must take at least 100 samples from a single location, and from sectors separated by 100 m^2 parcels when possible. If not, the samples must be taken from areas separated by at least 10 m^2 parcels. Otherwise, the samples would be absolutely misleading.

    We have to eliminate samples obtained from locations where intensive human activity is registered. Like all living beings, humans change habitats, though to a far higher extent.

    For example, I found 25% of topical magnetite in sand samples from the Pacific Ocean, which is an indication of the recent formation of the uppermost layer and that it is sand dragged by glaciers.

  11. I am still in utter disbelief that Mann’s paper was published with exactly zero SST data on file at the time.

    Unpossible….

  12. I forgot to say two things:

    1. The hurricanes must have been powerful enough to produce a high surge that could drag sand and deposit it landward, above the shoreline.

    2. Sometimes hurricane surges destabilize dunes above the shoreline; thus the effect could be the opposite of what is expected.

  13. From the viewpoint of competence, statistically, how probable is this “model”? Well outside 95% CI?

  14. Ron de Haan (12:26:29) :

    Mann is a typical example of scientist who has sold out his scientific integrity to serve a political cause

    I wouldn’t go that far. I suspect he started off genuinely believing his work, and has so much personal reputation and ego attached to it that he has now sold out. Not for a political cause, but to his own ego!

  15. I think nobody noticed it, but I made a mistake when I typeset this paragraph:

    “For example, I found 25% of topical magnetite in sand samples.”

    Heh! It should have been:

    “For example, I found 2.5% of topical magnetite in sand samples.”

    Sorry…

  16. if you google michael mann, the first result is:

    HEAT

    a Hollywood movie. That’s funny, isn’t it?

  17. While I am all for questioning a lot of the assumptions and methodology in climate science and having a laugh at some of the circular logic, piling on the ad homs and speculating on the motives of certain climate scientists does tend to make us skeptics look a little petty, and I worry sometimes that it is indicative of the development of the dangerous groupthink mindset that blinds the pro camp.

  18. Invention is the domain of the alarmist :

    Real denial of Manmade Global Warming exists in the science, so alarmists have to invent doubt about the scientists of the denial. This is what William Connolley, and Kim Dabelstein Petersen have been doing for years, literally for years, in Wikipedia.

    references :

    http://network.nationalpost.com/np/blogs/fpcomment/archive/2008/05/03/who-is-william-connolley-solomon.aspx

    &

    http://network.nationalpost.com/np/blogs/fpcomment/archive/2008/04/12/wikipedia-s-zealots-solomon.aspx

    ——————————————

    If you just enter “global warming” in google the first result you get points to the Wikipedia entry Connolley controls – and if you just wanted a two minute briefing on the subject you’d never know that the article is utterly and relentlessly dishonest.

    http://blogs.zdnet.com/Murphy/?p=1190

  19. It may sound like bad science, but if there is a consensus then it is the good science, the correct way of doing things, and the conclusion is right. As climate science is evolving with all the consensus supporting the conclusion, the data does not matter (the reason they are brave enough to post it), the analytical procedure to handle the data does not matter, and the methodology for gathering the data does not matter (Anthony’s work on the location of the weather stations is considered unimportant). In climate science it is right to splice data from whatever source as long as it further supports the consensus (Mann is not alone). Climate science is just at the forefront of post-normal science. The scientific method of the future. 1+1=3 because there is a consensus.

  20. To be fair to all those scientists who we think have “sold their integrity” to the highest bidder, I think a lot of them know better than what they are cajoled into, but understand they have to do what they are told and produce the desirable results. Otherwise: no funding, and dismissal.

  21. Michael Mann is the reason why a lot of us became GW skeptics in the first place. He certainly is the poster-child for bad statistical analysis.

  22. Slightly O/T

    Leif,

    A ‘new’ Cycle 23 sunspot showed up on your database today. Is it accounting for one that popped up very recently? I didn’t see it on the sun images at all yesterday or today, but perhaps the images weren’t yet updated…

  23. Nasif Nahle (14:30:37) :

    Seriously, Mann is using a proper proxy; it is his work that raises doubts. Overwash sand and silt layers are suitable proxies for determining the frequency of hurricanes striking the coastline in the past.

    But, from what I read of his description of his proxy boreholes, “most” – well over 50%! – of Mann’s proxy data was from a single site in Massachusetts. Then he extrapolated from that New England site (rocky beaches and winter storms and northeasters and all) to claim he can count all of the last 1000 years of Gulf Coast hurricanes that have hit the Atlantic coast, FL coast, AL coast, MS coast, LA coast, and TX coast and Mexican coasts.

    Maybe – if he were digging pristine swamps in LA, I’d let him estimate the number of hurricanes hitting LA. It’d be possible – with wider error bars – to extrapolate to hurricanes hitting AL, MS and TX coasts.

    But to make a “count” of ALL hurricanes on ALL coasts with a standard deviation of 1.0 ????

    By the way, I thought a “standard deviation” was a measure of the variation of the actual “data”; how can you have a “standard deviation” of a calculated value based on unknown data? That is, if he actually “counted” four storm strikes (disturbed mud layers) from 4 boreholes at 4 sites in the year 1152, then his “data” is 4. If Mann then “corrects” his count of “4” and says, well, this means there really were “12” storms that year because some didn’t go over the boreholes, then “12” can’t have a standard deviation.

    Error bars? Sure – but you can ONLY create error bars where an “error” could exist. Thus, to have “data” on all the “blue points” of Mann’s hurricane curve, he’d have to have consistent, repeated boreholes covering the entire 1000 years of history he is recreating, for all beaches he has surveyed. (Fine, that’s possible. We don’t know where he has sampled, but that’s possible.)

    But how many boreholes did he (or somebody else?) actually bore? Where were these holes? What beach geography and geology? How did he track, count, and correlate the boreholes from several (many?) different sites so that he has a valid assignment of year_id to mud-layer position in the tube? Did he assign/guess/ignore any “deviation” in the assignment of any given disturbance to any given year?
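The puzzlement above over a standard deviation of only 1.0 ties back to Pielke’s mislabeling hypothesis: smoothing a count series shrinks its spread dramatically. A quick illustrative sketch, with made-up Poisson counts and an assumed 40-year window (both are assumptions for demonstration, not anything from Mann’s files):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up annual storm counts: Poisson with the observed mean of ~11.7,
# so the raw series has sd ~ sqrt(11.7) ≈ 3.4
counts = rng.poisson(11.7, size=1000)

# A 40-year moving average (window length is an illustrative assumption)
window = np.ones(40) / 40
smoothed = np.convolve(counts, window, mode="valid")

# Averaging n independent counts divides the variance by n, so the
# smoothed series' sd drops toward sqrt(11.7/40) ≈ 0.54, the same order
# as the suspiciously tight spread in the "predicted" column.
print(f"raw sd:      {counts.std():.2f}")
print(f"smoothed sd: {smoothed.std():.2f}")
```

If smoothed values were mistakenly placed in the annual-counts column, this collapse in variance is exactly what the file would show.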

  24. Let’s try this again… I don’t know why this information was snipped.

    [simple – your argument is with Pielke Jr.’s blog post. Take it there. ]

  25. Paul K

    Take your arguments to Pielke Jr. and post them there. If he retracts based on what you want to discuss, then I’ll post it here too.

    For the third time. Take it up with the source. If your argument is so strong, I’m sure it will be a slam dunk to get Pielke Jr. to see it.

    Come back only when you’ve got his attention. I’ll send him a note to expect you. – Anthony

  26. RACookPE1978 (19:20:42) :

    Nasif Nahle (14:30:37) :

    Seriously, Mann is using a proper proxy; it is his work that raises doubts. Overwash sand and silt layers are suitable proxies for determining the frequency of hurricanes striking the coastline in the past.

    But, from what I read of his description of his proxy boreholes, “most” – well over 50%! – of Mann’s proxy data was from a single site in Massachusetts. Then he extrapolated from that New England site (rocky beaches and winter storms and northeasters and all) to claim he can count all of the last 1000 years of Gulf Coast hurricanes that have hit the Atlantic coast, FL coast, AL coast, MS coast, LA coast, and TX coast and Mexican coasts.

    Rocky terrain prevents us from boreholing. Usually, the oldest layer is immediately above the rocky bed, and the latest sand layer is about one hundred meters shoreward. Usually, we find the 1,300-year-old layer at a core depth of 60 cm. The layer from 30 cm to the surface represents an age of 820 to 970 years. So, your observation for New England is correct, and Mann couldn’t get homogeneous results for all the other places you have mentioned.

    Maybe – if he were digging pristine swamps in LA, I’d let him estimate the number of hurricanes hitting LA. It’d be possible – with wider error bars – to extrapolate to hurricanes hitting AL, MS and TX coasts.

    But to make a “count” of ALL hurricanes on ALL coasts with a standard deviation of 1.0 ????

    By the way, I thought a “standard deviation” was a measure of the variation of the actual “data”; how can you have a “standard deviation” of a calculated value based on unknown data? That is, if he actually “counted” four storm strikes (disturbed mud layers) from 4 boreholes at 4 sites in the year 1152, then his “data” is 4. If Mann then “corrects” his count of “4” and says, well, this means there really were “12” storms that year because some didn’t go over the boreholes, then “12” can’t have a standard deviation.

    Error bars? Sure – but you can ONLY create error bars where an “error” could exist. Thus, to have “data” on all the “blue points” of Mann’s hurricane curve, he’d have to have consistent, repeated boreholes covering the entire 1000 years of history he is recreating, for all beaches he has surveyed. (Fine, that’s possible. We don’t know where he has sampled, but that’s possible.)

    But how many boreholes did he (or somebody else?) actually bore? Where were these holes? What beach geography and geology? How did he track, count, and correlate the boreholes from several (many?) different sites so that he has a valid assignment of year_id to mud-layer position in the tube? Did he assign/guess/ignore any “deviation” in the assignment of any given disturbance to any given year?

    I’d like to know that also. Overwash sand and silt, dinoflagellates, organic matter, and other materials are fine as hurricane proxies if the researcher follows the protocol in the standard way; Mann’s work is dubious because he doesn’t permit systematic verification of his results. It seems he does it only to irritate the scientific community.

  27. This stuff is really petty. As RP originally said, Mann mislabelled one of his files. In a normal world, you just point that out and the label gets fixed. But here we get a whole lot of spurious analysis, random numbers generated etc, presumably with the aim of propagating the notion that bad science has been done.

    It isn’t bad science. It’s bad file labelling.

  28. If Mann’s housekeeping is so sloppy that one of his files is merely mislabeled, what other sloppy data keeping has he done? How many people have been misled by mere sloppy mislabeling that has not been corrected?

  29. Another thing to remember with using silt layers in lagoonal muds or whatever as a proxy is that you have to assume that there is a continuous record of deposition. With trees you can be fairly sure that there will be a growth ring every year but in sediments the norm is for the vast majority of data to be missing. Layers get deposited but most of them get eroded by later sedimentological or biological processes.

  30. Nasif Nahle (14:30:37) :

    RunFromMadness (12:58:53) :

    C64 games were good, so things have changed.

    You mean Atari? :)

    ZX Spectrum surely?

    I hope, against hope, that one fine and sunny day the “hockey-stick” shape will enter mainstream teaching as a known indicator of bad technique or data sampling.

    Something along the lines of “So, class 4A, if you see this shape, there’s a pretty good chance you’ve all f*cked up. Again. Now, for your homework, write out 100 times ‘Hockey stick – silly pri.. Johnston, leave Smedley alone!’. Class dismissed”.

    Cheers

    Mark.

  31. Well Nick, apart from anything else Mann’s latest paper seems to have discovered the MWP; and the role of the El Nino in global heating. I knew the man, Mann, had some good in him!

  32. RACookPE1978 (19:20:42) : “But, from what I read of his description of his proxy boreholes, “most” – well over 50%! – of Mann’s proxy data was from a single site in Massachusetts. Then he extrapolated from that New England site (rocky beaches and winter storms and northeasters and all) to claim he can count all of the last 1000 years of Gulf Coast hurricanes that have hit the Atlantic coast, FL coast, AL coast, MS coast, LA coast, and TX coast and Mexican coasts.”

    Before you make comments about rocky beaches exposed to winter storms and northeasters you should read the reference for the MA sediment samples in Mann’s paper:

    15. Boldt, K., Lane, P., Woodruff, J. D. & Donnelly, J. P. Sediment evidence of
    hurricane-induced coastal flooding in southeastern New England over the last two
    millennia, Mar. Geol. (submitted).

    Ooops. Not available yet.

    On a more serious note, there are other articles published by one of Mann’s co-authors on sediment samples in the RI/SE Mass area.

    I have not yet personally inspected the location of the Mattapoisett, MA samples, but living in the adjacent town I can assure you that there are lots of mucky swamps in this Buzzards Bay area. Furthermore, although severe winter storms occur frequently, the shape of Buzzards Bay tends to minimize the effects of the typical Nor’easter while amplifying the storm surge of a hurricane with winds from the southwest.

    My house, a couple of miles west of Mattapoisett and right on the water, has not been damaged by even the most severe Nor’easter. But water from 1991’s Hurricane Bob was surging beneath my piling-elevated house and going across the street. More recently, on October 28, 2006, a wet storm that came up from the south damaged my boat ramp and washed away portions of my lawn. http://www.fema.gov/emergency/reports/2006/nat102806.shtm

    I’ll be looking further into the Mattapoisett sediment sampling due to personal curiosity.

  33. Jimmy Haigh (01:22:46) :

    Another thing to remember with using silt layers in lagoonal muds or whatever as a proxy is that you have to assume that there is a continuous record of deposition. With trees you can be fairly sure that there will be a growth ring every year but in sediments the norm is for the vast majority of data to be missing. Layers get deposited but most of them get eroded by later sedimentological or biological processes.

    You are correct. We use a separate methodology to assure ourselves of the continuity of the sedimentary layers. We date the layers, first by the proportions of quartz and feldspar, and second by 14C for verification. If we find a gap, we discard the sample.

  34. Mark Fawcett (03:55:46) :

    Nasif Nahle (14:30:37) :

    RunFromMadness (12:58:53) :

    C64 games were good, so things have changed.

    You mean Atari? :)

    ZX Spectrum surely?

    That’s it, yes!

    I hope, against hope, that one fine and sunny day the “hockey-stick” shape will enter mainstream teaching as a known indicator of bad technique or data sampling.

    I hope so too.

    (Please, don’t tell my mom I didn’t do my homework… Hah!)

  35. If a researcher does not include the name of the statistician and that researcher does not also have a degree in statistics, I have to at least question the analysis.

    When I did my research, I audited a graduate-level statistics class. It was not a class normally taken for the degree I earned. I performed all the requirements for the class, all done by hand so that we could understand what computers do to data. When I was ready to submit my own data for analysis, I computed a covariance matrix by hand as well as by computer, using Statview on my little Mac SE. The data was also submitted to the statistician at the medical center where I was stationed. His computer results were the same as mine. He then directed the compilation of the results so that my conclusions were limited to the results and only the results. He hated it when medical researchers made the data say more than it significantly could. Had I been able to direct who would be listed as authors of my final work, I would have listed the statistician’s name, or at least given him credit at the end of the published paper (I was not the director of the lab and had no power to direct whose names were on the paper).

    If a laboratory researcher designs to collect and interpret data, he or she needs to be humble enough to submit their work to a trained and degreed statistician.

  36. So we’re NOT talking about the Michael Mann who produced “Public Enemies” and “Miami Vice”?

    That’s a relief.

  37. Since this isn’t the hottest link on the page, maybe this is a good place to observe that that crazy red polar ice line has already crossed above the green 2005 line, and we are barely halfway into August, despite the efforts of some to have August cancelled.

    Just think; this might be a ho-hum ice year, and a ho-hum Hurricane year all rolled into one.

    Maybe we need a Congressional review of the bad news that is coming out of the National Hurricane Center and the Colorado ice box.

  38. M White (11:48:34) :

    Piltdown Mann, sorry couldn’t help myself

    Surely you meant “Meltdown Mann”?

  39. Do you think that: ‘To be? Or not to be? That is the question!” was quote of the week back in 1600?

  40. I hereby propose a new quote of the week:

    “An unsophisticated forecaster uses statistics as a drunken man uses lamp-posts – for support rather than for illumination.”
    Andrew Lang 1844-1912

Comments are closed.