Guest Post by Willis Eschenbach
Anthony has pointed out the further inanities of that well-known vanity press, the Proceedings of the National Academy of Sciences. This time it is Michael Mann (of Hockeystick fame) and company claiming an increase in the rate of sea level rise (complete paper here, by Kemp et al., hereinafter Kemp 2011). A number of commenters have pointed out significant shortcomings in the paper. AMac has noted at ClimateAudit that Mann’s oft-noted mistake of the upside-down Tiljander series lives on in Kemp 2011, thus presumably saving the CO2 required to generate new and unique errors. Steve McIntyre has pointed out that, as is all too common with the mainstream AGW folks and particularly true of anything touched by Michael Mann, the information provided is far, far, far from enough to reproduce their results. Judith Curry is also hosting a discussion of the issues.
I was interested in a couple of problems that haven’t been touched on by other researchers. The first is that you can put together your whiz-bang model that uses a transfer function to relate the “foraminiferal assemblages” to “paleomarsh elevation” (PME) and then subtract the PME from measured sample altitudes to estimate sea levels, as they say they have done. But how do you then verify whether your magic math is any good? The paper claims that
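To make the arithmetic concrete, here is a minimal sketch of the calculation as the paper describes it. The function name and the numbers are mine, purely for illustration; this is not their code:

```python
# Minimal sketch of the paper's stated arithmetic, not Kemp et al.'s code.
# A transfer function maps a foraminiferal assemblage to a paleomarsh
# elevation (PME); relative sea level is then the measured altitude of the
# dated sample minus that PME.

def estimate_sea_level(sample_altitude_m, pme_m):
    """Relative sea level = altitude of dated sample - paleomarsh elevation."""
    return sample_altitude_m - pme_m

# Hypothetical numbers: a sample cored at 0.35 m above the datum whose
# assemblage implies a PME of 0.25 m
print(estimate_sea_level(0.35, 0.25))  # -> 0.10 m relative sea level
```

The contested part, of course, is the transfer function that produces the PME in the first place, which is exactly the part they have not published.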
Agreement of geological records with trends in regional and global tide-gauge data (Figs. 2B and 3) validates the salt-marsh proxy approach and justifies its application to older sediments. Despite differences in accumulation history and being more than 100 km apart, Sand Point and Tump Point recorded near identical RSL variations.
Hmmm, sez I … so I digitized the recent data in their Figure 2B. This was hard to do, because the authors have hidden part of the data in their graph through their use of solid blocks to indicate errors, rather than whiskers as are commonly used. This makes it hard to see what they actually found. However, their results can be determined by careful measurement and digitization. Figure 1 shows those results, along with observations from the two nearest long-term tidal gauges and the TOPEX satellite record for the area.
Figure 1. The sea-level results from Kemp 2011, along with the nearest long-term tide gauge records (Wilmington and Hampton Roads) and the TOPEX satellite sea level records for that area. Blue and orange transparent bands indicate the uncertainties in the Kemp 2011 results. Their uncertainties are shown for both the sea level and the year. SOURCES: Wilmington, Hampton Roads, TOPEX
My conclusions from this are a bit different from theirs.
The first conclusion is that as is not uncommon with sea level records, nearby tide gauges give very different changes in sea level. In this case, the Wilmington rise is 2.0 mm per year, while the Hampton Roads rise is more than twice that, 4.5 mm per year. In addition, the much shorter satellite records show only half a mm per year average rise for the last twenty years.
As a result, the claim that the “agreement” of the two Kemp 2011 reconstructions is “validated” by the tidal records is meaningless, because we don’t have observations accurate enough to validate anything. We don’t have good observations to compare with their results, so virtually any reconstruction could be claimed to be “validated” by the nearby tidal gauges. In addition, since the Tump Point sea level rise is nearly 50% larger than the Sand Point rise, how can the two be described as “near identical”?
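For reference, the mm-per-year figures above are simply the slopes of least-squares lines through the gauge records. A minimal sketch of that calculation, with a synthetic record standing in for the actual gauge data:

```python
# Sketch of a tide-gauge trend calculation: ordinary least squares through
# annual mean sea level. The record below is synthetic, for illustration only.
import numpy as np

years = np.arange(1950, 2011)
rng = np.random.default_rng(0)
# Synthetic gauge: a 2.0 mm/yr trend plus interannual noise, in millimeters
msl_mm = 2.0 * (years - years[0]) + rng.normal(0.0, 20.0, years.size)

slope_mm_per_yr = np.polyfit(years, msl_mm, 1)[0]
print(f"trend: {slope_mm_per_yr:.2f} mm/yr")
```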
As I mentioned above, there is a second issue with the paper that has received little attention. This is the nature of the area where the study was done. It is all flatland river delta, with rivers that have created low-lying sedimentary islands and constantly changing barrier islands, with swirling currents and variable conditions. Figure 2 shows what the turf looks like from the seaward side:
Figure 2. Location of the study areas (Tump Point and Sand Point, purple) for the Kemp 2011 sea level study. The locations of the nearest long-term tidal gauges (Wilmington and Hampton Roads) are shown by yellow pushpins.
Why is this important? It is critical because these kinds of river mouth areas are never stable. Islands change, rivers cut new channels, currents shift their locations, sand bars are created and eaten away. Figure 3 shows the currents near Tump Point:
Figure 3. Eddying currents around Tump Point. Note how they are currently eroding the island, leading to channels eaten back into the land.
Now, given the obviously sedimentary nature of the Tump Point area, and the changing, swirling nature of the currents … what are the odds that the ocean conditions (average temperature, salinity, sedimentation rate, turbidity, etc.) are the same now at Tump Point as they were a thousand years ago?
And since the temperature and salinity and turbidity and mineral content a thousand years ago may very well have been significantly different from their current values, wouldn’t the “foraminiferal assemblages” have also been different then, regardless of any changes in sea level?
Because for the foraminifera proxy to be valid over time, we have to be able to say that the only change that might affect the “foraminiferal assemblages” is the sea level … and given the geology of the study area, we can almost guarantee that is not true.
So those are my issues with the paper, that there are no accurate observations to compare with their reconstruction, and that important local marine variables undoubtedly have changed in the last thousand years. Of course, those are in addition to the problems discussed by others, involving the irreproducibility due to the lack of data and code … and the use of the Tiljander upside-down datasets … and the claim that we can tell the global sea level rise from a reconstruction in one solitary location … and the shabby pal-review by PNAS … and the use of the Mann 2008 temperature reconstruction … and …
In short, I fear all we have is another pathetic attempt by Michael Mann, Stefan Rahmstorf, and others to shore up their pathetic claims, even to the point of repeating their exact same previous pathetic mistakes … and folks wonder why we don’t trust mainstream AGW scientists?
Because they keep trying, over and over, to pass off this kind of high-school-level investigation as though it were real science.
My advice to the authors? Same advice my high school science teacher drilled into our heads, to show our work. PUBLISH YOUR CODE AND DATA, FOOLS! Have you been asleep for the last couple years? These days nobody will believe you unless your work is replicable, and you just look stupid for trying this same ‘I won’t mention the code and data, maybe nobody will notice’ trick again and again. You can do all the hand-waving you want about your “extended semiempirical modeling approach”, but until you publish the data and the code for that approach and for the other parts of your method, along with the observational data used to validate your approach, your credibility will be zero and folks will just point and laugh.
w.
Excellent work as always, Willis.
There are multiple factors contributing to the abnormally high rate of the “Hampton Roads” measurement, starting with those 1.7 million people, many of them important to national and international security (the guys who killed Osama were from here).
But…as always…an excellent exposition on the subject.
As you knew and understood well before Mann and his henchmen…sea level is a very, very complicated thing.
Land level related:
Isostatic rebound from the last glaciation (this is a BIG issue)
35-million-year-old meteor impact crater (at the mouth of the Chesapeake Bay)
Land use and development issues
Aquifer Depletion (this could be a big issue, as well)
Soft coastal plain silts and river deltas
Sea Level related:
Global redistribution of the ocean water masses over many decades
Fluctuations in the Gulf Stream (salinity, speed, and temperature)
Long-term atmospheric fluctuations such as the NAO, and oceanic ones such as the AMO
Eustatic sea level change (negligible…if not zero altogether)
Thanks again Willis.
Your contributions to the scientific pool continue to make waves. Keep it up!
Your friend,
Chris
Norfolk (Hampton Roads), VA, USA
Dave Springer,
I provided a link that supports my statement that “the theory of a thin mantle is at least questionable,” and your comeback is that I’m not even at the 7th grade science level?? That’s the kind of gratuitous insult people make when they can’t refute the substance. The fact is that the discovery of hundreds of thousands to several million new volcanoes is changing the basic theory. When the facts change in a major way it’s best to reassess the 7th grade science.
All I originally said was that “the theory of a thin mantle is at least questionable.” It is being questioned, as the link I posted shows. And the ‘ring of fire’ hypothesis is being questioned as well; there are fewer volcanoes than expected near Iceland and Hawaii. If you don’t like it, argue with the vulcanologists who said it. I’m merely relaying the information.
KR: Pf-f-f-f-t. Willis is running rings around you. If you think you’re so smart, man up and submit an article for WUWT peer review.
I left the following comment on RC.
“Only those who have no idea about what controls sea level would try to create a global sea level curve from basically one locality. Absolutely ludicrous.”
It sat in moderation for a while: http://i919.photobucket.com/albums/ad34/Jimmy1960/RCcomment.jpg
It never made it through.
I would rather believe the satellite data than any models of past sea levels produced by these people. I would also take the mass of proxy data over the last one thousand years over the temperature model produced by these people for the last thousand years.
KR says:
June 23, 2011 at 5:57 pm
Not sure what you mean by “data included”. My point is that the reputable journals require the authors to archive their datasets at the time of publication. This allows other researchers around the world to check the work of the original investigators. It also avoids anyone having to pester the researcher for data. Even Michael Mann should have learned that lesson. He archived nothing about the Hockeystick, but (as a result of persistent pressure from Steve McIntyre and others) he archived the complete dataset for Mann 2008.
Kemp et al. (including Mann) have archived nothing.
It appears that there is some misunderstanding here. I am interested in finding out if their work passes the simple tests. You know, is their “transfer function” something logical? Is it reasonably employed? I’m also interested in how they have chosen their datasets, to see if perhaps they have made an adventitious selection among competing possibilities.
I also want to do the bozo tests. You know, is their math correct, are there radian/degree mistakes, that kind of thing.
The first step in replication is not rushing out with a shovel to get your hands dirty. The first step is to see if there is anything in the paper worth discussing, much less replicating. For any researcher to be able to do that, Kemp et al. need to archive their data and explain things like their magic “transfer function”. Without that, it’s just conjecture and anecdote.
w.
The whole NC sea level rise and the Al Gore polemic of recent days make me think the hockey team is getting desperate.
Desperate to saturate the media, try and fool the masses, and ram something through before old Mr. Sun gives us a nice cooling, sea level declining trend, showing their folly.
-Jay
Willis – don’t you have anything better to do than reply to KR’s absurdities? By following his principles, every sorcerer, every homeopath and every faith-healer should send a “paper” to the PNAS for publication, as they can claim anything with words like “transfer function” and then blame you for being unable to replicate, as it only works with the right type of very rare water or a hare’s leg collected during the second full moon of the Millennium (what? you’ll have to wait 2990 years for that? too bad – it means sorcery is a science for another 2990 years!).
My point is that these threads always get hijacked by people saying the most stupid and antiscientific things, and then everybody feels compelled to show them how wrong they are, and then we get even more absurd remarks, and so on and so forth. What for? The only consequence is drowning the good comments and challenges, ruining the point of blogging. There has to be discernment in whom and what to reply to, and having the last word doesn’t mean winning any argument.
“Bill Jamison says:
June 23, 2011 at 2:49 am
I just read recently about some of the underwater artifacts found in Alexandria Egypt. Apparently some dating from ~300BC were found under 5 to 8 meters of water.”
Not to mention that, from memory, parts of Egypt including Alexandria have been hit by some quite substantial seismic activity over the centuries, thereby ‘sinking’ parts of coastal areas into the Mediterranean Sea!
“You know, is their “transfer function” something logical? Is it reasonably employed?”
They use weighted averaging partial least squares – this is explicit in their earlier Geology paper. It is a good method – often the best, and a reasonable choice for their data.
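For readers unfamiliar with the method named above, here is a minimal sketch of plain weighted averaging, the simpler cousin of WA-PLS, with toy numbers. This illustrates the general technique only; it is not Kemp et al.’s data or code, and real implementations add a “deshrinking” regression step omitted here:

```python
# Minimal sketch of simple weighted averaging (WA), the simpler cousin of the
# WA-PLS method named above. Toy numbers, not Kemp et al.'s data or code;
# real implementations also apply a "deshrinking" regression, omitted here.
import numpy as np

def wa_optima(abundances, elevations):
    """Each species' optimum = abundance-weighted mean elevation in the
    training set. abundances: (n_samples, n_species); elevations: (n_samples,)"""
    return (abundances.T @ elevations) / abundances.sum(axis=0)

def wa_reconstruct(sample, optima):
    """Reconstructed elevation = abundance-weighted mean of species optima."""
    return (sample @ optima) / sample.sum()

# Toy training set: 4 modern surface samples, 3 taxa, known elevations (m)
train = np.array([[10., 2., 0.], [5., 8., 1.], [1., 6., 5.], [0., 2., 9.]])
elev = np.array([0.10, 0.20, 0.30, 0.40])
optima = wa_optima(train, elev)

fossil = np.array([2., 7., 3.])        # counts from a core sample
print(wa_reconstruct(fossil, optima))  # PME estimate, ~0.25 m
```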
A scientist’s authority comes from controlling the data, not from hiding how the data were analyzed.
By being transparent about how the data were analyzed, a scientist lets readers replicate the study in a different location.
Replication is how we measure the credibility of the data (and of the scientist).
Let’s support transparent, verifiable research.
@Willis
I take work that hasn’t been replicated with a grain of salt, of course. This particular line of research needs to be replicated at more salt marshes, and preferably somewhere there weren’t huge agricultural and industrial booms in the region. If the gratuitous global warming linkage is left out, it’s good stuff and adds to our practical knowledge of the world. I don’t say that about a lot of this kind of research because it often has little practical benefit. Say they’d been counting foraminifera in Burgess shale instead. That was 500 million years ago in the middle Cambrian and wouldn’t tell us anything very useful about the world we live in today or how our activities affect it.

Finding out more about modern aquifers is uber important, and this appears to be a great way to reconstruct land subsidence (or rise) due to underlying aquifer level and what natural and unnatural factors affect those aquifers. The authors didn’t intend that of course, but that’s how science works. Data like this often tells you things you didn’t expect to learn. These guys I’m sure saw themselves as knights on a quest to prove something, which is not how science generally works unless it’s something along the lines of young earth creation science, and inadvertently stumbled upon something else. Think about how Teflon was discovered, for instance. If they’d left out the AGW conclusions, the data can be left to speak for itself about what it means.

Among other valid concerns about seriously depleted natural resources, fresh water is at the top of the list. I’m more worried about running out of fresh water than ancient oil, and this is helpful in understanding things that affect our fresh water supply. Nobody really knows how, when, and where forests affect aquifers or what deforestation does to the aquifers. This appears to be a smoking gun for at least one case where deforestation caused aquifer stress. The link to anthropogenic CO2 causing an abrupt quadrupling of steric sea level rise circa 1880 is absurd on the face of it, and ignoring isostatic sea level rise due to ground water depletion in a huge anthropogenic land use change is a monumental boner that no objective earth scientist should have let slip. I don’t know whether to blame it on incompetence or being on an AGW crusade, but there’s nothing else to explain it.
“we’re looking for tiny difference in long-term-average sea levels.”
No Willis, that is an assumption. The political push for dramatic change in human behaviour in the Western world is based on the belief that sea level is rising dramatically. This would mean that all the ice melting all over the world was pushing sea levels up by meters, not a few centimeters, causing loss of land on a grand scale and consigning great cities to the waves. But this level of sea level change can most easily be detected by looking at the land lost to the sea, and looking at gently shelving beaches protected from the waves would be a perfectly reasonable place to start.
This current study actually is not a threat to the skeptics – it shows a linear rate of rise equivalent to one foot per century, hardly anything to get vexed about. Why the rush to change our behaviour now, when it seems we can play “wait and see” and do some proper observations of climate over the next 100 years to see what is really happening?
Willis
The ‘transfer function’ is a simple look-up table of species ratios to depth, a calibration which you apply to determine how deep a particular sample resided when the foraminifera were alive. You might profitably look at Kemp et al’s references, where this well-established technique is described. I agree with Dave Springer: that part of the paper is very strong and well based.
So – I’ll ask again, because you have not actually responded to my previous queries on this:
– Have you asked Kemp et al for any of the data? If you have, say so, if not, enough with the complaining!
– Which parts of the mathematical treatment of the data as described in the paper and supplemental information do you find opaque?
omnologos says:
“Willis – don’t you have anything better to do than reply to KR’s absurdities? By following his principles, every sorcerer, every homeopath and every faith-healer should send a “paper” to the PNAS for publication, as they can claim anything with words like “transfer function” and then blame you for being unable to replicate, as it only works with the right type of very rare water or a hare’s leg collected during the second full moon of the Millennium (what? you’ll have to wait 2990 years for that? too bad – it means sorcery is a science for another 2990 years!).
“My point is that these threads always get hijacked by people saying the most stupid and antiscientific things, and then everybody feels compelled to show them how wrong they are, and then we get even more absurd remarks, and so on and so forth. What for? The only consequence is drowning the good comments and challenges, ruining the point of blogging. There has to be discernment in whom and what to reply to, and having the last word doesn’t mean winning any argument.”
Repeated for effect. KR is being a crank.
richard telford says:
June 24, 2011 at 1:50 am
Richard, that is an assumption for which you have absolutely no evidence. All you’ve shown is what they did last time. And more to the point, while it might be a “good method”, until they show exactly what they did, we also have no evidence that they have used it correctly.
Nor can we say that it is a “reasonable choice for their data” until we see their data.
Why are these simple things so hard to get across? Why are people so willing to trust Mann when he has shown himself to be totally untrustworthy?
Richard, you and KR have to catch up with the times. In climate science 2011, if you don’t archive your data and show your methods, folks won’t believe you … especially if Mann is a co-author. He’s famous for screwing with the data when everyone’s back is turned, and renowned for using “good methods” incorrectly, which makes your assurance that Kemp et al. are using a “good method” totally meaningless.
What evidence do you have that Mann is not lying to us again, either by commission or omission? Can you say authoritatively that Mann has not done that here, that he is not hiding contrary data and making stupid mistakes and then fibbing about what he has done?
Didn’t think so … which is why we need the data and the code, rather than your and KR’s assurances that everything is perfectly fine. You don’t know that, and me, I’m real tired of being lied to and patted on the head and told things are wonderful. Come back when you have evidence, Richard, you know, the data and code. Your constant Pollyanna reassurance that everything is for the best in this, the best of all possible worlds for climate science, is past its use-by date.
w.
Ryan says:
June 24, 2011 at 7:37 am
Context is everything, Ryan, and that was said in the context of trying to see sea level rise in old pictures compared to new pictures. So let’s see if we’re looking for a big difference or a tiny difference.
Tides are often from three to six feet, sometimes up to 20 feet.
Waves are often from three to six feet, sometimes up to 20 feet.
Old photos might be what, 75, a hundred years old? And the sea level rose maybe eight inches in the last century.
So we’re looking in photographs for what might be a six-inch difference in long-term-average sea levels, with waves and tides that are typically ten times that large. Good luck with that, Ryan.
That is the context in which I made the statement about a “tiny difference”. I called it “tiny” because you’ll never be able to detect that in an old photo.
You, of course, are free to call it a “huge difference that is so big we can’t conceivably see it in a photo comparison” if you wish …
w.
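To make Willis’s signal-to-noise point explicit, here is the arithmetic with his round numbers, illustrative only:

```python
# Rough signal-to-noise arithmetic for spotting sea level rise in old photos,
# using the round numbers above (illustrative, not a formal analysis).
signal_in = 6.0              # inches of long-term rise over a photo's age
typical_excursion_in = 60.0  # waves/tides "typically ten times that large"

print(f"signal/noise ~ {signal_in / typical_excursion_in:.1f}")  # ~0.1, buried
```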
KR says:
June 24, 2011 at 7:47 am
Willis
So what you are saying is that you can’t show me the transfer function they used, all you can do is wave your hands and make generalities about it. In addition to not being able to say what it is, you cannot say if it was used properly or if they have made some mistake in its use. Color me unimpressed.
I gave up on asking Michael Mann for data long ago. If you are foolish enough to want to continue down that path, be my guest.
If, on the other hand, you’ve never asked Mike for data, go ahead and ask him, and report back to us. It could be an important part of your education about climate reality. Me, been there, done that, I have much more important things to do than waste my time that way. He only gives data to his friends, and for unknown reasons he doesn’t number me among them. But heck, give him a shout-out, maybe you are one of his friends that can get data from him. I can’t, and I’ve given up trying.
But in any case, which part of “the author is responsible for archiving their data” do you not understand? It is not my task to hound innocent researchers for their data. It is their task to archive it if they wish to be taken seriously, and they have failed to do that.
The opaque part of what they did is the part that they haven’t described in the paper and the supplemental information … and which part is that, you ask?
Well, we don’t know, do we, since they have revealed neither their code nor their data.
You asking me to point out the errors in what Kemp et al. haven’t revealed could serve as a perfect metaphor for climate science these days …
w.
omnologos says:
June 24, 2011 at 12:45 am
Thanks, omnologos (and Smokey) for the comments. I reply to most people that seem to me to be serious, whether or not they are absurd—one man’s absurd is another man’s reasonable.
I do this for several reasons:
1. I write as much for the lurkers as for the commenters. For every KR pointing out something, there are assuredly others who have his same question or point of view. And even if I can’t reach KR, I certainly may be reaching them.
2. I have often been ignored because my questions didn’t fit the common paradigm or seemed to come “out of left field”. I don’t like that when it happens, and so I don’t want to do the same to others.
3. I don’t want people to say that I am dodging or avoiding their concerns. That’s RealClimate and Tamino’s kind of game, censorship and avoidance, and I won’t play it.
4. It gives me a chance to re-state my points, offer new evidence or re-present old evidence in a new light, and generally gives me an opportunity to present my point of view anew. What’s not to like?
5. It is useful for people to see the AGW absurdities repeated by the various adherents.
6. It is also useful for people to come to understand the debating style of the AGW supporters and the lack of evidence for their claims. The only way to point that out is … well … to ask people for evidence for their claims.
7. Sometimes even the wise man can be corrected by a fool, and I’m not sure which side of that equation I might be sitting on at any given instant.
8. What seems clear and perfectly obvious to you and me may not seem that way at all to someone else.
So, for those reasons, I try to answer all of what I see as honest inquiries, and even some that I think might be just trolling. If I had a perfect troll-ometer this would be easier, of course, but until then I’ll err on the side of caution.
w.
Willis
So no, you have not asked Kemp et al for the calibration data.
I find the focus on Mann very interesting in this thread. Michael Mann is author 4 of 6 – contributing, yes, but not the lead author. The proper reference to this paper is Kemp et al 2011 (http://www.pnas.org/content/early/2011/06/13/1015619108.full.pdf+html?with-ds=yes), not Mann 2011. But I suppose Mann makes a better target for skepticism. That’s an ad hominem argument, of course, but an easier target.
In my personal opinion (and yes, others may differ) I find that the data is sufficient – in particular, table S1, page 11 of the published supplemental data containing age, depth, and uncertainties for each of the core samples, plus the descriptions of the mathematical treatment of the data including Bayesian priors in table S3 – these add up to describing what they did, why, and how. You could certainly, for example, run a statistical analysis of those time/depth points and see if their reconstruction is statistically significant.
No, they did not include their raw data from the 193 calibration samples that had been studied for species ratio versus depth. But that’s a technique that’s been used for years, and the method should be common knowledge to those in the field. They also didn’t include multi-semester courses in radioisotope dating, foraminifera identification, tidal gauges, or GIA adjustments. Not to mention a private tutor and masseuse to help you through the material… /sarcasm
The point of putting sufficient information out with a paper is to permit others familiar with the field to judge it, to replicate it, and to (if they agree with it) extend the work. And I’m going to have to disagree with you, Willis – I believe Kemp et al did a reasonable job with this paper.
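The “statistical analysis of those time/depth points” KR suggests is straightforward to sketch: a weighted least-squares trend through the dated samples, using the stated uncertainties as weights. The numbers below are hypothetical, since table S1 itself is not reproduced here:

```python
# Sketch of the check KR suggests: weighted least-squares trend through
# age/sea-level points with stated 1-sigma uncertainties. Hypothetical
# numbers, not the actual contents of table S1.
import numpy as np

age = np.array([1900., 1925., 1950., 1975., 2000.])   # years AD
rsl = np.array([-0.30, -0.22, -0.17, -0.09, -0.02])   # relative sea level, m
sigma = np.array([0.05, 0.05, 0.04, 0.04, 0.03])      # 1-sigma, m

# polyfit takes weights proportional to 1/sigma for Gaussian errors
coef, cov = np.polyfit(age, rsl, 1, w=1.0 / sigma, cov=True)
slope, slope_se = coef[0], np.sqrt(cov[0, 0])
print(f"trend: {slope * 1000:.2f} +/- {slope_se * 1000:.2f} mm/yr (1-sigma)")
```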
The community that reconstructs sea level from foraminifera or diatoms always seems to use weighted averaging partial least squares or a closely related method, weighted averaging (see for example http://repository.upenn.edu/ees_papers/50). I have not seen a single paper attempting to use neural networks or random forests or some other exotic method to reconstruct sea levels, which is fortunate, as these methods are not very robust, unlike the weighted averaging methods.
Since the community has decided that these are the best methods, it would be very surprising if Kemp et al. 2011 used a different method from their earlier work. Even if you had the data, how would you test whether it is a reasonable method for the data? Either you’ve got to spend some time learning the theory and methods used in palaeoecology, or you have to trust those who have.
And how do you propose that they could have used the method incorrectly? If they have used the usual software, it is difficult to do anything wrong. There are plenty of cases of people reconstructing inappropriate environmental variables, but I don’t know of any where they have done the reconstruction incorrectly.
You might also want to check their taxonomy. And to look for the hole in the marsh they claim to have cored. Perhaps start by looking through their travel claims for fieldwork. I’m sure you’ll find a mistake somewhere and blow it into some imaginary scandal.
Willis
It has been pointed out to me that the “transfer function”, i.e., the calibration data for foraminifera to depth, is indeed included in one of the Kemp et al references, http://www.sciencedirect.com/science/article/pii/S0377839809000693 – Kemp et al 2009. This is reference number 7 in Kemp et al 2011.
Claims that these data were unavailable are therefore, well, wrong.
KR says:
June 24, 2011 at 1:56 pm
Thanks, KR. It’s paywalled. Since you claim to know what the transfer function is, I’m sure you can actually tell us what it is and quote from the document … have you actually seen it, or is this just secondhand?
I ask because they say that their transfer function also calculates “unique vertical errors … for each PME estimate”. They go on to say that their errors are less than “0.1 m [four inches]”. Let’s be generous and call that their two sigma error.
Despite that, their Figure 2B (inset) shows a Tump Point two sigma error of ± 0.04m (an inch and a half). How does the transfer function two sigma error of four inches get converted to a final reported error of an inch and a half?
These are the kinds of questions that could be easily answered if they published their data and code.
w.
“How does the transfer function two sigma error of four inches get converted to a final reported error of an inch and a half?”
Because there are different sources of error, some common to all samples, and some sample specific. See Birks et al. 2010. Strengths and Weaknesses of Quantitative Climate Reconstructions Based on Late-Quaternary Biological Proxies. The Open Ecology Journal, 3, 68-110.
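A toy illustration of telford’s point, with assumed numbers rather than the paper’s actual error model: if most of the per-sample error is independent noise, it shrinks as more samples inform a reconstruction point, while any shared systematic component does not:

```python
# Toy error combination, NOT the paper's actual error model: an independent
# per-sample component shrinks by 1/sqrt(n) when n samples are combined;
# a component shared by all samples does not shrink.
import math

per_sample_2sigma = 0.10   # m, independent component (the "0.1 m" figure)
shared_2sigma = 0.02       # m, assumed common/systematic component
n = 10                     # assumed samples contributing to one point

combined = math.sqrt((per_sample_2sigma / math.sqrt(n)) ** 2
                     + shared_2sigma ** 2)
print(f"combined 2-sigma: {combined:.3f} m")  # ~0.037 m, the order of ±0.04 m
```

Whether that is what Kemp et al. actually did is exactly the question; without their code, the arithmetic above is only a guess at the mechanism.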
richard telford says:
June 24, 2011 at 1:24 pm
After the Hockeystick debacle, where Mann improperly used principal components analysis, and after Steig’s improper use of principal components (leading to spurious Chladni patterns), for you to ask that question means that you are not following the climate story.
And not just not following the story. Actively avoiding the story.
That, I can’t help you with. You’ll have to apply elsewhere. I can’t deal with that kind of ruthless optimism. People involved in this and other studies have done that exact thing before, used standard methods incorrectly. How do I propose they did such a foolish thing? By not understanding what they were doing, and by eschewing the assistance of actual statisticians …
w.
KR says:
June 24, 2011 at 11:35 am
Not my responsibility. You need to catch up, this is 2011, that kind of nonsense won’t fly any more. People both in and out of climate science have been fooled too many times.
If he wants his results taken seriously, it is his responsibility to archive the raw data. That’s why Science and Nature have policies that require archiving: to avoid just these kinds of difficulties.
Which in turn may be why they chose to publish in a vanity press rather than a real journal that would require that they put their data where their mouth is …
w.