Guest Post by Willis Eschenbach
Anthony has pointed out the further inanities of that well-known vanity press, the Proceedings of the National Academy of Sciences. This time it is Michael Mann (of Hockeystick fame) and company claiming an increase in the rate of sea level rise (complete paper here, by Kemp et al., hereinafter Kemp 2011). A number of commenters have pointed out significant shortcomings in the paper. AMac has noted at ClimateAudit that Mann’s oft-noted mistake of the upside-down Tiljander series lives on in Kemp 2011, thus presumably saving the CO2 required to generate new and unique errors. Steve McIntyre has pointed out that, as is all too common with the mainstream AGW folks and particularly true of anything touched by Michael Mann, the information provided is far, far, far from enough to reproduce their results. Judith Curry is also hosting a discussion of the issues.
I was interested in a couple of problems that haven’t been touched on by other researchers. The first is that you can put together your whiz-bang model that uses a transfer function to relate the “foraminiferal assemblages” to “paleomarsh elevation” (PME) and then subtract the PME from measured sample altitudes to estimate sea levels, as they say they have done. But how do you then verify whether your magic math is any good? The paper claims that
Agreement of geological records with trends in regional and global tide-gauge data (Figs. 2B and 3) validates the salt-marsh proxy approach and justifies its application to older sediments. Despite differences in accumulation history and being more than 100 km apart, Sand Point and Tump Point recorded near identical RSL variations.
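For readers unfamiliar with the method, the arithmetic at its core is simple: the proxy gives you an inferred marsh-surface height, and subtracting that from the measured sample altitude yields a sea level estimate. A minimal sketch of that subtraction step follows; the numbers are hypothetical illustrations, not values from Kemp 2011.

```python
# Hedged sketch of the PME-subtraction step described above.
# All numbers are hypothetical and for illustration only; they are
# NOT taken from Kemp 2011 or its supplementary data.

def estimate_sea_level(sample_altitude_m, paleomarsh_elevation_m):
    """Relative sea level = measured sample altitude minus the
    paleomarsh elevation (PME) inferred from the foraminiferal
    assemblage via the transfer function."""
    return sample_altitude_m - paleomarsh_elevation_m

# A core sample measured at 0.85 m altitude whose assemblage implies
# the marsh surface sat 0.60 m above mean sea level when it formed:
rsl = estimate_sea_level(0.85, 0.60)
print(round(rsl, 2))  # 0.25, i.e. 0.25 m relative sea level
```

The point of spelling this out is that every uncertainty in the transfer function flows straight through the subtraction into the sea level estimate, which is why independent validation against observations matters so much.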
Hmmm, sez I … so I digitized the recent data in their Figure 2B. This was hard to do, because the authors have hidden part of the data in their graph through their use of solid blocks to indicate errors, rather than whiskers as are commonly used. This makes it hard to see what they actually found. However, their results can be determined by careful measurement and digitization. Figure 1 shows those results, along with observations from the two nearest long-term tidal gauges and the TOPEX satellite record for the area.
Figure 1. The sea-level results from Kemp 2011, along with the nearest long-term tide gauge records (Wilmington and Hampton Roads) and the TOPEX satellite sea level records for that area. Blue and orange transparent bands indicate the uncertainties in the Kemp 2011 results. Their uncertainties are shown for both the sea level and the year. SOURCES: Wilmington, Hampton Roads, TOPEX
My conclusions from this are a bit different from theirs.
The first conclusion is that, as is not uncommon with sea level records, nearby tide gauges give very different rates of sea level rise. In this case, the Wilmington rise is 2.0 mm per year, while the Hampton Roads rise is more than twice that, 4.5 mm per year. In addition, the much shorter satellite record shows an average rise of only half a millimetre per year over the last twenty years.
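For reference, the mm-per-year figures quoted here are ordinary linear trends fitted to the gauge records. A sketch of how such a trend is computed from annual mean readings is below; the data are synthetic, generated to rise at exactly 2 mm/yr, and are not the actual Wilmington or Hampton Roads records.

```python
# Sketch: fitting a linear sea-level trend (mm/yr) to annual mean
# tide-gauge readings by ordinary least squares. The series below is
# synthetic (exactly 2 mm/yr, no noise) -- NOT real gauge data.

def trend_mm_per_year(years, levels_mm):
    """Ordinary least-squares slope of sea level (mm) against year."""
    n = len(years)
    mean_t = sum(years) / n
    mean_y = sum(levels_mm) / n
    num = sum((t - mean_t) * (y - mean_y)
              for t, y in zip(years, levels_mm))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den

years = list(range(1950, 2011))
levels = [2.0 * (t - 1950) for t in years]  # rises 2 mm every year
print(trend_mm_per_year(years, levels))  # 2.0
```

Run the same fit on two gauges a hundred-odd kilometres apart and, as the Wilmington and Hampton Roads numbers show, you can easily get slopes differing by a factor of two, which is the nub of the validation problem discussed next.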
As a result, the claim that the “agreement” of the two Kemp 2011 reconstructions is “validated” by the tidal records is meaningless, because we don’t have observations accurate and consistent enough to validate anything. With nearby gauges disagreeing by a factor of two, virtually any reconstruction could be claimed to be “validated” by them. In addition, since the Tump Point sea level rise is nearly 50% larger than the Sand Point rise, how can the two be described as “near identical”?
As I mentioned above, there is a second issue with the paper that has received little attention: the nature of the area where the study was done. It is all flatland river delta, with rivers that have created low-lying sedimentary islands, constantly shifting barrier islands, swirling currents, and variable conditions. Figure 2 shows what the turf looks like from the seaward side:
Figure 2. Location of the study areas (Tump Point and Sand Point, purple) for the Kemp 2011 sea level study. Locations of the nearest long-term tide gauges (Wilmington and Hampton Roads) are shown by yellow pushpins.
Why is this important? Because river-mouth areas like these are never stable. Islands change, rivers cut new channels, currents shift their locations, sand bars are created and eaten away. Figure 3 shows the currents near Tump Point:
Now, given the obviously sedimentary nature of the Tump Point area, and the changing, swirling nature of the currents … what are the odds that the ocean conditions (average temperature, salinity, sedimentation rate, turbidity, etc.) are the same now at Tump Point as they were a thousand years ago?
And since the temperature and salinity and turbidity and mineral content a thousand years ago may very well have been significantly different from their current values, wouldn’t the “foraminiferal assemblages” have also been different then, regardless of any changes in sea level?
For the foraminifera proxy to be valid over time, we have to be able to say that the only change that might affect the “foraminiferal assemblages” is the sea level … and given the geology of the study area, we can almost guarantee that is not true.
So those are my issues with the paper: there are no accurate observations to compare with their reconstruction, and important local marine variables have undoubtedly changed over the last thousand years. Of course, those are in addition to the problems discussed by others, involving the irreproducibility due to the lack of data and code … and the use of the upside-down Tiljander datasets … and the claim that we can tell the global sea level rise from a reconstruction at one solitary location … and the shabby pal-review by PNAS … and the use of the Mann 2008 temperature reconstruction … and …
In short, I fear all we have is another pathetic attempt by Michael Mann, Stefan Rahmstorf, and others to shore up their shaky claims, even to the point of repeating their exact same previous mistakes … and folks wonder why we don’t trust mainstream AGW scientists?
Because they keep trying, over and over, to pass off this kind of high-school-level investigation as though it were real science.
My advice to the authors? The same advice my high school science teacher drilled into our heads: show your work. PUBLISH YOUR CODE AND DATA, FOOLS! Have you been asleep for the last couple of years? These days nobody will believe you unless your work is replicable, and you just look foolish for trying this same ‘I won’t mention the code and data, maybe nobody will notice’ trick again and again. You can do all the hand-waving you want about your “extended semiempirical modeling approach”, but until you publish the data and the code for that approach and for the other parts of your method, along with the observational data used to validate your approach, your credibility will be zero and folks will just point and laugh.