Reduce your CO2 footprint by recycling past errors!

Guest Post by Willis Eschenbach

Anthony has pointed out the further inanities of that well-known vanity press, the Proceedings of the National Academy of Sciences. This time it is Michael Mann (of Hockeystick fame) and company claiming an increase in the rate of sea level rise (complete paper here, by Kemp et al., hereinafter Kemp 2011). A number of commenters have pointed out significant shortcomings in the paper. AMac has noted at ClimateAudit that Mann’s oft-repeated mistake of the upside-down Tiljander series lives on in Kemp 2011, thus presumably saving the CO2 required to generate new and unique errors. Steve McIntyre has pointed out that, as is all too common with the mainstream AGW folks and particularly true of anything touched by Michael Mann, the information provided is far, far, far from enough to reproduce their results. Judith Curry is also hosting a discussion of the issues.

I was interested in a couple of problems that haven’t been touched on by other researchers. The first is that you can put together your whiz-bang model that uses a transfer function to relate the “foraminiferal assemblages” to “paleomarsh elevation” (PME), and then subtract the PME from measured sample altitudes to estimate sea levels, as they say they have done. But how do you then verify whether your magic math is any good? The paper claims that

Agreement of geological records with trends in regional and global tide-gauge data (Figs. 2B and 3) validates the salt-marsh proxy approach and justifies its application to older sediments. Despite differences in accumulation history and being more than 100 km apart, Sand Point and Tump Point recorded near identical RSL variations.

Hmmm, sez I … so I digitized the recent data in their Figure 2B. This was hard to do, because the authors have hidden part of the data in their graph through their use of solid blocks to indicate errors, rather than whiskers as are commonly used. This makes it hard to see what they actually found. However, their results can be determined by careful measurement and digitization. Figure 1 shows those results, along with observations from the two nearest long-term tidal gauges and the TOPEX satellite record for the area.

Figure 1. The sea-level results from Kemp 2011, along with the nearest long-term tide gauge records (Wilmington and Hampton Roads) and the TOPEX satellite sea level records for that area. Blue and orange transparent bands indicate the uncertainties in the Kemp 2011 results. Their uncertainties are shown for both the sea level and the year. SOURCES: Wilmington, Hampton Roads, TOPEX

My conclusions from this are a bit different from theirs.

The first conclusion is that, as is not uncommon with sea level records, nearby tide gauges give very different changes in sea level. In this case, the Wilmington rise is 2.0 mm per year, while the Hampton Roads rise is more than twice that, at 4.5 mm per year. In addition, the much shorter satellite record shows an average rise of only half a mm per year over the last twenty years.
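For reference, trend figures like these are simply least-squares slopes of mean sea level against time. Here is a minimal sketch with invented numbers, not the actual Wilmington or Hampton Roads records:

```python
import numpy as np

# Sketch only: fit a straight line to mean sea level (mm) vs. time (years).
# The synthetic series below is invented, not a real tide-gauge record.
def trend_mm_per_year(years, msl_mm):
    """Least-squares slope, i.e. the sea level trend in mm per year."""
    slope, _intercept = np.polyfit(years, msl_mm, 1)
    return slope

years = np.arange(1950.0, 2011.0)
msl = 7000.0 + 2.0 * (years - 1950.0)            # a made-up 2.0 mm/yr rise
print(round(trend_mm_per_year(years, msl), 1))   # 2.0
```

Real tide-gauge series have gaps, datum shifts, and strong seasonal cycles, so published trends are computed on quality-controlled monthly means rather than raw readings.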

As a result, the claim that the “agreement” of the two Kemp 2011 reconstructions is “validated” by the tidal records is meaningless, because we don’t have observations accurate or consistent enough to validate anything. With the nearby gauges disagreeing this badly, virtually any reconstruction could be claimed to be “validated” by them. In addition, since the Tump Point sea level rise is nearly 50% larger than the Sand Point rise, how can the two be described as “near identical”?

As I mentioned above, there is a second issue with the paper that has received little attention. This is the nature of the area where the study was done. It is all flatland river delta, where the rivers have created low-lying sedimentary islands and constantly changing barrier islands, with swirling currents and variable conditions. Figure 2 shows what the turf looks like from the seaward side:

Figure 2. Location of the study areas (Tump Point and Sand Point, purple) for the Kemp 2011 sea level study. The locations of the nearest long-term tidal gauges (Wilmington and Hampton Roads) are shown by yellow pushpins.

Why is this important? It is critical because these kinds of river mouth areas are never stable. Islands change, rivers cut new channels, currents shift their locations, sand bars are created and eaten away. Figure 3 shows the currents near Tump Point:

Figure 3. Eddying currents around Tump Point. Note how they are currently eroding the island, leading to channels eaten back into the land.

Now, given the obviously sedimentary nature of the Tump Point area, and the changing, swirling nature of the currents … what are the odds that the ocean conditions (average temperature, salinity, sedimentation rate, turbidity, etc.) are the same now at Tump Point as they were a thousand years ago?

And since the temperature and salinity and turbidity and mineral content a thousand years ago may very well have been significantly different from their current values, wouldn’t the “foraminiferal assemblages” have also been different then, regardless of any changes in sea level?

For the foraminifera proxy to be valid over time, we have to be able to say that the only change that might affect the “foraminiferal assemblages” is the sea level … and given the geology of the study area, we can almost guarantee that is not true.

So those are my issues with the paper, that there are no accurate observations to compare with their reconstruction, and that important local marine variables undoubtedly have changed in the last thousand years. Of course, those are in addition to the problems discussed by others, involving the irreproducibility due to the lack of data and code … and the use of the Tiljander upside-down datasets … and the claim that we can tell the global sea level rise from a reconstruction in one solitary location … and the shabby pal-review by PNAS … and the use of the Mann 2008 temperature reconstruction … and …

In short, I fear all we have is another pathetic attempt by Michael Mann, Stefan Rahmstorf, and others to shore up their pathetic claims, even to the point of repeating their exact same previous pathetic mistakes … and folks wonder why we don’t trust mainstream AGW scientists?

Because they keep trying, over and over, to pass off this kind of high-school-level investigation as though it were real science.

My advice to the authors? Same advice my high school science teacher drilled into our heads: show your work. PUBLISH YOUR CODE AND DATA, FOOLS! Have you been asleep for the last couple of years? These days nobody will believe you unless your work is replicable, and you just look stupid for trying this same ‘I won’t mention the code and data, maybe nobody will notice’ trick again and again. You can do all the hand-waving you want about your “extended semiempirical modeling approach”, but until you publish the data and the code for that approach and for the other parts of your method, along with the observational data used to validate your approach, your credibility will be zero and folks will just point and laugh.

w.


savethesharks
June 23, 2011 7:28 pm

Excellent work as always, Willis.
There are multiple factors contributing to the abnormally high rate of the “Hampton Roads” measurement. Those 1.7 million people include many who are important to national and international security (the guys who killed Osama were from here).
But…as always…an excellent exposition on the subject.
As you know and understand well (long before Mann and his henchmen)…sea level is a very, very complicated thing.
Land level related:
Isostatic rebound from the last glaciation (this is a BIG issue)
35 million year old meteor impact crater (the mouth of the Chesapeake Bay)
Land use and development issues
Aquifer Depletion (this could be a big issue, as well)
Soft coastal plain silts and river deltas
Sea Level related:
Global redistribution of the ocean water masses over many decades
Fluctuations in the Gulf Stream (salinity, speed, and temperature)
Long term atmospheric fluctuations such as the NAO and oceanic such as the AMO
Eustatic sea level change (negligible, if not zero)
Thanks again Willis.
Your contributions to the scientific pool continue to make waves. Keep it up!
Your friend,
Chris
Norfolk (Hampton Roads), VA, USA

June 23, 2011 7:30 pm

Dave Springer,
I provided a link that supports my statement that “the theory of a thin mantle is at least questionable,” and your comeback is that I’m not even at the 7th grade science level?? That’s the kind of gratuitous insult people make when they can’t refute the substance. The fact is that the discovery of hundreds of thousands to several million new volcanoes is changing the basic theory. When the facts change in a major way it’s best to reassess the 7th grade science.
All I originally said was that “the theory of a thin mantle is at least questionable.” It is being questioned, as the link I posted shows. And the ‘ring of fire’ hypothesis is being questioned as well; there are fewer volcanoes than expected near Iceland and Hawaii. If you don’t like it, argue with the vulcanologists who said it. I’m merely relaying the information.
KR: Pf-f-f-f-t. Willis is running rings around you. If you think you’re so smart, man up and submit an article for WUWT peer review.

June 23, 2011 8:03 pm

I left the following comment on RC.
“Only those who have no idea about what controls sea level would try to create a global sea level curve from basically one locality. Absolutely ludicrous.”
It sat in moderation for a while: http://i919.photobucket.com/albums/ad34/Jimmy1960/RCcomment.jpg
It never made it through.

don penman
June 23, 2011 11:02 pm

I would rather believe the satellite data than any models of past sea levels produced by these people. I would also take the mass of proxy data from the last thousand years over the temperature model these people produced for the same period.

Jay
June 24, 2011 12:38 am

The whole NC sea level rise and the Al Gore polemic of recent days make me think the hockey team is getting desperate.
Desperate to saturate the media, try and fool the masses, and ram something through before old Mr. Sun gives us a nice cooling, sea level declining trend, showing their folly.
-Jay

June 24, 2011 12:45 am

Willis – don’t you have anything better to do than reply to KR’s absurdities? By his principles, every sorcerer, every homeopath and every faith-healer should send a “paper” to the PNAS for publication, since they can claim anything with words like “transfer function” and then blame you for being unable to replicate it, as it only works with the right type of very rare water, or a hare’s leg collected during the second full moon of the Millennium (what? you’ll have to wait 2990 years for that? too bad – it means sorcery is a science for another 2990 years!).
My point is that these threads always get hijacked by people saying the most stupid and antiscientific things, and then everybody feels compelled to show them how wrong they are, and then we get even more absurd remarks, and so on and so forth. What for? The only consequence is drowning the good comments and challenges, ruining the point of blogging. There has to be discernment about whom to reply to, and having the last word doesn’t mean winning any argument.

Marian
June 24, 2011 1:18 am

“Bill Jamison says:
June 23, 2011 at 2:49 am
I just read recently about some of the underwater artifacts found in Alexandria Egypt. Apparently some dating from ~300BC were found under 5 to 8 meters of water.”
Not to mention that, from memory, parts of Egypt including Alexandria have been hit by some quite substantial seismic activity over the centuries, thereby ‘sinking’ parts of coastal areas into the Mediterranean Sea!

richard telford
June 24, 2011 1:50 am

“You know, is their “transfer function” something logical? Is it reasonably employed?”
They use weighted averaging partial least squares – this is explicit in their earlier Geology paper. It is a good method – often the best, and a reasonable choice for their data.

June 24, 2011 2:04 am

A scientist’s authority comes from controlling the quality of the data, not from hiding how the data were analyzed.
By being transparent about how the data were analyzed, a scientist lets readers replicate the study in a different location.
Replication is how we measure the credibility of the data (and of the scientist).
Let’s support transparent and verifiable research.

Dave Springer
June 24, 2011 2:30 am

@Willis
I take work that hasn’t been replicated with a grain of salt, of course. This particular line of research needs to be replicated at more salt marshes, preferably some where there weren’t huge agricultural and industrial booms in the region. If the gratuitous global warming linkage is left out, it’s good stuff and adds to our practical knowledge of the world. I don’t say that about a lot of this kind of research, because it often has little practical benefit. Say they’d been counting foraminifera in the Burgess Shale instead. That was 500 million years ago, in the middle Cambrian, and wouldn’t tell us anything very useful about the world we live in today or how our activities affect it.
Finding out more about modern aquifers is uber-important, and this appears to be a great way to reconstruct land subsidence (or rise) due to underlying aquifer level, and what natural and unnatural factors affect those aquifers. The authors didn’t intend that, of course, but that’s how science works. Data like this often tells you things you didn’t expect to learn. These guys, I’m sure, saw themselves as knights on a quest to prove something (which is not how science generally works, unless it’s something along the lines of young-earth creation science) and inadvertently stumbled upon something else. Think about how Teflon was discovered, for instance. If they’d left out the AGW conclusions, the data could be left to speak for itself about what it means.
Among the valid concerns about seriously depleted natural resources, fresh water is at the top of the list. I’m more worried about running out of fresh water than ancient oil, and this is helpful in understanding things that affect our fresh water supply. Nobody really knows how, when, and where forests affect aquifers, or what deforestation does to them. This appears to be a smoking gun for at least one case where deforestation caused aquifer stress.
The link to anthropogenic CO2 causing an abrupt quadrupling of steric sea level rise circa 1880 is absurd on the face of it, and ignoring isostatic sea level rise due to ground water depletion amid a huge anthropogenic land use change is a monumental boner that no objective earth scientist should have let slip. I don’t know whether to blame it on incompetence or on being on an AGW crusade, but there’s nothing else to explain it.

Ryan
June 24, 2011 7:37 am

“we’re looking for tiny difference in long-term-average sea levels.”
No Willis, that is an assumption. The political push for dramatic change in human behaviour in the Western world is based on the belief that sea level is rising dramatically. That would mean all the ice melting all over the world was pushing sea levels up by meters, not a few centimeters, causing loss of land on a grand scale and consigning great cities to the waves. But that level of sea level change can most easily be detected by looking at the land lost to the sea – and looking at gently shelving beaches protected from the waves would be a perfectly reasonable place to start.
This current study actually is not a threat to the skeptics – it shows a linear rate of rise equivalent to one foot per century – hardly anything to get vexed about. Why the rush to change our behaviour now, when it seems we can play “wait and see” and do some proper observations of climate over the next 100 years to see what is really happening?

KR
June 24, 2011 7:47 am

Willis
The ‘transfer function’ is a simple look-up table of species ratios to depth, a calibration which you apply to determine how deep a particular sample resided when the foraminifera were alive. You might profitably look at Kemp et al’s references, where this well-established technique is described. I agree with Dave Springer: that part of the paper is very strong and well based.
So – I’ll ask again, because you have not actually responded to my previous queries on this:
– Have you asked Kemp et al for any of the data? If you have, say so; if not, enough with the complaining!
– Which parts of the mathematical treatment of the data as described in the paper and supplemental information do you find opaque?
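As a sketch of the look-up-and-subtract arithmetic described above (illustrative numbers only, not the Kemp et al. calibration data): interpolate a modern table of species ratio versus elevation to get a paleomarsh elevation (PME), then subtract the PME from the measured sample altitude.

```python
import numpy as np

# Hypothetical modern calibration (invented values): fraction of a
# high-marsh indicator species observed at known elevations (m relative
# to mean sea level).
calib_ratio     = np.array([0.1, 0.3, 0.6, 0.9])
calib_elevation = np.array([0.2, 0.4, 0.6, 0.8])

def pme_from_ratio(ratio):
    """Look up paleomarsh elevation for an observed species ratio."""
    return float(np.interp(ratio, calib_ratio, calib_elevation))

def sea_level_estimate(sample_altitude_m, ratio):
    """Past sea level = measured sample altitude minus inferred PME."""
    return sample_altitude_m - pme_from_ratio(ratio)

# e.g. a sample at 1.25 m altitude whose assemblage implies a ratio of 0.45:
print(round(sea_level_estimate(1.25, 0.45), 2))   # 1.25 - 0.50 = 0.75
```

The actual calibration in the paper’s references is a statistical model fitted to many species at once, not a one-species interpolation, but the final subtraction step is the same.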

June 24, 2011 9:43 am

omnologos says:
“Willis – don’t you have anything better to do than reply to KR’s absurdities? By his principles, every sorcerer, every homeopath and every faith-healer should send a “paper” to the PNAS for publication, since they can claim anything with words like “transfer function” and then blame you for being unable to replicate it, as it only works with the right type of very rare water, or a hare’s leg collected during the second full moon of the Millennium (what? you’ll have to wait 2990 years for that? too bad – it means sorcery is a science for another 2990 years!).
“My point is that these threads always get hijacked by people saying the most stupid and antiscientific things, and then everybody feels compelled to show them how wrong they are, and then we get even more absurd remarks, and so on and so forth. What for? The only consequence is drowning the good comments and challenges, ruining the point of blogging. There has to be discernment about whom to reply to, and having the last word doesn’t mean winning any argument.”
Repeated for effect. KR is being a crank.

KR
June 24, 2011 11:35 am

Willis
So no, you have not asked Kemp et al for the calibration data.
I find the focus on Mann very interesting in this thread. Michael Mann is author 4 of 6 – contributing, yes, but not the lead author. The proper reference to this paper is Kemp et al 2011 (http://www.pnas.org/content/early/2011/06/13/1015619108.full.pdf+html?with-ds=yes), not Mann 2011. But I suppose Mann makes a better target for skepticism. That’s an ad hominem focus, of course, but an easier target.
In my personal opinion (and yes, others may differ) I find that the data is sufficient – in particular, table S1, page 11 of the published supplemental data containing age, depth, and uncertainties for each of the core samples, plus the descriptions of the mathematical treatment of the data including Bayesian priors in table S3 – these add up to describing what they did, why, and how. You could certainly, for example, run a statistical analysis of those time/depth points and see if their reconstruction is statistically significant.
No, they did not include their raw data from the 193 calibration samples that had been studied for species ratio versus depth. But that’s a technique that’s been used for years, and the method should be common knowledge to those in the field. They also didn’t include multi-semester courses in radioisotope dating, foraminifera identification, tidal gauges, or GIA adjustments. Not to mention a private tutor and masseuse to help you through the material… /sarcasm
The point of putting sufficient information out with a paper is to permit others familiar with the field to judge it, to replicate it, and to (if they agree with it) extend the work. And I’m going to have to disagree with you, Willis – I believe Kemp et al did a reasonable job with this paper.
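The statistical check suggested above is straightforward in outline. A sketch on synthetic points (not the actual table S1 values), testing whether a fitted slope is distinguishable from zero:

```python
import numpy as np

# Hedged sketch: fit a line to (time, level) points and compute the
# t-statistic of the slope to judge whether a trend stands out from noise.
def slope_t_stat(x, y):
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s2 = resid @ resid / (n - 2)                      # residual variance
    se = np.sqrt(s2 / ((x - x.mean()) ** 2).sum())    # std. error of slope
    return slope / se

rng = np.random.default_rng(0)
x = np.arange(50, dtype=float)
y = 0.5 * x + rng.normal(0.0, 1.0, 50)    # synthetic data with a clear trend
print(slope_t_stat(x, y) > 2)             # True: the trend is significant
```

A real analysis of the core data would also have to account for the dating uncertainties and autocorrelation, which inflate the apparent significance of a naive fit like this one.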

richard telford
June 24, 2011 1:24 pm

The community that reconstructs sea level from foraminifera or diatoms always seems to use weighted averaging partial least squares or a closely related method, weighted averaging (see for example http://repository.upenn.edu/ees_papers/50). I have not seen a single paper attempting to use neural networks or random forests or some other exotic method to reconstruct sea levels, which is fortunate, as those methods are not very robust, unlike the weighted averaging methods.
Since the community has decided that these are the best methods, it would be very surprising if Kemp et al. 2011 used a different method from their earlier work. Even if you had the data, how would you test whether it is a reasonable method for the data? Either you’ve got to spend some time learning the theory and methods used in palaeoecology, or you have to trust those who have.
And how do you propose that they could have used the method incorrectly? If they have used the usual software, it is difficult to do anything wrong. There are plenty of cases of people reconstructing inappropriate environmental variables, but I don’t know of any where they have done the reconstruction incorrectly.
You might also want to check their taxonomy. And look for the hole in the marsh they claim to have cored. Perhaps start by looking through their travel claims for fieldwork. I’m sure you’ll find a mistake somewhere and blow it into some imaginary scandal.
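For readers unfamiliar with the method family discussed above, here is a toy sketch of plain weighted averaging, the simpler relative of WA-PLS (invented abundances, not real foraminifera counts):

```python
import numpy as np

# Toy weighted averaging (WA) reconstruction, two steps:
# 1. Each species' elevation "optimum" is the abundance-weighted mean of
#    the elevations where it occurs in the modern calibration set.
# 2. A fossil sample's elevation is the abundance-weighted mean of the
#    optima of the species it contains.

# rows = modern calibration samples, columns = species abundances
modern_abund = np.array([[0.8, 0.2, 0.0],
                         [0.3, 0.5, 0.2],
                         [0.0, 0.2, 0.8]])
modern_elev = np.array([0.2, 0.5, 0.8])   # m relative to mean sea level

optima = modern_abund.T @ modern_elev / modern_abund.sum(axis=0)

def wa_reconstruct(fossil_abund):
    """Elevation estimate for one fossil sample's species abundances."""
    return float(fossil_abund @ optima / fossil_abund.sum())

print(round(wa_reconstruct(np.array([0.1, 0.4, 0.5])), 3))   # 0.598
```

WA-PLS extends this by adding further components that correct the biases of plain WA; the papers Telford cites describe that machinery.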

KR
June 24, 2011 1:56 pm

Willis
It has been pointed out to me that the “transfer function”, i.e., the calibration data for foraminifera to depth, is indeed included in one of the Kemp et al references, http://www.sciencedirect.com/science/article/pii/S0377839809000693 – Kemp et al 2009. This is reference number 7 in Kemp et al 2011.
Claims that these data were unavailable are therefore, well, wrong.

richard telford
June 24, 2011 3:47 pm

“How does the transfer function two sigma error of eight inches get converted to a reported error of an inch and a half?”
Because there are different sources of error, some common to all samples, and some sample specific. See Birks et al. 2010. Strengths and Weaknesses of Quantitative Climate Reconstructions Based on Late-Quaternary Biological Proxies. The Open Ecology Journal, 3, 68-110.
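A sketch of the arithmetic behind that answer, with invented numbers: error components that are independent between samples shrink when neighbouring samples are averaged, while a component common to all samples does not.

```python
import math

# Hedged illustration (my numbers, not the Kemp et al. or Birks et al.
# error budget): averaging n samples divides an independent per-sample
# error by sqrt(n); a common error passes through unchanged. The two
# components combine in quadrature.
def combined_sigma(sigma_sample, sigma_common, n):
    """Combined standard error after averaging n samples."""
    return math.sqrt((sigma_sample / math.sqrt(n)) ** 2 + sigma_common ** 2)

# e.g. a 4-inch (1-sigma) per-sample error over 9 samples, plus a
# 1-inch common error:
print(round(combined_sigma(4.0, 1.0, 9), 2))   # 1.67
```

Whether the per-sample transfer-function errors really are independent between adjacent core samples is exactly the kind of assumption that would need to be justified.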