Guest Post by Willis Eschenbach
In my previous post I discussed some of the issues with the paper “Climate related sea-level variations over the past two millennia” by Kemp et al. (2011), whose co-authors include Michael Mann. However, some commenters rightly said that I was not specific enough about what Kemp et al. have done wrong, so here is what further investigation has revealed. Since there is no archive of their reconstruction results, I digitized their estimate of reconstructed global sea level rise as shown in their Figure S2 (A). First, here is their figure, showing their reconstruction of sea level.
Figure 1. Kemp Figure S2 (A) SOURCE
I digitized the part of their graph from 1650 onwards, to compare it to recent observation. Figure 2 shows those results:
Figure 2. Kemp 2011 reconstructed global sea level change, 1650-2000
So what’s not to like in these latest results from Kemp and Michael Mann?
The first thing that seems strange is that they claim a global sea level rise of 200 mm (8 inches) in the last fifty years (1950-1999). I know of no one else making that claim. Church and White estimate the 1950-2000 rise at 84 mm (three and a quarter inches), and Jevrejeva says 95 mm (three and three-quarters inches), so the Kemp reconstruction is more than double the accepted estimates …
The next problem becomes apparent when we look at the rate of sea level rise. Figure 3 shows the results from the Kemp 2011 study, along with the MSL rise estimates of Jevrejeva and Church & White from worldwide tidal gauges.
Figure 3. Kemp 2011 reconstructed rate of global sea level rise, 1650-2000, along with observations from Jevrejeva (red circles) and Church and White (purple squares).
Kemp et al. say that the global rate of sea level rise has risen steadily since the year 1700, that it exceeded 3 mm per year in 1950, that it has increased ever since, and that in 2000 it was almost 5 mm/year.
Jevrejeva and Church & White, on the other hand, say it has never been above 3 mm/year, that it varies up and down with time, and in 2000 it was ~ 2 mm/year. In other words, their claims don’t agree with observations at all.
In addition, the Kemp 2011 results show the rate of sea level rise started increasing about 1700 … why would that be? And the rate has increased since then without let-up.
So we can start with those two large issues — the estimates of Kemp et al. for both sea level and sea level rise are very different from the estimates of established authorities in the field. We have seen this before, when Michael Mann claimed that the temperature history of the last thousand years was very different from the consensus view of the time. In neither case has there been any sign of the extraordinary evidence necessary to support their extraordinary claims.
There are further issues with the paper, including in no particular order:
1. Uncertainties. How are they calculated? They claim an overall accuracy for estimating the sea level at Tump Point of ± 40 mm (an inch and a half). They say their “transfer function” has errors of ± 100 mm (4 inches). Since the transfer function is only one part of their total transformation, how can the end product be so accurate?
2. Uncertainties. The uncertainties in their Figure S2 (A) (shaded dark and light pink in Figure 1 above) are constant over time. In other words, they say that their method is as good at predicting the sea level two thousand years ago as it is today … seems doubtful.
3. Uncertainties. In Figure 4(B) of the main paper they show the summary of their reconstruction after GIA adjustment, with the same error bands (shaded dark and light pink) as shown in Figure S2 (A) discussed above. However, separately in Figure 4(B) they show a much wider range of uncertainties due to the GIA adjustment. Surely those two errors should add in quadrature, yielding a wider overall error band than the one shown.
4. Tidal range. If the tidal range has changed over time, it would enter their calculations as a spurious sea level rise or fall in their results. They acknowledge the possible problem, but they say it can’t happen, based on computer modeling. However, they would have been better advised to look at the data rather than foolishly placing their faith in models built on sand. The tidal range at Oregon Inlet Marina, a mere ten miles from their Sand Point core location, has been increasing at a rate of 3 mm per year, which is faster than the Kemp reconstructed sea level rise in Sand Point. Since we know for a fact that changes in tidal range are happening, their computerized assurance that they can’t happen rings more than a bit hollow. This is particularly true given the large changes in the local underwater geography in the area of Sand Point. Figure 4 shows some of those changes:
Figure 4. The changes in the channel between Roanoke Island and the mainland, from 1733 to 1990.
Note the shallows between the mainland and the south end of Roanoke Island in 1733, which are noted on charts up to 1860, and which have slowly disappeared since that time. You can also see that there are two inlets through the barrier islands (Roanoke Inlet and Gun Inlet) which have filled in entirely since 1733. The changes in these inlets may be responsible for the changes in the depths off south Roanoke Island, since they mean that the area between Roanoke and the mainland cannot easily drain out through the Roanoke Inlet at the north end as it did previously. Their claim that changes of this magnitude would not alter the tidal range seems extremely unlikely.
5. Disagreement with local trends in sea level rise. The nearest long-term tide station in Wilmington shows no statistically significant change in the mean sea level (MSL) trend since 1937. Kemp et al. say the rise has gone from 2 mm/year to 4.8 mm per year over that period. If so, why has this not shown up in Wilmington (or any other nearby locations)?
6. Uncertainties again, wherein I look hard at the math. They say the RMS (root mean square) error in their transfer function is 26% of the total tidal range. Unfortunately, they neglected to report the total tidal range; I’ll return to that in a minute. Since 26% is the RMS error, the 2 sigma error is about twice that, or roughly 50% of the tidal range. Consider that for a moment. The transfer function relates the foraminiferal assemblage to sea level, but the error is half of the tidal range … so in the best case their method can’t even say with certainty whether the assemblage came from above or below the mean sea level …
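To make the back-of-envelope argument above concrete, here is a minimal sketch in Python. The 26% RMS figure is from the paper as quoted in this post; the 200 mm tidal range is roughly the VDatum estimate for Sand Point discussed elsewhere in the post; treating the 2 sigma error as twice the RMS is this post's simplification, not anything the paper states.

```python
# Back-of-envelope check: can a transfer function with RMS error of
# 26% of the tidal range distinguish "above MSL" from "below MSL"?
tidal_range_mm = 200.0      # assumed, ~VDatum estimate for Sand Point
rms_fraction = 0.26         # RMS error as fraction of tidal range (from the paper)

two_sigma_fraction = 2 * rms_fraction            # ~0.52 of the range
two_sigma_mm = two_sigma_fraction * tidal_range_mm  # ~104 mm
half_range_mm = tidal_range_mm / 2               # MSL sits ~100 mm from MHHW

# If the 2-sigma error exceeds half the range, a sample at MSL cannot
# be confidently placed above or below mean sea level.
print(two_sigma_mm, half_range_mm, two_sigma_mm > half_range_mm)
```

With these assumed numbers the 2 sigma error (~104 mm) is indeed larger than the half-range (~100 mm), which is the point being made above.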
Since the tides are so complex and poorly documented inside the barrier islands, they use the VDatum tool from NOAA to estimate the mean tidal range at their sites. However, that tool is noted in its documentation as being inaccurate inside Pamlico Sound. The documentation says that, unlike all other areas, whose tidal ranges are estimated from tidal gauges and stations, in Pamlico Sound the estimates are based on a “hydrodynamic model”.
They also claim that their transfer function gave “unique vertical errors” for each estimate that were “less than 100 mm”. This implies that their 2 sigma error was 100 mm. Combined with the idea that their 2 sigma error is 50% of the tidal range, this in turn implies that the tidal range is only 200 mm or so at the Sand Point location. This agrees with the VDatum estimate, which is almost exactly 200 mm.
However, tides in the area are extremely location dependent. Tidal ranges can vary by 100% within a few miles. This also means that the local tidal range (which is very local and extremely dependent on the local geography) is very likely to have changed over time. Unfortunately, these local variations are not captured by the VDatum tool. You can download it from here along with the datasets. If you compare various locations, you’ll see that VDatum is a very blunt instrument inside Pamlico Sound.
That same VDatum site gives the Pamlico Sound two sigma errors (95% confidence interval) in converting from Mean Sea Level to Mean Higher High Water (MHHW) as 84 mm, and to Mean Lower Low Water (MLLW) as 69 mm.
The difficulty arises because the tidal range is so small. All of their data is converted to a “Standardized Water Level Index” (SWLI). This expresses the level as a percentage of the tidal range, from 0 to 100. Zero means that the sample elevation is at Mean Lower Low Water, 100 means it is at MHHW. The tidal range is given as 200 mm … but because it is small and the errors are large, the 95% confidence interval on that tidal range is from 90 mm to 310 mm, a variation of more than three to one.
Their standardized water level index (SWLI) is calculated as follows:
SWLI = (Sample Elevation – MLLW) / (MHHW – MLLW) x 100 (Eqn. 1)
When adding and subtracting amounts the errors add quadratically. The sample elevation error (from the transfer function) is ± 100 mm. The MLLW and MHHW two sigma errors are 69 mm and 84 mm respectively.
So … we can put some numbers to Equation 1. For ease of calculation, let’s suppose the sample elevation is 140 mm, MLLW is 0 mm, and MHHW is 200 mm. Mean sea level is halfway between high and low water, or about 100 mm. Including the errors (shown as “±” values), the numerator of Eqn. 1 becomes (in mm)
(Sample Elevation – MLLW) = (140 ± 100 – 0 ± 69)
Since the errors add “in quadrature” (the combined error is the square root of the sum of the squares of the individual errors), this gives us a result of 140 ± 122 mm
Similarly, the denominator of Eqn. 1 with errors adding in quadrature is
(MHHW – MLLW) = (200 ± 84 – 0 ± 69) = 200 ± 109 mm
Now, when you divide or multiply numbers that have errors, you need to first express the errors as a percentage of the underlying amount, then add them in quadrature. This gives us
(140 ± 87%) / (200 ± 55%) × 100
This is equal to (0.7 ± 103%) × 100, or 70 ± 72, where both numbers are in SWLI units (percent of the tidal range). Since the tidal range is 200 mm, the total uncertainty on our sample is about 72 percent of that, or ± 144 mm. So at the end of all their transformations, the uncertainty in the sample elevation (± 144 mm) is larger than the sample elevation itself (140 mm).
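The quadrature arithmetic worked through above can be sketched as a few lines of Python, using the same illustrative numbers (sample elevation 140 ± 100 mm, MLLW 0 ± 69 mm, MHHW 200 ± 84 mm); these are the post's worked-example values, not the paper's actual data.

```python
from math import sqrt

def add_in_quadrature(*errs):
    """Combined error of a sum or difference of independent quantities."""
    return sqrt(sum(e * e for e in errs))

# Illustrative values from the worked example (all in mm)
sample, e_sample = 140.0, 100.0   # sample elevation, transfer-function error
mllw,   e_mllw   = 0.0,   69.0    # Mean Lower Low Water
mhhw,   e_mhhw   = 200.0, 84.0    # Mean Higher High Water

# Numerator and denominator of the SWLI formula, errors in quadrature
num   = sample - mllw
e_num = add_in_quadrature(e_sample, e_mllw)   # ~122 mm
den   = mhhw - mllw
e_den = add_in_quadrature(e_mhhw, e_mllw)     # ~109 mm

# For division, combine *relative* errors in quadrature
rel_err = add_in_quadrature(e_num / num, e_den / den)  # ~1.03

swli   = num / den * 100        # 70, in SWLI units
e_swli = swli * rel_err         # ~72 SWLI units
e_mm   = e_swli / 100 * den     # ~144 mm back in elevation units

print(round(swli), round(e_swli), round(e_mm))
```

Running this reproduces the figures above (to within rounding): an SWLI of 70 with an uncertainty of roughly ± 72 SWLI units, i.e. about ± 144 mm, larger than the 140 mm sample elevation itself.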
All of that, of course, assumes that I have correctly interpreted their very unclear statements about the uncertainties in their work. In any case, how they get a Tump Point two sigma error of about 40 mm (an inch and a half) out of all of that is a great mystery.
Those are my problems with the study. Both the rate and the amount of their reconstructed sea level rise in the last fifty years are much greater than observations; tidal ranges in the area are varying currently and are quite likely to have varied in the past despite the authors’ assurances otherwise; and their methods for estimating errors greatly underestimate the total uncertainty.
w.
[UPDATE] One other issue. They say regarding the C14 dating:
High-precision 14C ages (8) were obtained by preparing duplicate or triplicate samples from the same depth interval and using a pooled mean (Calib 5.0.1 software program) for calibration.
This sounded like a perfectly logical procedure … until I looked at the data. Figure 5 is a plot of the individual data, showing age versus depth, from Supplementary Tables DR3 and DR4 here. They have used the “pooled mean” of three samples at 60 cm depth, and three samples at 80 cm depth.
Figure 5. Age and depth for the Sand Point samples in the top metre of the core. Red squares show C14 dates. Horizontal black bars show the 2-sigma uncertainty (95% confidence interval).
Look at the 60 cm depth. The three samples that they tested dated from 1580, 1720, and 1776. None of their error bars overlap, so we are clearly dealing with three samples that are verifiably of different ages.
Now, before averaging them and using them to calibrate the age/depth curve … wouldn’t it make sense to stop and wonder why two samples taken from the exact same one-centimetre-thick slice of the core differ in age by nearly two hundred years?
The same is true at the 80 cm depth, where the ages range from 1609 to 1797. Again this is almost a two hundred year difference.
What am I missing here? How does this make sense, to average those disparate dates without first figuring out what is going on?
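For what it's worth, there is a standard statistical check (Ward & Wilson, 1978) normally applied before pooling radiocarbon dates: a chi-square test of whether the dates are mutually consistent given their errors. A minimal sketch, using the three 60 cm dates from the post; the ± 30 year 1-sigma errors are my assumption for illustration only, since Figure 5 shows the uncertainties graphically rather than numerically.

```python
from math import fsum

def pooled_mean_test(dates, sigmas):
    """Weighted pooled mean plus the Ward & Wilson consistency statistic.

    The statistic is compared against chi-square with len(dates) - 1
    degrees of freedom; if it exceeds the critical value, the dates
    should not be pooled.
    """
    w = [1.0 / s ** 2 for s in sigmas]
    pooled = fsum(d * wi for d, wi in zip(dates, w)) / fsum(w)
    t = fsum(((d - pooled) / s) ** 2 for d, s in zip(dates, sigmas))
    return pooled, t

dates  = [1580.0, 1720.0, 1776.0]  # 60 cm dates from the post
sigmas = [30.0, 30.0, 30.0]        # assumed 1-sigma errors, for illustration

pooled, t = pooled_mean_test(dates, sigmas)
# 95% critical value of chi-square with 2 degrees of freedom is ~5.99
print(round(pooled), round(t, 1), t > 5.99)
```

With these assumed errors the statistic comes out far above the critical value, i.e. the test would flag these dates as inconsistent and unsuitable for pooling, which is exactly the concern raised above.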

Willis
I found out (by, gasp, asking one of the authors) that the authors had submitted additional information and code, but that PNAS hasn’t linked it to the article yet – they are working on that. I believe it’s reproducible without that extra data, but I do understand that mileage may vary.
The curve you are looking at is a very low order polynomial fit to the assorted data – the steep curve at the end of the 20th century is a result of fitting it to the severe change in slope at the end of the 19th century, and hence the overshoot at 2010. I personally would have preferred them to fit four linear segments to the different slopes they had identified, based upon breakpoint analysis, but given that some of the changes (MWP, LIA) weren’t all that abrupt, I would have to play more with the stats to see which gave a better fit.
Table S1 in the supplemental data does give the raw values with variances, albeit without the GIA and site offsets – it’s not that hard to drop that into Excel and reproduce the raw datapoint curve and do your own fits.
—
My biggest issue with this column, and with your previous post on the subject, is that rather than starting from the assumption that the authors (of which Mann is again 4th of 6, not the lead) were honestly presenting data, you led off with innuendo and accusations of malfeasance. Where you had questions or points that you thought lacked clarity, you did not ask the authors for clarification (yes, I know you and Mann don’t get along, but there are 6 authors here), or read the references, or look at similar literature (all basic methods for clarifying what you are reading), but instead proceeded with more innuendo and accusations. The transfer function you complained about in the previous post, for example? Reference number 7, fully detailed, yet you didn’t bother to check the references, and just claimed it to be undisclosed with more innuendo. Yes, you would have to pay to read the article where it was published – not all journals are free, and everyone I know has to cough up the subscription prices for numerous journals to keep current.
You simply and straightforwardly attacked a paper due to (in my opinion) who one of the authors was, and because it presented a ‘hockey stick’ – regardless of the fact that the data was processed in a well established fashion for sea level reconstructions. That’s not science, Willis – that’s an agenda.
Given that approach, I really don’t expect you to treat the authors, their methods, or their data with any actual consideration of what’s presented, but rather with more polemic. I have to say that I’ve seen much better from you, and have enjoyed previous discussions on this site – I am quite disappointed.
RoHa says:
June 26, 2011 at 5:24 pm
“O.K. Time for the easy version again.
Is sea level going up, down, or sideways?
Inquiring minds want to know.”
It’s only the sand that’s going up, down, and sideways; with every good storm. The C14 samples only tell us that warmmongers do not possess inquiring minds. In fact, I doubt any warmer will dare to take more than a cursory dismissive glance at this page. Even the trolls won’t touch it.
KR: “In terms of predictive capability, I would definitely go with the tidal gauge data, as it’s far more precise. But in terms of long term reconstruction of the last 2K years, I believe (just my opinion, mind you) that the point of the paper is to establish that foraminiferal data provides a reasonable estimate.”
I think you can clearly see that it has NO capability for hindcasting. Look at the tide gauge data for the last 100 years: if you had used the North Carolina data and zeroed it to the present day, then by 1900 you would already be seeing a drop 50% larger than the real measured data. If it is 50% out in hindcasting over 100 years, then how accurate will it be after 2000 years? You couldn’t use this data to give any clue about the sea level – you couldn’t even tell if the sea level was higher or lower (and this is BEFORE we get into the size of those massive error bars – bearing in mind that you need to go out BEYOND those marked to reach just a 90% confidence interval).
It really is time you opened your mind to the possibility that this paper is a pile of bull-crap.
I make you the same offer I made the others. Come back with the details of their three “transfer functions” and their raw data, and we can move forward. Your claim that it is reproducible without complete details of their methods and their data lacks credibility.
And blaming the lack of archiving on PNAS? Very weak … the archives for these datasets already exist, and responsible authors know that and archive the data themselves. Blame PNAS because they haven’t archived their own data? It is to laugh … and not only that, but you believed their excuse and didn’t pursue it any further. Another successful interaction in the world of climate science obfuscation: you came away as empty-handed as you arrived, but you think you got something … “(gasp)”, as you say, indeed.
Yes, and you might even believe that they chose that particular polynomial fit by chance or something, rather than picking it because of its “ski-jump” shape. Others of us are more … well let me say “cautious” in these matters.
But my point stands. Regardless of exactly how they did it, they have produced a reconstruction which does an abysmal job of “reconstructing” present temperatures … and your advice is to just ignore that? Why on earth should I ignore it?
KR, since their whiz-bang “Summary of North Carolina Sea Level Reconstructions” gets the present horribly wrong, I think I’ll pass on believing what it says about two thousand years ago. Perhaps you can explain how a reconstruction that does a very bad job reconstructing today should be believed regarding yesterday … but until you do explain that, I’ll say that if it gets today wrong there is absolutely no reason to believe it about the past.
I see that part of the problem is that you are conflating “raw data” with “results”. Raw data is their measurements. Results are what they think the measurements mean. So no, Table S1 doesn’t give the raw data. It gives some of their results. To date they have published neither the raw data nor the “Summary Reconstruction”.
I assume nothing, I look at the past. Michael Mann has lied, concealed critical parts of his work and data, and destroyed evidence. Given that history, if you start out with any other assumption about anything his name is on, you are being unbelievably naive. And anyone who wants Mr. “Delete the Emails” Mann, Mr. “CENSORED_TO_1400” Mann, Mr. “Asking me for my data is intimidation” Mann on their scientific team is equally irresponsible.
So yes, call me crazy but I do expect Michael Mann to do as he did in the past. Do you have any information that he’s changed his ways, or turned over a new leaf? Because if not, you’re a fool not to suspect him and the people he works with of putting their thumb on the scales.
And that’s not my “assumption”. That’s an honest reading of Mann’s documented history. You can start out by assuming Mann is honest if you wish … me, I like to stick closer to the facts.
I looked at both the references and the similar literature. And no, reference number 7 did not give the details of the three transfer functions that they used in the Kemp 2011 study. If you think that they are in reference number 7, I invite you to tell us exactly what the three transfer functions are, and how the three transfer functions differ from each other … I await your answer. However, I fear that you’ll neither admit you were wrong, nor will you be able to produce the three transfer functions and point out their differences. So surprise me.
Oh, please, stop attacking my motives, KR. You don’t have a clue about my motives, and you just look petty when you attack them. I discussed the paper and came to negative conclusions for several perfectly valid scientific reasons.
1. Lack of transparency. They didn’t publish their data. They didn’t publish their code. They didn’t publish their “transfer functions” (despite your claims to the contrary). They didn’t publish their results for the “Summary Reconstruction”. They separated out the GIA and other errors rather than adding them all together to give a total error (making it look like their errors are smaller than they are).
2. Results that do not agree with reality. Their “Summary Reconstruction” claims a sea level rise of 8″ since 1950, and says that the rate of sea level rise has been above 3mm per year since 1950 and is now 5mm per year.
3. Poor choice of location. Why would anyone try to establish sea level in a shifting maze of islands and marshes, some sinking and some rising, with changing salinity and varying tidal ranges, in an area where GIA is a huge issue and the nearest long-term tidal records disagree strongly? Is the light better there or something? *
4. Inadequate handling of the question of historical changes in tidal range (they assume there were none, when observations show that they are occurring as we speak in that very area). Their own data shows that the Sand Point site was like the mainland sites for hundreds of years … then like the barrier island sites for hundreds of years … then back to the mainland pattern. They claim these large changes in salinity occurred without any change in the tidal range, which seems very doubtful to me.
Now you might not agree with what I say are problems with their science … but claiming that I’m just making it all up because of a personal vendetta looks like an attempt on your part to avoid dealing with the issues. Might not be one … but that’s how it looks.
Deal with the facts and let your fantasies about my motives go. Do you think the location choice was good? Do you think they should have archived their data? Do you think we’ve seen six inches of sea level rise since 1950? Do you think there have been no changes in tidal range at Sand Point in 2,000 years, despite large changes in both the barrier islands and the channel between Sand Point and the mainland?
Those are the questions here, not whether my motives are pure or not.
Sorry to disappoint you … but attacking my motives means nothing about whether my scientific claims are correct. And arguing “ad hominem” like that just makes you look like you don’t think you can win by discussing the science.
To echo your words … I expect better from you than an ad hominem attack.
w.
PS – As I indicated somewhere above, I used to write to scientists regularly to ask them to archive their data. After a long string of unqualified failures using that method, these days any study where the authors haven’t already archived the data just gets rough treatment … so sue me. I’m done with begging scientists to follow the scientific method. If they don’t follow it, I no longer write them. I just point out, very publicly, that they are not following the scientific method. It’s not like they haven’t been warned; the necessity of archiving data has been highlighted many times. And as I said, the fact that the PNAS vanity press doesn’t require that may be why they published there (along with the total lack of critical peer review).
I do note that you wrote the authors … and what did you get, KR?
Did you get any data? Did they actually reveal anything? Did they send you their code, or reveal their transfer functions?
Not according to your account. According to you, all they did was blame PNAS … you may want to reconsider your actual results from writing to them. After far too much of that same kind of excuse and tap-dancing, you might see why I’ve given up on that path. I say publish [the code and data] or perish.
PPS – * “Is the light better there” refers to an old Sufi story. A man is wandering around under a street light looking down at the ground. A bystander asks what he’s doing. “I dropped my house keys.” The bystander helps him look for a while and when they don’t find the keys he says, “Are you sure you dropped them here by the street lamp?”
“No,” the man replies, “I lost them down the street, but the light is so much better here”
Willis – Sadly, just the polemical response I expected.
Another successful interaction in the world of climate science obfuscation…
…you might even believe that they chose that particular polynomial fit by chance or something, rather than picking it because of its “ski-jump” shape. Rather than a better statistical fit across the several thousand years with minimum assumptions? Hmmm….
Michael Mann has lied, concealed critical parts of his work and data, and destroyed evidence.
And no, reference number 7 did not give the details of the three transfer functions that they used in the Kemp 2011 study. Actually, it specifically gave the depth/species ratios you were whining about as ‘made up’ in your last post.
Lack of transparency. They didn’t publish their data. They didn’t publish their code. They didn’t publish their “transfer functions” (despite your claims to the contrary). Heh. Give it a week or so on the ‘code’ publishing – the author I contacted, Vermeer, was quite surprised that the data wasn’t linked yet. And the transfer function you complained about is published – in a clear reference you apparently hadn’t looked at until I mentioned it. Besides – reading the paper, I believe I could (with some time in the swamps) replicate and test this paper’s results, the major real concern with a well written paper.
[various complaints about the science] It’s clear that you have either (a) not read the methods, or (b) do not understand them. The authors have specifically addressed concerns about GIA, wetland modification, etc., using methods standard to the field of Foraminifera sea level reconstructions, but you are simply emphasizing uncertainties without addressing how the authors approached them. Weak sauce…
So yes – I have questioned your motives, based upon what you have written – an attack post lacking in scientific content. As I said before, you have attacked a paper by Kemp et al (with Mann a contributing, but not lead author) on sea level from paleo records with additional work to see how it matches up to temperature reconstructions – apparently (since I don’t read minds) because one of the authors is a convenient Ad Hominem target, and more importantly because it clearly indicates climate change. Again – not science, but polemic. And the scientific validity of polemic is, well, zero.
KR, you keep making claims about the availability of enough information for replication; you definitely talk a good fight.
Yet when I ask the simplest of questions (what are the Kemp three transfer functions and the differences between them) you waffle, you duck and weave, you pronounce from on high, you make claims about what I have read, you question my motives and make scurrilous attacks …
But where are the details of the three transfer functions, KR? You’re great at making unsupported claims, and you are clearly expert at spewing all kinds of nasty innuendoes and ad hominem attacks, but when it comes to describing the transfer functions you suddenly develop lockjaw that’s as total as that of the authors …
Funny, that.
w.
PS – Stop the bullsh*t about how “the transfer function [I] complained about is published – in a clear reference you apparently hadn’t looked at until I mentioned it.” You can say that over and over, it proves nothing.
As I told you when you made that claim before, if that is so, then PRODUCE THE THREE TRANSFER FUNCTIONS and explain the differences between them. Anything else is just handwaving.
KR says:
June 28, 2011 at 6:13 pm
Actually, that’s just nasty misinformation from your dark fantasies. Do a search on “made up” in my last post, and what do you find? Nothing. The only thing “made up” is your specious claim. I didn’t say they made things up. I said they ignored things, and that they re-used the Tiljander proxies upside down again, in one of the most laughable long-running scientific idiocies on record. I said that they should archive their data and code. But I didn’t accuse them of making things up; that’s on you.
Thanks for the opportunity to cite my last post, though, some folks might not have read it. I love free publicity like that.
w.
Has there been any research on how tectonic plate movement might influence sea level?
My guess is that most of the measured change can be explained that way …
Makes a lot more sense than temperature, at least …
KR: you know something, I agree with you on the Dr Mann thing. Kemp et al. have put their own names to this pile of poop and therefore they are all tarred with the same brush. It is unfair to single out Dr Mann as being particularly corrupt in this case – the whole team on this paper are liars.
Normal practice in attempting to use a proxy to extend our understanding of something outside the instrumental record would be to ensure the proxy fits well within the instrumental period. This is something they have bent over backwards to avoid doing, with all manner of graphs that attempt to obscure the truth by failing to superimpose the hindcasts as they should. Figure S6 of the supporting document is as close as they get to showing it done properly, and it shows that the proxy is wildly inaccurate going back 100 years. It can’t possibly be relied upon to hindcast back 2000 years – that should have been the only conclusion of the paper, but they claimed the opposite. They are LIARS. All of them are LIARS. There is plenty of reason to believe that the paper has been presented as it has in order to cover up the fact that it is based on a LIE: from the way they have tried to obscure the weakness of the proxy to hindcast within the instrumental period, to the way they have tried to use change point regression analysis in a manner in which it can’t be reliably used, to the way they have used claimed “tide gauge data” which is actually 50% computer modelling based on assumed global temperatures, to the way they sought to avoid a genuine peer review process.
This paper is a travesty of the truth. It should be studied and picked apart by anybody with an interest in the truth until every lie within it has been exposed. It should be held up as a tragic example of how low certain scientists will stoop to promote certain false ideas. It should be used to pillory all the scientists that had the audacity to put their names to it. These people should never have the right to put their name to any scientific paper ever again, let alone any paper that is specifically related to AGW. They are condemned by their own words and actions.
Willis,
The earliest mention I can find of the transfer functions is this Horton et al 2006 paper (no Kemp). No SI. Free download from NC state site:
http://dcm2.enr.state.nc.us/slr/Horton%20et%20al%202006%20diatoms.pdf
Discussion section is about development and use, lots of details, type of transfer function used mentioned, but the elusive actual transfer function isn’t there. From the conclusions:
There is mention of assorted caveats with using the method, and possible “cherry picking” among parameters for the best results which might be justified. Well, they say they were first, thus I shall presume the usual warnings against using the “first of” something apply.
This was referenced by:
Engelhart, S.E., B.P. Horton, and A.C. Kemp. 2011. Holocene sea level changes along the United States’ Atlantic Coast. Oceanography 24(2):70–79, doi:10.5670/oceanog.2011.28.
Free download, no SI. Transfer functions not mentioned by name, but much discussion of the methodology of relative sea level (RSL) reconstructions with math on error calculations. From the Concluding Remarks:
Hope this helps.
Curious. Someone comes up with something “new” and “scientific,” and from that comes a rotating assemblage of co-authors churning out a nigh-endless stream of papers across numerous journals, all basically just different presentations of the same data. Has such credential-padding long been a behavior in science, or is this an example of the spread of Hockey Team Publishing Syndrome?
Re: my previous post:
Sure, after I proofread it ten times and finally posted, I see in the first quoted section that when I corrected the mush from the bad copy-and-paste from the pdf (multiple columns mashed together), I somehow duplicated some text while botching it a bit. Oh well, mistake noted; the original source with the correct text is linked and available.
kadaka (KD Knoebel) says:
June 29, 2011 at 5:22 am
Thanks, KD. I’ve tried tracing the paper’s claims back in various forms and forums. Lots of general mentions of the transfer functions … but no details of any of the actual transfer functions used by Kemp et al. Despite that, some people here keep claiming that there’s enough info to see if the authors made any mistakes, and that the transfer functions are in fact available … but they are strangely unable to produce the actual functions.
w.
Willis,
I’m beginning to think these “transfer functions” are not something that can be printed out and analyzed. Following the Horton trail I found this 2005 paper, free download of this “postprint” version (note: link given for data repository on pg 30 doesn’t work):
http://repository.upenn.edu/ees_papers/39/
From page 11:
Reference citation:
Searching by the names yielded many mentions of the program with different version numbers, years (all pre-2000), and often “unpublished software.” It’s apparently discipline-specific, used to make educated guesses about environmental variables like moisture and salinity based on the amounts of assorted micro-critters like diatoms.
Just tonight I Googled this (searched only on “Juggins”; didn’t see this page when using both names):
http://www.staff.ncl.ac.uk/staff/stephen.juggins/software.htm
CALIBRATE was an old DOS program by Juggins; elsewhere I’ve seen it mentioned as a C++ program. Some time ago it was replaced by C2, for Windoze (XP, Vista), which also replaces another program mentioned in that Horton paper (M.A.T.). Free to download and use, with the limitation of a maximum of 75 samples without a license.
Thus the vaunted “transfer functions” may be nothing more than an input file in a proprietary format, used by a piece of proprietary software. Black-box operation: you just get the results as output. Also, see this discussion. Alternatives in R and something called “paltran” are mentioned; one R package is “rioja,” which is also by Juggins. But the paltran version doing WA-PLS gives different results than rioja or C2. Not a good sign when a method can’t be properly replicated by others.
The WA-PLS method was laid out in much mathematical detail in a much-cited chapter circa 1993 (Hydrobiologia); the link leads to a paywall for just that chapter. But what appears to be the same chapter appeared in another book (Multivariate Environmental Statistics), with a free download of the chapter:
http://www.biometris.wur.nl/NR/rdonlyres/71EBBDE7-DE7B-4956-9BEA-2B807C34A8EF/52117/terBraak1993JugginsBirksVoetWAPLS.pdf
So if you could acquire the “transfer functions,” you may just have some input files for an old piece of DOS software that you’d also have to dig up, and you still wouldn’t know exactly what was being done without getting the source code or doing some likely-verboten reverse engineering. Going by the licensing of C2, and the Kemp 2011 SI saying the North Carolina dataset has 193 surface samples, you may also need the paid-for version; and the original CALIBRATE program might not be available at all.
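For readers wondering what a “transfer function” actually computes: the simplest member of the family, plain weighted averaging (of which WA-PLS is an elaboration), is short enough to sketch. This is a toy illustration of the general technique with made-up abundance data; the function names `wa_fit` and `wa_predict` are mine, and real implementations such as C2 or rioja add deshrinking, component selection, and cross-validated error estimates on top.

```python
import numpy as np

def wa_fit(abund, env):
    # abund: (n_samples, n_taxa) matrix of relative abundances from the
    # modern training set; env: (n_samples,) observed environmental
    # variable (e.g. elevation in the tidal frame).
    # Each taxon's "optimum" is the abundance-weighted mean of env.
    return (abund * env[:, None]).sum(axis=0) / abund.sum(axis=0)

def wa_predict(abund, optima):
    # Reconstruct env for fossil samples as the abundance-weighted
    # mean of the optima of the taxa present in each sample.
    return (abund * optima[None, :]).sum(axis=1) / abund.sum(axis=1)

# Toy example: two taxa found exclusively at elevations 2.0 and 5.0.
optima = wa_fit(np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([2.0, 5.0]))
# A fossil sample with a 50/50 mix reconstructs to the midpoint, 3.5.
pred = wa_predict(np.array([[0.5, 0.5]]), optima)
```

WA-PLS extends this by using successive partial-least-squares components of the abundance data rather than a single weighted average, which is part of why different implementations (rioja, paltran, C2) can diverge in the details.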
You may have better luck waiting for KR to provide that “freely available” info. 😉
Willis, FYI:
Just stumbled upon this thread … over at Prof. Rahmstorf’s blog ‘KlimaLounge’ he pointed me to an archive of data and code since I was interested in the proxy-DCA too:
http://www.pik-potsdam.de/sealevel/en/data.html
Unfortunately the Kemp reconstruction part is missing from the archive (it is confined to the results of his RSL reconstruction), so I guess they’ve only documented their own part of the work package. I just asked Rahmstorf whether he knows if Kemp will fill in his part of the documentation as well.
Anyway, let’s wait and see what will be available once they’ve finished adjusting the documentation part of the paper.
Wolfgang Flamme says:
July 1, 2011 at 6:58 pm
Yes, I’m still waiting for the input data and code for the Kemp sea level reconstruction. They’ve given the outputs of that reconstruction but not the inputs.
w.