Marcott's proxies – 10% fail their own criteria for inclusion

Note: Steve McIntyre is also quite baffled by the Marcott et al. paper, finding it unreproducible given the information currently available. I’ve added some comments from him at the bottom of this post – Anthony

Guest Post by Willis Eschenbach

I don’t know what it is about proxies that makes normal scientists lose their senses. The recent paper in Science (paywalled, of course) entitled “A Reconstruction of Regional and Global Temperature for the Past 11,300 Years” (hereinafter M2013) is a good example. It has been touted as the latest hockey stick paper. It is similar to the previous ones … but as far as I can see, it’s only similar in how bizarre the proxies are.

Nowhere in the paper do they show you the raw data, although it’s available in their Supplement. I hate it when people don’t show me their starting point. So let me start by remedying that oversight:

all marcott proxies

Figure 1. All of the proxies from M2013. The colors are only to distinguish individual records; they have no meaning otherwise.

I do love the fact that from that collection of temperature records they draw the conclusion that:

Current global temperatures of the past decade have not yet exceeded peak interglacial values but are warmer than during ~75% of the Holocene temperature history.

Really? Current global temperature is about 14°C … and from those proxies they can say what the past and present global average temperatures are? Well, let’s let that claim go for a moment and take a look at the individual records.

Here’s the first 25 of them:

marcott proxies 1 to 25

Figure 2. M2013 proxies 1 to 25. Colors as in Figure 1. Note that each panel has its own vertical axis. Numbers to the left of each title are row/column.

Well … I’d start by saying that it seems doubtful that all of those are measuring the same thing. Panel 3/1 (row 3, column 1) shows the temperature decreasing for the last ten thousand years. Panels 4/4 and 4/5 show the opposite, warming for the last ten thousand years. Panel 4/3 shows four thousand years of warming and the remainder cooling.

Let’s move on to the next 25 contestants:

marcott proxies 26 to 50

Figure 3. M2013 proxies 26 to 50. Colors as in Figure 1. Note that each panel has its own vertical axis. Numbers to the left of each title are row/column.

Here we see the same thing. Panels 1/1 and 4/1 show five thousand years of warming followed by five thousand years of cooling. Panel 1/5 shows the exact opposite, five thousand years of cooling followed by five thousand of warming. Panel 4/5 shows steady warming, panel 5/2 shows steady cooling, and panel 2/2 has something badly wrong near the start. Panel 2/4 also contains visible bad data.

Onwards, we near the finish line …

marcott proxies 51 to 73

Figure 4. M2013 proxies 51 to 73. Colors as in Figure 1. Note that each panel has its own vertical axis. Numbers to the left of each title are row/column.

Panel 2/1 shows steadily rising temperatures for ten thousand years, as does panel 3/4. Panels 4/1 and 5/1, on the other hand, show steadily decreasing temperatures. Panel 4/2 has a hump in the middle, but panel 1/2 shows a valley in the middle.

Finally, here’s all the proxies, with each one shown as anomalies about the average of its last 2,000 years of data:

all marcott proxies anomalies

Figure 5. All Marcott proxies, expressed as anomalies about their most recent 2,000 years of record. Black line shows 401-point Gaussian average. N=9,288.

A fine example of their choice of proxies can be seen in the fact that they’ve included a proxy which claims a cooling of about nine degrees in the last 10,000 years … although to be fair, they’ve also included some proxies that show seven degrees of warming over the same period.

I’m sorry, guys, but I’m simply not buying the claim that we can tell anything at all about the global temperatures from these proxies. We’re deep into the GIGO range here. When one proxy shows rising temperatures for ten thousand years and another shows dropping temperatures for ten thousand years, what does any kind of average of those two tell us? That the temperature was rising seven degrees while it was falling nine degrees?

And finally, their claim of turning that dog’s breakfast shown in Figure 1 into an absolute global temperature and comparing it to the current 14°C average temperature estimate?

Don’t make me laugh.

I say the reviewers of this paper didn’t use their Mark I eyeball. The first thing to do when dealing with a multi-proxy study is to establish ex-ante criteria for the selection of the proxies (“ex-ante” meaning choose your criteria before looking at the proxies). Here are their claimed criteria …

This study is based on the following data selection criteria:

• Sampling resolution is typically better than ~300 yr.

• At least four age-control points span or closely bracket the full measured interval.

• Chronological control is derived from the site itself and not primarily based on tuning to other sites. Layer counting is permitted if annual resolution is plausibly confirmed (e.g., ice-core chronologies). Core tops are assumed to be 1950 AD unless otherwise indicated in original publication.

• Each time series spans greater than 6500 years in duration and spans the entire 4500 – 5500 yr B.P. reference period.

• Established, quantitative temperature proxies

• Data are publicly available (PANGAEA, NOAA-Paleoclimate) or were provided directly by the original authors in non-proprietary form.

• All datasets included the original sampling depth and proxy measurement for complete error analysis and for consistent calibration of age models (Calib 6.0.1 using INTCAL09 (1)).

Now, that sounds all very reasonable … except that, unfortunately, more than ten percent of the proxies don’t meet the very first criterion: they don’t have a sampling resolution better than one sample per 300 years. Nice try, but eight of the proxies fail their own test.

I must say … when a study puts up its ex-ante proxy criteria and 10% of their own proxies fail the very first test … well, I must say, I don’t know what to say.

In any case, then you need to LOOK AT EACH AND EVERY PROXY. Only then can you begin to see if the choices make any sense at all. And in this case … not so much. Some of them are obviously bogus. Others, well, you’d have to check them one by one.

Final summary?

Bad proxies, bad scientists, no cookies for anyone.

Regards,

w.

==============================================================

Steve McIntyre writes in a post at CA today:

Marcott et al 2013 has received lots of publicity, mainly because of its supposed vindication of the Stick. A number of commenters have observed that they are unable to figure out how Marcott got the Stick portion of his graph from his data set. Add me to that group.

The uptick occurs in the final plot-point of his graphic (1940) and is a singleton. I wrote to Marcott asking him for further details of how he actually obtained the uptick, noting that the enormous 1920-to-1940 uptick is not characteristic of the underlying data. Marcott’s response was unhelpful: instead of explaining how he got the result, Marcott stated that they had “clearly” stated that the 1890-on portion of their reconstruction was “not robust”. I agree that the 20th century portion of their reconstruction is “not robust”, but do not feel that merely describing the recent portion as “not robust” does full justice to the issues. Nor does it provide an explanation.

Read Steve’s preliminary analysis here:

Marcott Mystery #1

[UPDATE] In the comments, Steve McIntyre suggested dividing the proxies by latitude bands. Here are those results:

marcott proxies by latitude

Note that there may be some interesting things buried in there … just not what Marcott says.

Also, regarding the reliability of his recent data, he describes it as “not robust”. It is also scarce. Only 0.6% of the data points are post 1900, for example. This raises the question of how he compared modern temperatures to the proxies, since there is so little overlap.

Finally, about a fifth of the proxies (14 of 73) have the most recent date as exactly 1950 … they said:

Core tops are assumed to be 1950 AD unless otherwise indicated in original publication.

Seems like an assumption that is almost assuredly wrong. I don’t know if that’s a difference that makes a difference; it depends on how wrong it is. If we take the error as half the distance to the next data point for each affected proxy, it averages about ninety years … pushing 1950 back to 1860 … yeah, I’ll go with “not robust” for that.

[UPDATE 2] Yes, I am shoveling gravel, one ton down, six to go … and I do get to take breaks. Here’s the result of my break, the Marcott proxies by type:

marcott proxies by type

And here’s a picture of yr. unbending author playing what we used to call the “Swedish Banjo”.

swedish banjo

Best to all,

w.

 

Skiphil
March 13, 2013 9:42 pm

Thanks for a valuable article, Willis!
Such widely disparate proxy records sure do raise questions about what they are really measuring and how wide the confidence intervals may be. Also, with such a varied assortment, the selection of the 73 proxies included seems immensely important. As with all of these multi-proxy studies, the screening to get a group of proxies may leave out proxies that could/should change the results significantly.

Adam
March 13, 2013 9:42 pm

Ahhh, it’s because you clearly don’t understand statistics! [insert ad-hom attack of your choosing]. You need to imagine a hockey stick shape and select the data sets which look more like it, and reject those which do not. You can think of a rationale later on; don’t worry about that right now. Oh, and don’t forget to divide by sqrt(n-1) to avoid any unfair bias and to prove to your colleagues that you thoroughly understand statistics. Go ahead and get your chums to fast-track a Nature paper through the ol’ peer review process for you. Pick up a Nobel Peace Prize as you pass Go, and for goodness sake, please try not to let anybody read your emails!

Lew Skannen
March 13, 2013 9:42 pm

Even a kindergarten kid would get an F for that graph as a finger painting exercise, but what would I know? I am sure that it truly does reveal the history of the planet’s climate to the 0.01 C accuracy that warmists claim….
We discuss the temperature of the planet and obtain a value of around 14C and debate it with people as if it really has some meaning. It is a strange game. It is almost like you have started a debate with your child about some pretend invisible pink elephant, and what started off as a bit of childish indulgence has become a bit more serious. Your kid has decided that the pretend elephant is sick and you are being hit with real life vet bills! Now you are having to debate the health of the pretend elephant rather than just point out that there is no such thing as a pink invisible elephant, because you know that there would be a major tantrum if you did.
So in the real world does the ‘temperature of the planet’ have any meaning? To me it is about as meaningful as the sum of the surface temperatures of ten randomly selected stars, the sum of the lengths of fifty assorted rodents, the sum of the weights of everyone in Las Vegas with an ‘R’ in their name….

NikFromNYC
March 13, 2013 10:06 pm

There is no hockey stick in their own published data: only one of the 73 proxies even remotely resembles one.

majormike1
March 13, 2013 10:36 pm

How they got to where they say they did is a fundamental mystery …

March 13, 2013 10:37 pm

That is amazing. Truly amazing. How the heck do they expect to get away with this nonsense? How many times will these watermelon “scientists” get shown up before the masses turn nasty? Seriously nasty. The more they fake this stuff, the more obvious it is their intention to deceive. One might get it “wrong” or be “misguided”, but so many of them? Over and over and over? They get blown out of the water time and again for faulty science, and they are straight back at it like they are desperate, which of course they are (that’s quite telling, too).
Well done, Willis, for exposing more carp from Those Who Should Be De-funded. Treason still merits capital punishment, doesn’t it? I sure hope these *snips* and *snip-ettes* give some thought to their future. Perhaps they should backtrack while there is still time.

Manfred
March 13, 2013 10:52 pm

Should not be hard to produce more proxies of this quality with a random number generator. Add an instrumental temperature record at the end and there will always be a hockey stick.

A Crooks
March 13, 2013 11:01 pm

Count me as mystified.
If the “core tops are assumed to be 1950 AD”, I’m wondering how much proxy data they have since 1950 to create the hockey stick ending? How many of these graphs are in or out?

Manfred
March 13, 2013 11:01 pm

A while ago Steve McIntyre moaned about being tired of wading through ‘Dreck’ from Mann and others. Going through this paper must be an extraordinarily unpleasant and painful endeavour.

thingodonta
March 13, 2013 11:06 pm

I work in mineral exploration and have learnt to have little to no time for ‘averages’. Too many examples to explain, but Steve McIntyre’s experience with minerals would probably be similar.
I’ve worked with some within the industry, usually the computer modellers, who tend to love using ‘averages’, and these are the very same guys who usually never find anything. They think if you leave out the ‘weak’ areas, and the ‘weak’ data, as well as the ‘edges’ of supposedly ‘prospective’ ground (i.e. using a form of ‘spatial averaging’), then you can concentrate on the good stuff; i.e. they average out very incomplete spatial information and think that is how you determine mineral prospectivity and deal with the uncertainties. Sometimes yes, but many times no, especially if the level of uncertainty is high, which is very common in mineral exploration. And in the last 3 cases I have seen, ore bodies were found over ‘weak’ data, in ‘weak’ areas, and at the ‘edge’ of or even outside of the supposedly ‘prospective’ ground. It just doesn’t work, averaging spatial data when the level of uncertainty and inconsistency between various datasets is high.
A bit different to the above, but in the same ball park if you get my drift, ‘averaging’ can be a shoddy and a dangerous business if you don’t know what the various uncertainties are in the first place. Modellers tend to understand this much less than people in the field.

Theo Goodwin
March 13, 2013 11:06 pm

NSF funded this disaster and one of their program directors hyped it for the media. If there is congressional oversight over NSF, the overseers should come down hard right now.

John F. Hultquist
March 13, 2013 11:08 pm

Thanks Willis.
It might help if one knew all the “proxies” – what was measured, when, how accurate, how that relates to temperature, and so on. But it all seems such a muddle, and the peer review – self-correcting “science” – fails time and again. For example, if I were a reviewer of this paper in draft form, I would write something like this:
In the first batch of 25, the bottom 2 in the middle column have these tags: “IOW225517” and “IOW225514” — so maybe they are related. One goes up sharply, then drops. The shorter one drops, but leaves open what the first 5,000 years might look like. How do those proxies support warming? Some proxies seem to bounce around wildly and some have extreme spikes that do not match anything else. Why does this happen if these proxies are supposed to relate to the same thing? Individually, some of these (maybe all of them) may provide useful information about past happenings – but Earth’s atmospheric temperature doesn’t seem to be one of them. Why does the author(s) think they do? I would have to tell the editor to whom this was submitted that I think putting this in print would ruin the integrity of the publication. Well, I might not be so polite.

geran
March 13, 2013 11:24 pm

Like the “98% consensus”, this “study” indicates more desperation than science.

March 13, 2013 11:28 pm

“I say the reviewers of this paper didn’t use their Mark I eyeball.”
Nor their white sticks or seeing-eye dogs … LOL!

R
March 13, 2013 11:43 pm

Weak Willis:
“A fine example of their choice of proxies can be seen in the fact that they’ve included a proxy which claims a cooling about nine degrees in the last 10,000 years … although to be fair, they’ve also included some proxies that show seven degrees of warming over the same period …”
Yes there are regions in the planet that have experienced large amplitudes of warming/cooling in the last 10,000 years. This is particularly plausible at high latitudes. Good Job Willis #thingsIdon’tmean

dalyplanet
March 13, 2013 11:53 pm

How do you do that, Willis? Do you teach a class?
Wow! I saw the plots before, but you make it so well presented.

Skiphil
March 13, 2013 11:55 pm

R says:
March 13, 2013 at 11:43 pm
“….Yes there are regions in the planet that have experienced large amplitudes of warming/cooling in the last 10,000 years. This is the particularly plausible at high latitudes….”

R, how do you know? And how do you know that Marcott et al. obtained a statistically sound sampling of all the earth’s surface?

tokyoboy
March 14, 2013 12:32 am

Yamal again?

March 14, 2013 1:07 am

Reblogged this on The GOLDEN RULE and commented:
More seriously questionable data, “evidence” and conclusions from the CAGW scientists, but this is not science, it is basically garbage, as pointed out by Willis Eschenbach.

johanna
March 14, 2013 1:16 am

thingodonta says:
March 13, 2013 at 11:06 pm
I work in mineral exploration and have learnt to have little to no time for ‘averages’. Too many examples to explain, but Steve’s Macintyre’s experience with minerals would probably be similar.
————————————————-
I am no mathematician or scientist, but over the years have had to evaluate ‘statistics’ about social policy issues like health and housing. Since I am lucky enough to have some experience on the ground in these areas, I long ago worked out that ‘averages’ are junk when anything remotely complex is under discussion.
Why so-called scientists pretend that this stuff is more than one of many indicators is beyond me. Even a not-too-bright real estate agent knows that ‘average’ prices in one area for a specific period are only slightly more useful than an ashtray on a motorbike when you get down to cases.
And that’s leaving aside the whole question of how good the data is in the first place.

Manfred
March 14, 2013 1:22 am

“• Established, quantitative temperature proxies”
————————————————
Gergis et al. failed with improper screening, and here we go with no screening for proxy quality at all?
Why did a whole generation of scientists screen data, when it is so much easier to take everything supposed to respond to temperature (among other variables), regardless of individual proxy issues?

wayne Job
March 14, 2013 1:26 am

I have always enjoyed reading good science fiction, for whatever man can imagine usually turns out to be possible. This is science fantasy that belongs in a section of the library that is political new age fantasy science, thus beyond my capabilities of understanding.

knr
March 14, 2013 1:45 am

Once again the ‘professionals’ work at a standard unacceptable for a student taking a degree course, for the failure to filter out such proxies would probably lead to the failure to get a pass mark.
You have to ask, are there actually any standards within climate ‘science’?

richard verney
March 14, 2013 1:58 am

Willis, an interesting summary.
When dealing with proxies, one needs a very large dollop of caution. They are notoriously unreliable, with wide error margins. As regards averaging, it appears that the usual position in climate science is that an average of a collection of sow’s ears produces a silk purse, whereas, in reality, an average of ‘crap’ remains ‘crap’.
I would have thought that the starting point with respect to each and every one of the proxies is to precisely identify what the proxy is, to detail precisely from where it was taken, and how it was taken. The study author ought then to set out, on a proxy by proxy basis, what he thinks that the proxy (in question) is measuring, and how and why it can be concluded that it is a metric for the measurement ascribed to it by the study author. Finally, the study author should evaluate each and every one of the proxies used and set out what he considers to be the reliability and the error margin of the proxy, and why he holds to that analysis.
What bugs me the most about this is how the authors can claim “Global temperatures are warmer than at any time in at least 4,000 years”, “Global temperature….. has risen from near the coldest to the warmest levels of the Holocene within the past century”, and “A heat spike like this has never happened before, at least not in the last 11,300 years” when those very authors acknowledge that the 1890-on portion of their reconstruction was “NOT ROBUST”.
In other words, on their own admission, there is no robust evidence to support the contention that it is now warmer than anytime in the past 4,000 years, nor that in the past century the temperature has risen from near the coldest to the warmest levels, nor that a heat spike like this has never happened before, etc.
The upshot of their own admission is that their study simply suggests that there is a stick, and does not evidence that there is a blade attached to the stick.
Why the MSM were not told that there is no robust evidence for the 1890s onwards beggars belief.

ducdorleans
March 14, 2013 2:09 am

Willis, thanks for the effort …
since you have the data available, have you tried leaving out the 8 “failed their own criteria” proxies, to see what effect that would have on your fig. 5?
tia

Nick Stokes
March 14, 2013 2:22 am

Willis,
Re the 300 years condition, I presume your numbers on resolution come from col 7 of Table S1 of the SM. These are headed just “resolution”, and I think they are not the sampling resolution.
If you take the Dome F d18O, it lists a resolution of 500 years. But if you look at the raw data, you’ll see values listed every 250 yr. Each 250 years is typically 7-12 m depth in the last 10000 yr, and in the header, it says that they analyze 10 cm slices.
So I think the sampling is more frequent than listed. The discussion on p 6-7 of the SI of the cited reference seems to focus on “age model error” limiting the resolution.

mwhite
March 14, 2013 2:27 am

“A number of commenters have observed that they are unable to figure out how Marcott got the Stick portion of his graph from his data set.”
Mike’s “Nature Trick”???????

Paul Matthews
March 14, 2013 2:28 am

Thank you Willis. This is a far better way to debunk the Marcott study than what Easterbrook was doing. You just need to look at their own data, as you’ve done here.
But there is something wrong with your horizontal axis labelling. In their notation year 0 is 1950. If you are using the same numbering system your plots go about 1000 years into the future. Please could you check/clarify? 🙂
Here is a plot of their proxies over the last 300 years (note my time axis goes the other way) showing that there is no 20th century upturn in their data.
Later today I will do an averaging plot like your fig 5.

Jimbo
March 14, 2013 2:31 am

Current global temperatures of the past decade have not yet exceeded peak interglacial values but are warmer than during ~75% of the Holocene temperature history.

So in other words they are saying that ~75% of the Holocene temperature is less than the current average global temperature of 14°C? I could be wrong here but I vaguely recall that previous inter-glacials were warmer than the Holocene. Should I be worried?

michaelozanne
March 14, 2013 2:43 am

“I must say … when a study puts up its ex-ante proxy criteria and 10% of their own proxies fail the very first test … well, I must say, I don’t know what to say.”
The word you are groping for is “Bullshit”……

March 14, 2013 2:46 am

Did they or did they not splice the instrument data onto the proxy data?
I haven’t studied the paper behind the pay-wall, but I am assuming they first performed an area-weighted average of the proxy data, to yield a global proxy anomaly. Looking at your curves Willis, I see no evidence of a spike in the proxy data.
So the question is, are they claiming a spike in the proxy-only data, or just in the proxy + instrument data? If it is the latter then they are comparing apples and pears, because the former have a resolution of ~100 years and the latter a resolution of 1 year.

Editor
March 14, 2013 2:55 am

One of the main flaws is this particular criterion:
“Chronological control is derived from the site itself and not primarily based on tuning to other sites. ”
This is akin to correlating wells in the Gulf of Mexico using only paleo reports and without correlating the well logs… Which would be the same as not correlating the wells.

Herbertdouglas
March 14, 2013 2:56 am

Willis,
Cannot the National Science Foundation (NSF) who financed this paper be called to account or to publicly explain these glaring anomalies? Is there any peer reviewed paper that can be published to refute this nonsense?

Ouluman
March 14, 2013 2:58 am

Spaghetti factory explosion springs to mind. Really, this is too much; how can anybody look at this garbage and pretend it is constructive science? But I suppose in some people’s warped minds it will back up the original hockey stick debacle.

johnmarshall
March 14, 2013 3:09 am

Another way to produce toilet paper.
Thanks Willis.

Michael Larkin
March 14, 2013 3:14 am

Thanks, Willis, for a presentation that even someone as mathematically challenged as I am can readily grasp.

Adam Gallon
March 14, 2013 3:38 am

It really is a complete load of doggy-doo!
How can any intelligent being believe that all (ANY?) of these “proxies” really do measure temperature?

Scarface
March 14, 2013 3:40 am

When I compare your figure 5 with Marcott’s HockeyStick
http://i90.photobucket.com/albums/k247/dhm1353/Marcott1a_zpsed12aa62.png
their 1 degree uptick is well within the noise of their proxies. So imho it has no information value whatsoever. Not to mention that the inherent smoothing of the proxies already flattened them… So the 1 degree uptick is stealthily enlarged and still does not show anything significant. Yet the climate scene is cheering. How desperate must they all be?
Marcott’s HockeyStick seems to be a complete and utter scientific failure.

Ryan
March 14, 2013 4:00 am

“I’m sorry, guys, but I’m simply not buying the claim that we can tell anything at all about the global temperatures from these proxies. We’re deep into the GIGO range here. When one proxy shows rising temperatures for ten thousand years and another shows dropping temperatures for ten thousand years, what does any kind of average of those two tell us?”
So right Willis!
Why, why, why are so many reputable scientists sitting back and letting this utter NONSENSE go through???? Are they all corrupt? Is western science rotten to its core? Can’t they see that this kind of outright LIE does absolutely nothing for their cause? I just don’t get it. I’m just shaking my head in total disbelief.

Bill_W
March 14, 2013 4:04 am

When some said they just wanted to get a hockey stick in for IPCC 5, I was skeptical. So I double-checked. Yep, sure enough – the deadline is March 15, 2013 to be included in IPCC 5.

Snotrocket
March 14, 2013 4:20 am

Willis, you say “… today and yesterday I spent shoveling about 12 tonnes of gravel. And writing that post …”
So…more shovelling then. Just a different consistency of material – and colour!
Great work!!!

steveta_uk
March 14, 2013 4:24 am

On the timescales they’re looking at, do not other long-term effects come into play? For example, rift valley activity in eastern Africa may have significantly changed climate for a fairly large region, which in turn may have had global impacts.

AndyG55
March 14, 2013 4:34 am

Seems most of the proxies are ocean proxies, so they would generally NOT show up any major changes in temperature (there is one heck of a lot of water).
This totally explains the lack of major peaks and troughs.
Willis, can you determine if those proxies that are wildly inconsistent are land or sea proxies?
A lot of things have happened during the Earth’s history that might explain some of those inconsistencies, but I really can’t imagine events that would disrupt ocean based proxies all that much. Land temps, maybe, but water is a great regulator, and there is one heck of a lot of it!

Chuck Nolan
March 14, 2013 5:06 am

Theo Goodwin says:
March 13, 2013 at 11:06 pm
NSF funded this disaster and one of their program directors hyped it for the media. If there is congressional oversight over NSF, the overseers should come down hard right now.
—————————————
When you’re watching a magic show, the emcee isn’t going to tell the audience how the magician did the trick, or even that it was a trick. The sponsors don’t care either, as long as their product is sold by whatever means.
In short, the overseers won’t come down at all, much less “hard”.
cn

Bill Illis
March 14, 2013 5:13 am

I think the next step is to throw out the random proxies – the TEX86 and Mg/Ca-based ones. We already threw out the tree-rings (climate science has almost done so now); these proxy methods are next.
But maybe before that, the main question is how does Marcott get the uptick. Marcott writes back to McIntyre that the results past 1890 are not robust.
Oh I see, “Not robust”. Then why do 5 news releases and 100 headlines around the world and a dozen media interviews sound so clear that recent temperatures are higher than at any time in the past 11,300 years? The spin sounded robust enough.
I don’t think this is right. I don’t think this is ethical. Why should we believe scientists who show they have a propensity to act unethically? It’s simple: we should not. We should double-check and be clear about what the results really are.
Thanks Willis. This took a lot of effort. Especially after shoveling gravel, which is probably the most taxing thing a person can do.

CodeTech
March 14, 2013 5:32 am

Instead of taking proxies and splicing recent instrumental records onto the end, wouldn’t it be more logical to calibrate the proxies using the instrumental record? I mean, a proxy by definition is not actual data, it’s some OTHER thing being recorded that is a representation of the value you actually want to determine. Since the value we want to determine (temperature) is so far off from the values recorded, it would make far more sense to assume the proxies are inaccurate and should be adjusted to match the real observed world.
This would, of course, completely eliminate the entire concept of a hockey stick.
Then again, what we are seeing here is the same level of dishonesty that is required to show “average” sea ice extent while excluding the last 10 years of data from the “average”. When I was in school, an average meant adding up all of the data available for a given number of years, then dividing by the number of years. This is an “average”. You don’t get to cherry pick a few years that you KNOW from anecdotal evidence were unusual, then cackle about how the years since have been way off of that “average”. If the last decade of lower sea ice extent was included in the “average”, then the “average” would become lower, which would make the current trends a lot less frightening to the weak minded.
These people wonder why “we” are skeptical of their claims. Well, we are skeptical because the claims are not credible, and are easily shown to be ridiculous by, well, even my 8 year old.
It really doesn’t matter how many proxies you add to the mix. Because a proxy IS NOT DATA, conclusions drawn from multi-proxy spaghetti rate low on the credibility scale.
No known temperature proxy can possibly be accurate other than on the most macro scale. We’ve already figured out that even with modern instruments and even satellites, we have no really valid current planetary temperature… the thought that any kind of proxy is any more accurate is starting to look more like insanity.

NikFromNYC
March 14, 2013 5:41 am

Rachel Maddow showed the graph on her show, celebrated by Mann, here:
http://www.facebook.com/photo.php?fbid=503540216368852&l=eac15ddfb7

Just an engineer
March 14, 2013 5:42 am

majormike1 says:
March 13, 2013 at 10:36 pm
How they got to where they say they did is a fundamental mystery …
————————————-
If that is a riddle, then I propose as possible answers:
“Leap of Faith”
or
“Jumping to foregone conclusions”

Chris Wright
March 14, 2013 5:43 am

So they didn’t include the individual proxy data in their paper? I wonder why?
It looks like climate science is plumbing new depths…..
Chris

Jim Johnson
March 14, 2013 5:49 am

One wonders…
If they had used all these proxies, and got a drop-off in the final century, whether they would still have published a paper saying that no evidence exists of global warming. I think they would. Right? Wouldn’t they?

March 14, 2013 5:49 am

“Marcott’s response was unhelpful: instead of explaining how he got the result, Marcott stated that they had “clearly” stated that the 1890-on portion of their reconstruction was “not robust”.”
NOT ROBUST! It might not be correct, or at least not able to be proven to be correct!
How is this not simply a bald statement admitting the lack of validity of the very data he is providing to back up someone else? In fact to back up the whole CAGW premise?
In other words, from his own mouth/keyboard, an admission of the virtual worthlessness of his hypothesis.
And this is the sort of data/evidence upon which the carbon trading schemes are built???
On which people are building and staking their careers. On which the media risk their reputation.
On which the public and “global warming” supporters have placed their faith.
Geez!

JJB MKI
March 14, 2013 6:08 am

“Not robust?” What the..? Have these celebrity seeking third rate data-analysts-for-hire moved so far beyond any criticism of their own peer group that they are free to completely make stuff up now? Or are they all on crack? My new paper shows definitively that climatic fluctuations are linked to planetary orbits. The part that actually proves this claim isn’t robust – in fact it just consists of random clippings from the back of cereal packets along with a couple of extracts from Wikipedia, but the paper shows it anyway because it is called ‘climatic fluctuations linked to planetary orbits’ and contains a bit of real astronomical data – can I have some grant money and fame now?

JJB MKI
March 14, 2013 6:16 am

Thanks for your analysis Willis. Is there any way the proxy panels could be arranged by type of proxy, or even type of proxy / latitude? It would be interesting to see what kind of apples and oranges are being lumped together here.

bernie
March 14, 2013 6:18 am

Nick:
So what is your reaction to the profiles of the 73 proxies? If Willis’ graphs are accurate, would you be willing to acknowledge a GIGO event?

Jim Clarke
March 14, 2013 6:31 am

I went to the bank yesterday and told them that, based on the relative wealth of my ancestors over the last 2000 years, I should have 150 million dollars more in my account than I currently do. (I am missing about 6 zeros.) When they asked how I came to such a conclusion, I simply responded that my calculations for the 20th Century were not robust, but that I fully expected them to put the money in my account and announce my wealth to the media.
After signing myself out of the psych ward this morning, I found nothing in the papers about my new wealth and there was no additional money in my account! I wonder how Marcott et al. got away with that excuse?
sarc/off

March 14, 2013 6:50 am

Lew Skannen at 9:42 pm
Ah, man, that’s a beautiful analogy.
I have a 6 year old, have been down the path of various fairies (tooth etc…), and know what a slippery slope it is.
Well done, Sir!
P.S. As always, thanks to Sir Willis for his keen dissections.

Steve McIntyre
March 14, 2013 7:19 am

Willis, I agree 1000% that plotting the proxies is a necessary first step, but in this case, you need to plot them by latitudinal zone. Over the Holocene, the effect of orbital changes on tropics, NH and SH extratropics are going to be different. One “expects” NHX proxies to show a mid-Holocene maximum, but the same is not expected for SHX proxies. Better to replot this in three zones. One does see a difference between NHX and SHX results, which might or might not mean something. Having said that, you and I are very much on the same page about proxy inconsistency – an issue never squarely addressed in these papers. But your graphic here conflates the issues and leaves an excuse.

March 14, 2013 7:35 am

I couldn’t help but notice how much figures 2 through 4 look like bingo cards. Perhaps they could be made into scratch-off lottery tickets. Find five MWPs in a row and BINGO! Find an MBH hockey stick and win the grand prize!
Also, if they’re claiming a composite hockey stick, it seems like more of the individual proxies would look like one.

vigilantfish
March 14, 2013 7:41 am

Paul Matthews says:
March 14, 2013 at 2:28 am
Thank you Willis. This is a far better way to debunk the Marcott study than what Easterbrook was doing. You just need to look at their own data, as you’ve done here.
==================
Hear, hear! I’ve got very little time, unfortunately, to spend at WUWT lately. Your presentation highlights the flaws (understatement) of Marcott so succinctly that no one with Mark 1 eyeballs (or ‘average’ brains – not sure what ‘mark’ that would be?) and a few minutes to spare can miss the point. These authors have made a huge contribution to ‘climate science’ as the laughingstock of real science.

DirkH
March 14, 2013 7:42 am

Nick Stokes says:
March 14, 2013 at 2:22 am
“If you take the Dome F d180O, it lists a resolution of 500 years. But if you look at the raw data, you’ll see values listed every 250 yr. ”
So Marcott et al have not managed to declare the resolutions correctly in their table? And no peer reviewer has noticed? Understandable; given the rush to the IPCC publishing deadline and the meager funding for climate science these days. /sarc

Rud Istvan
March 14, 2013 7:44 am

To several of the posters, it is almost certain that Marcott et al. spliced in modern temperature records for the 73 proxy sites. They have the information, and use it in S14 to show that blended anomalies for the sites are representative of the NCDC global temperature anomaly. Figure 1B shows an increase of about 0.7C from about 1850-1900 (hard to be exact) to about 2000 (again, hard to be exact), although ‘0’ is nominally 1950. It is proudly the same result as Mann 2008 (“statistically indistinguishable”). Mann 2008 used ‘Mike’s Nature trick’. Marcott et al. is silent on the matter. It cannot be a matter of uncertainty, as Marcott said to Steve M. That would be to confound Figure S3 (a Monte Carlo generated uncertainty metric) with the actual proxy anomalies Willis plotted above. I note in passing that many on the misinformation highway have already made this mistake, since there is an ‘uncertainty’ hockey stick. Only 9 of the 73 proxies extend to or past 1950. 2 of the 9 show a slight uptick, and the simple mean is a downtick, requiring more ‘hide the decline’.
Some of the responses to my recent posting (rather heavily edited by Dr. Curry) at Climate Etc suggested Figure 1B is merely “illustrating” the abstract’s final sentence about all AR4 scenarios being above the Holocene optimum by 2100. That plainly cannot be, since 1B shows a rise about equal to what AR4 says has already happened, and the paper’s figure 3 shows how much more this would be by scenario.
The paper and the SI are silent on how this supposed ‘actual’ got generated to agree so perfectly with Mann 2008. The most likely answer is simple: just use Mike’s trick again.

Physics Major
March 14, 2013 7:47 am

Good work, Willis.
Can you tell me what software you use to produce the analysis and charts?

March 14, 2013 7:50 am

It has reached the point where even the premium scientific journals are engaged in little more than marketing regarding climate. Sure, they pick up short term PR in the secondary rags, but can’t they see that betting the ranch on Carbon dioxide will only hasten their inevitable demise?

March 14, 2013 8:15 am

“Well … I’d start by saying that it seems doubtful that all of those are measuring the same thing.”
A most elegant refutation. When looking at the original data it becomes obvious that the data is insufficient to draw any conclusions about anything. That anyone could go from these data to those conclusions speaks to a thought process perverted beyond belief. I must admit, though, I have as much or perhaps more contempt for the reviewers than the authors.
I feel sorry for the taxpayers who paid for this junk and have zero effective recourse to address it.

Stu Miller
March 14, 2013 8:55 am

Willis, you have forgotten that, in climate science, you can use a proxy in whatever orientation is necessary to support your cause – as in upside-down Mann. I think the breakthrough in this paper is in tilting the proxies to get the uptick at the end. (sarc)

AFPhys
March 14, 2013 9:06 am

This is a spectacular way to explain to even a complete layman the way that the claims of the “97% of climate scientists” are based more on simple religious faith instead of solid data.

Latimer Alder
March 14, 2013 9:09 am

My personal summary of this paper from the various analyses I have seen is
‘This is total crap’
Does anybody violently object?

Jeff Norman
March 14, 2013 9:20 am

Willis (and Brandon),
Thank you for making the proxies readily available. Talk about bizarre.
Some of these “temperature proxies” that extend back far enough do not reflect the recovery from the Younger Dryas (YD): 2/4 ODP.1084B; and 5/1 ME005A.43JC.
Some show a recovery from the YD but at the wrong time: 2/2 Hanging.Lake
Only a few suggest the 8.2 ka event: 1/5 N16P.905..UK37; 5/1 MR005A.43JS; 5/4 X74KL..TEX86; 3/4 Homestead.Scar; and 3/2 Flarken.Lake.
I am of the opinion that more than the surface temperature went into the blade. I suspect an IPCC non-forecast was also included. This from the Revkin interview.

Rud Istvan
March 14, 2013 9:22 am

One more observation strengthening the ‘Mike’s trick’ supposition. The paper’s figure 1H, which corresponds to 1B, shows that the number of proxies used for the average reconstruction “purple line” started at about 60 of the 73 back 11300 years ago, ran the full 73 for several thousand years, then began to drop off around 2000 years ago. By 1500 there were only about 50 (the figure is hard to read due to scaling). By about 1800 it was down to about 20, as would be expected if the average proxy resolution was 180 years and the median was 120. By 1920 (or maybe only by 1940) it was down to ZERO. Zero means you cannot plot anything. Yet a hockey stick blade was plotted. But you could still plot thermometer temperatures, since those aren’t proxies.

March 14, 2013 9:35 am

I’m up and down on this one, so I would make a good proxy, if the price was right.

MT Geoff
March 14, 2013 9:53 am

When it comes to proxies, it seems like, if you’ve seen one, you’ve seen Yamal.

Louis Hooffstetter
March 14, 2013 10:01 am

So, to summarize the issues with Marcott, Shakun, Clark, and Mix 2013:
• Their proxy data can support any conclusion you desire.
• They include Mann et al.’s (2008) tree ring reconstructions.
• 10% of their proxies have sampling resolutions of greater than 300 years.
• The “hockey stick” shape is simply an artifact produced by appending high resolution (annual) temperature data onto low resolution (140 years to >300 years) foraminifera data from marine sediment cores.
• In correspondence with Steve McIntyre, Marcott says they “clearly” stated that the 1890-on portion of their reconstruction was “not robust”.
• Yet they conclude “Our global temperature reconstruction for the past 1500 years is indistinguishable within uncertainty from the Mann et al. (2) reconstruction.”
Another shining example of Climastrology at its finest by authors who are definitely “Team” players! Who knew reading goat entrails and interpreting temperature proxies were essentially the same thing?

Annie
March 14, 2013 10:09 am

Sorry to be a bit facetious, but Fig. 5 reminded me of nothing so much as one of my grandchildren’s scribbles with coloured pencils.
Interesting article Willis.

Barbara
March 14, 2013 10:20 am

If Marcott himself admits there is no robust data for the last 60 years of the study (ie 1890-1950), how is it that co-author Prof Clark and Candace Major (from the body which sponsored the research) are making claims about warming spiking over ‘the last 100 years’?
http://www.dailymail.co.uk/sciencetech/article-2290138/Earth-warmest-ice-age–temperatures-rising.html?ito=feeds-newsxml
The max that could possibly be claimed of that 100 would be the *first* 40 years, ie 1850-1890, surely?
Am I missing something?

clivebest
Reply to  Barbara
March 14, 2013 12:44 pm

These two statements are contradictory!
Marcott to Steve McIntyre:

Marcott stated that they had “clearly” stated that the 1890-on portion of their reconstruction was “not robust”.

Marcott to Daily Mail;

Lead researcher Dr Shaun Marcott, from Oregon State University in the US, said: ‘We already knew that on a global scale, Earth is warmer today than it was over much of the past 2,000 years.
‘Now we know that it is warmer than most of the past 11,300 years.

Steve McIntyre
March 14, 2013 11:15 am

Willis, I’ve parsed their coretop dating assumption. I doubt that specialists realize the effect. I’ve got most of a post done on this.

Anthony Watts
Reply to  Steve McIntyre
March 14, 2013 11:31 am

McIntyre – the modern portion of the proxies seems to be the least understandable, and the most messy. I wonder if we have a YAD061 type problem once again, where a single proxy with a modern jump becomes the most influential of the entire set?

john robertson
March 14, 2013 11:31 am

But this work of art is Grade A Climatology.
Which has zero interest in atmospheric science or any other discipline that incorporates the scientific method.
Imagining information from noise appears to be the forte of these wizards of Climatology.

nutso fasst
March 14, 2013 11:35 am

OT, but relevant:
[snip – No, it is not. I won’t have this thread hijacked over a gun discussion – Anthony]

Hmmm
March 14, 2013 12:20 pm

Steve, a point of clarification please. When they do these types of reconstructions, do they calibrate the average of proxies to global temperature (assuming tele-connection etc…), while skipping the step of evaluating/calibrating individual proxies to local temperature at the proxy site? If that is the case, consider my mind blown. I’ll post this question on your blog as well…

Tad
March 14, 2013 12:30 pm

How did Marcott’s paper get published? This looks like rubbish.

HaroldW
March 14, 2013 2:18 pm

Willis –
With regard to resolution criterion, while the published article states, “Sampling resolution is typically better than ~300 yr,” an earlier version has “Sampling resolutions better than ~400 yrs.”
This suggests that a 300-year resolution wasn’t intended to be a hard-and-fast criterion for selection. More interesting stuff as you look deeper.

A. Scott
March 14, 2013 2:57 pm

I wonder if we have a YAD061 type problem once again, where a single proxy with a modern jump becomes the most influential of the entire set?

Anthony … the reply Steve got from Marcott seemed to admit this to be true – Marcott:

Regarding the NH reconstructions, using the same reasoning as above, we do not think this increase in temperature in our Monte-Carlo analysis of the paleo proxies between 1920 − 1940 is robust given the resolution and number of datasets. In this particular case, the Agassiz-Renland reconstruction does in fact contribute the majority of the apparent increase. The reason is that the small age uncertainties for the layer-counted ice core chronology relative to larger uncertainties for the other lake/ocean records. The Monte Carlo analysis lets the chronology and analytical precision of each data series vary randomly based on their stated uncertainties in 1000 realizations. In this analysis method, the chronologically well constrained Agassiz-Renland dataset remains in the 1920 − 1940 interval in all the realizations, while the lake/ocean datasets do not (and thus receive less weight in the ensemble).

Doug Proctor
March 14, 2013 3:06 pm

First, it looks like there are at least 3 groupings with different patterns. So regional changes are THE pattern, not global.
Second, it looks like putting them all together you get a mathematically correct result but a result that has no meaning. Like saying the average height of a group of people is 6′ 7″ because you have 3 NBA players in a crowd of 12.

Nick Stokes
March 14, 2013 3:13 pm

clivebest says: March 14, 2013 at 12:44 pm
“These two statements are contradictory !”

Not so. Present temperatures are known from thermometers, not proxies.

Reply to  Nick Stokes
March 15, 2013 2:51 am

Nick Stokes says:
“That doesn’t mean they aren’t measuring temps way back, and we do know what temperatures are now. Of course they can be compared – why not?”
They don’t compare temperatures, they compare anomalies relative to the “1961 – 1990” period. How did they do that – since they have no proxy data 1961-1990? To properly calculate anomalies you should first independently find the average between 1961-1990 at each site, then subtract this from each of the time series. Finally you make a global area-weighted average on a 5×5 grid. Did they do that?
As far as I can see they simply subtracted off a fixed temperature offset. This then conveniently allows you to overlay the instrument data, which I predict will appear in AR5.
Therefore, IMHO their result is NOT independent of the instrument data.

manicbeancounter
March 14, 2013 3:15 pm

One of the novelties of this temperature reconstruction, compared with previous major ones, is the use of alkenones as temperature proxies. 31 of the 73 are alkenones, gathered from both the sea and lakes. In the comments at Climate Audit, Keith DeHavelle points to a three minute video explaining alkenone temperature proxies in layman’s terms.
Marcott et al. is a global surface (air) temperature reconstruction. Alkenones are used as a proxy for changes in water temperatures, with water temperatures then used as a proxy for surface temperatures. Alkenones are thus a double proxy. Possible issues are:
1) Large changes in surface temperatures will lead to much smaller changes in sea temperatures.
2) An important factor in the water temperature of a particular area of ocean over time is changes in ocean currents. For instance, the Gulf Stream is hugely important in the North Atlantic, where five of the proxies are located.

johninoxley
March 14, 2013 3:55 pm

Willis, love your style. What you have failed to realise is that unless the proxies are run through a dollar filter, no output will make any sense.

Paul Matthews
March 14, 2013 4:04 pm

Willis, in fact there was no need for you to plot out all the proxies, because Marcott has done it already in his thesis, see comments from Ian and Jean S at CA.
The Marcott thesis is at
http://ir.library.oregonstate.edu/xmlui/handle/1957/21129
and chapter 4 is a first version of the paper currently being discussed.
Look at figs 4.2 and 4.3 – no 20th century spike!!!

March 14, 2013 4:17 pm

My impression is that these proxies are essentially random sets of data, of no particular value when it comes to past temperatures, and that naturally, if one averages random data, one gets a fairly flat line. Then, tack on recent non-proxy trends to complete the hockey stick. Seems like a lot of work for so little result.

Lars P.
March 14, 2013 4:25 pm

Paul Matthews says:
March 14, 2013 at 4:04 pm
Look at figs 4.2 and 4.3 – no 20th century spike!!!
It gets even worse. Hank did a reconstruction here and it has an inverse hockey stick in the data; there are several posts by James worth reading, enjoy:
http://suyts.wordpress.com/2013/03/14/hockey-stick-found-in-marcott-data/
http://suyts.wordpress.com/2013/03/14/more-fishing-for-hockey-sticks-in-marcott-et-al-2013/
http://suyts.wordpress.com/2013/03/10/the-hockey-stick-resurrected-by-marcott-et-al-2012/

A. Scott
March 14, 2013 4:32 pm

Willis – when you plot the “Global” sets from the Temperature Stack section of the data worksheet you do get “blades” from most of the sets – primarily starting just after 1900.
And the Agassiz-Renland data shows 1.38 deg C warming from 1900-1940, or 0.83 deg C from 1900-1960. Which fits well, considering the lack of data points within the 1900-1950 window, with Marcott’s admission that it is the primary driver of the hockey stick –
Marcott Agassiz-Renland:
http://tinyurl.com/Marcott-Agassiz
Marcott Temp Stack “Global”:
http://tinyurl.com/Marcott-Global
There are only a handful of remaining data sets with usable data during the 1900 to 1950 period, and a fair amount of those show cooling during that time. The question seems to be exactly what process Marcott used to create the “Global” data sets in the Temp Stack section.
And second, and more important it would seem: since the entirety of the data and the reconstruction behind the hockey stick portion is, by the authors’ admission, largely based on one set of data, and they themselves do not consider it “robust” – why is it even included in the paper … and why was this not picked up and addressed by peer review?

Skiphil
March 14, 2013 5:12 pm

I’ve started reading Marcott’s 2011 PhD dissertation, and I want to make one general point to anyone who may have the statistical/scientific background to write to him or to engage with him in public scientific discussion. He strikes me (whatever the weaknesses of the 2013 article) as a real human who should be amenable to sincere scientific discussion, unless the Mannians manage to ruin him. He now has a large opportunity for public engagement and science education for many — let’s hope he uses it well, and let’s encourage him to use it well.
i.e., let’s not assume bad faith or a Mann-style political engagement unless he should prove it later. He is a young post-doc who may have made some mis-steps as he and co-authors got drawn along in this process with “Science” mag. and preparing for AR5, etc. but he strikes me as potentially far more salvageable than someone like Mann (hard core activist from the start).

paulhan
March 14, 2013 6:29 pm

Which ice core records were they using? Because none of the ones in the separated-out graphs look like any of the ice core reconstructions I’ve seen. As for the rest, their assertions can only be explained if you tiljanderise the data.
About your other “endeavour”, I bet you feel cleaner after shovelling twelve tonnes of gravel than having shovelled through this muck. Having been there and done that, I feel for you :-).

March 14, 2013 8:37 pm

Nick Stokes writes “Not so. Present temperatures are known from thermometers, not proxies.”
To make the comparison valid between present and past, the present temperatures MUST be produced by the proxies, not the thermometers. This is not optional, Nick.

Skiphil
March 14, 2013 8:59 pm

For anyone who needs to see/review what Willis said last year about Shakun et al. (2012) in Nature, which included Marcott as co-author, I put links in this comment:
http://wattsupwiththat.com/2013/03/14/another-hockey-stick/#comment-1248510
I’m getting less willing to cut Marcott, Shakun, and Clark any slack when it is apparent that these shenanigans have been going on for awhile in a variety of places. Do they ever get around to addressing critical challenges?

Nick Stokes
March 14, 2013 9:16 pm

TTTM,
“the present temperatures MUST be produced by the proxies, not the thermometers.”
You may be thinking of treerings, where the proxies have to overlap to be calibrated. But Marcott et al are using mainly marine proxies where the temperatures are calibrated independently of air temp data.
In fact, their proxies can’t produce present temperatures reliably. Their time resolution is too low to capture fast change, and they frequently don’t go beyond a point many years past. That doesn’t mean they aren’t measuring temps way back, and we do know what temperatures are now. Of course they can be compared – why not?

Lesley McKay
March 14, 2013 9:59 pm

In fact, their proxies can’t produce present temperatures reliably. Their time resolution is too low to capture fast change, and they frequently stop many years short of the present. That doesn’t mean they aren’t measuring temps way back, and we do know what temperatures are now. Of course they can be compared – why not?

In other words, they are guessing.
How can you compare “accurate” instrumental temperature with a temperature reconstructed via a proxy, unless you can verify the accuracy of said reconstructed temperature?
You are guessing!

Nick Stokes
March 14, 2013 10:30 pm

Lesley,
“How can you compare ‘accurate’ instrumental temperature with a temperature reconstructed via a proxy, unless you can verify the accuracy of said reconstructed temperature?”
They say:
“Unlike the reconstructions of the past millennium, our proxy data are converted quantitatively to temperature before stacking, using independent core-top or laboratory culture calibrations with no post-hoc adjustments in variability.”

george e. smith
March 14, 2013 11:22 pm

Willis,
I think I see a few bars of a Beethoven piano sonata in one of those green lines. Are these graphs drawn by a chimp or an orangutan? Sometimes it’s hard to know the difference.
This has to be one of your all time classic posts. These people must think we are all stupid.

Lesley McKay
March 14, 2013 11:45 pm

They say:
“Unlike the reconstructions of the past millennium, our proxy data are converted quantitatively to temperature before stacking, using independent core-top or laboratory culture calibrations with no post-hoc adjustments in variability.”

Converted based on what?
What are those “laboratory culture calibrations” based on?
If you know, please answer.
Anyone can make up a standard and calibrate to that.
Note that I prefer not to say they cheated, but I’d like to know how they arrived at the temperatures they did.
I’ve read as much as is available and it is not clear to me.

george e. smith
March 14, 2013 11:50 pm

Skiphil says (March 13, 2013 at 11:55 pm), quoting R (March 13, 2013 at 11:43 pm):
“… Yes there are regions in the planet that have experienced large amplitudes of warming/cooling in the last 10,000 years. This is the particularly plausible at high latitudes …”
and asking:
“R, how do you know? And how do you know that Marcott et al. obtained a statistically sound sampling of all the earth’s surface? …”
So just what the hey does “statistically sound sampling” mean? There is nothing even remotely statistical about sampling.
Sampling means you read the value of some continuous function at a specific “point”; i.e., at some specific values of all of the variables the function has. For “climate”-related things, this would typically be time and location at least.
The only requirement the samples must satisfy is the Nyquist sampling theorem. If they don’t do that, then they aren’t valid data of anything, and no amount of statistication will extract meaningful information from them.
Statistics is a process of throwing away information, to derive pseudo information that was never observed by anybody, anywhere, anytime.
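To make the Nyquist point concrete, here is a small sketch with invented numbers (nothing from M2012): a 200-year cycle sampled every 350 years yields exactly the same samples as a phase-flipped 1,400-year cycle, so the undersampled record cannot tell two very different climate histories apart.

```python
import numpy as np

# Sketch of the Nyquist point (invented numbers, not from M2012):
# a 200-year cycle sampled every 350 years is indistinguishable,
# at those sample times, from a phase-flipped 1,400-year cycle.
t = np.arange(0, 10500, 350.0)              # coarse sample times, years
fast = np.sin(2 * np.pi * t / 200.0)        # true signal: 200-yr cycle
slow = -np.sin(2 * np.pi * t / 1400.0)      # alias: 1,400-yr cycle

print(np.allclose(fast, slow))              # True: identical samples
```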

March 14, 2013 11:57 pm

Nick writes “That doesn’t mean they aren’t measuring temps way back, and we do know what temperatures are now. Of course they can be compared – why not?”
The question is … why should they be included and compared if they can’t/don’t reproduce modern temperatures, not the other way around.
I don’t have access to the paper, but were the individual proxies detrended for whatever temperature-based selection criteria were applied?

Lesley McKay
March 15, 2013 12:26 am

PS: if the independent core-top or laboratory culture calibrations are accurate, why can’t they use recent actual sediments?

Geoff Sherrington
March 15, 2013 1:41 am

Nick Stokes says: March 14, 2013 at 2:22 am “The discussion on p 6-7 of the SI of the cited reference seems to focus on “age model error” limiting the resolution.”
My apologies here; I’m commenting before finishing reading. The question is, Nick: if the problem is as stated, surely the visual effect would be a series of curves of similar shape, merely displaced to the left or right?
At this stage I have no confidence that enough of the measurements are directly related to temperature to support a temperature reconstruction.
As for the spike, in the absence of a direct explanation and the presence of an excuse about non-robust data, it would not be unfair to call it invented. Would you agree?
If you were a reviewer, would you pass this paper? I suspect I would not, on what I have read to date, especially after a dish of spaghetti marinara with a strong odour of fish.

Ryan
March 15, 2013 4:00 am

“Mount Honey is the highest point on Campbell Island, one of New Zealand’s subantarctic outlying islands”
Hmmm, so what kind of proxy were they using there? Google Images shows it to be a pretty barren place.

Ryan
March 15, 2013 4:05 am

@Skiphil “He strikes me (whatever the weaknesses of the 2013 article) as a real human who should be amenable to sincere scientific discussion”
I think anybody who takes the data that Willis has presented here and then claims it has anything useful to tell us about climate, other than “trees don’t grow well on ice sheets” – whether the aim is getting those all-important letters “Dr” in front of your name or claiming government funding for research – is an irredeemable liar, and if he says something you agree with to your face, he will say the exact opposite the next second if it suits him.

Ryan
March 15, 2013 4:22 am

I realise since my last post that the proxies are from plankton samples, which begs the question: how do you get plankton from a lake that is frozen? Several of the proxies seem to be close to the poles or above the snow line. How reliable is a proxy that can only grow in the summer compared with a proxy nearer the tropics that can grow all year?
Is there any science in this paper at all?

Nick Stokes
March 15, 2013 4:27 am

Clive,
“Finally you make a global area-weighted average on a 5×5 grid. Did they do that?”
As I understand it, each proxy is converted to °C without reference to other proxies or to air measurements. Then anomalizing is essentially the same as for thermometer readings, but the actual base interval, 1961–90, is approached indirectly:
“To compare our Standard5×5 reconstruction with modern climatology, we aligned the stack’s mean for the interval 510 to 1450 yr B.P. (where yr B.P. is years before 1950 CE) with the same interval’s mean of the global Climate Research Unit error-in-variables (CRU-EIV) composite temperature record (2), which is, in turn, referenced to the 1961–1990 CE instrumental mean (Fig. 1A).”
So it’s the same process to get an anomaly base – there is some loss of accuracy in aligning it exactly to 1961–90 because of the two-step process, but the main thing is to get the base the same.
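As a sketch of that two-step alignment, with made-up series standing in for the stack and for CRU-EIV (so nothing here is the actual data), the arithmetic amounts to a constant shift:

```python
import numpy as np

# Toy version of the two-step base alignment (made-up series, not the
# actual Marcott stack or CRU-EIV data).
rng = np.random.default_rng(0)
stack_years = np.arange(0, 11300, 20)                 # yr BP
stack_temp = rng.normal(15.0, 0.3, stack_years.size)  # fake stack, degC

cru_years = np.arange(0, 2000, 10)                    # yr BP
cru_anom = rng.normal(-0.2, 0.1, cru_years.size)      # already on 1961-90 base

in_window = (stack_years >= 510) & (stack_years <= 1450)
cru_window = (cru_years >= 510) & (cru_years <= 1450)

# Shift the stack so its 510-1450 yr BP mean equals CRU-EIV's mean over
# the same interval; the stack thereby inherits the 1961-90 base.
offset = cru_anom[cru_window].mean() - stack_temp[in_window].mean()
stack_anom = stack_temp + offset
print(f"Aligned window mean: {stack_anom[in_window].mean():+.3f} degC")
```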

Nick Stokes
March 15, 2013 4:35 am

Geoff S,
“If you were a reviewer, would you pass this paper?”
I probably would have objected to the spiky part, on the basis of what I understand so far. I just think it’s unhelpful. The proxies don’t have the resolution to be reliable, and we have much better information from thermometers. Leave it out.
But there are many good things about the paper. The low-frequency information is poor on a decadal scale but useful on millennial scales.

Ryan
March 15, 2013 8:21 am

Ok, so now I see how this works. We have a spike that comes only from relatively high-resolution data from another source. This spike is actually shown in only one of the proxies in Marcott, but it has been stitched onto the proxy data from Marcott. In reality, the Marcott proxies, such as they are, when treated as the same resolution throughout, show a more or less flat line throughout the entire period 0–10,000 yr BP.
The same trick is being used as before. Take the adjusted temperature data from 1950 to 1990, which happens to show a spike – because it was adjusted to give a spike. Then stitch the tree-ring data onto the lowest part of that to get an estimate from 1950 back to 1450. Then stitch the Marcott proxies onto the lowest part of that data to extend the same load of old cobblers back a further 10,000 yrs. But still the reality is that the only data that actually show an uptick are the adjusted thermometer data from 1950 to 1990. Everything else has simply been stuck on the bottom end of that gradient to give the impression they know what temps were going back 10,000 years. The reality is the Marcott proxies don’t have the resolution to tell you anything useful about temperature trends today compared to the last 10,000 yrs. Short-duration spikes could be quite common for all we know – what we are living through now (if it is real) could be just one of them. It is yet another mathematical falsehood dressed up to look like real science.
You would only believe that Marcott tells you anything if you are absolutely certain already that temperatures since 1950 have been hotter than at any time for 10,000 yrs. If you are a true believer, then the Marcott process of simply projecting their data backwards from the 1950 temperature record is a perfectly reasonable thing to do. And that is how this big pile of BS is getting through review. It is a religion based on circular arguments. Anybody else can see this pile of poop proves nothing, other than how corrupt publicly funded science has become.

Ben Of Houston
March 15, 2013 8:30 am

The only conclusions that I can make from this are
A: At least half of the proxies are completely invalid.
and/or
B: The concept of global temperature is fundamentally flawed.
Since the basic trend of each proxy is so different, either global average temperature is not a dominant player in the local or regional temperatures measured by the proxies, or they’re a steaming pile of manure.
Of course averaging these proxies will give a flat line, just as averaging many series of white or red noise will give a nearly straight line.
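Here is a minimal check of that averaging claim with synthetic data, using AR(1) “red noise” to stand in for the proxies:

```python
import numpy as np

# Minimal sketch with synthetic data: stack 73 independent AR(1)
# "red noise" series and the wiggles largely cancel.
rng = np.random.default_rng(42)
n_series, n_steps, phi = 73, 500, 0.9

series = np.zeros((n_series, n_steps))
for t in range(1, n_steps):
    series[:, t] = phi * series[:, t - 1] + rng.normal(0, 1, n_series)

stack = series.mean(axis=0)
print(f"Typical single-series std: {series.std(axis=1).mean():.2f}")
print(f"Stack std:                 {stack.std():.2f}")  # ~1/sqrt(73) as large
```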

FGM
March 15, 2013 10:16 am

I was working with the appendix proxy data and ran into these problems. Are they normal?
Proxy 46 – TN057-17
The published temperature is the mean of the warm and cold seasons only down to 236 cm depth (7,182 years BP). Deeper (and older) samples have an added step of −1.34 ± 0.15.
Proxy 40 – BJ8 13GGC
There is some noise in the relation between the published proxy and the published temperature; in any case it doesn’t seem to have a great influence.
Proxy 20 – 74KL (TEX86)
I cannot see any relation between the proxy values and the temperature values. Do the values in the Excel file really correspond to those of the original publication?

Lars P.
March 15, 2013 11:30 am

Nick Stokes says:
March 14, 2013 at 10:30 pm
They say:
“Unlike the reconstructions of the past millennium, our proxy data are converted quantitatively to temperature before stacking, using independent core-top or laboratory culture calibrations with no post-hoc adjustments in variability.”

You must be joking. What they had in the proxy data showed totally different trends for current times:
http://suyts.wordpress.com/2013/03/14/hockey-stick-found-in-marcott-data/
Splicing in some thermometer data gives a totally different calibration and totally different trends.

Steve (Paris)
March 15, 2013 2:18 pm

I’m descending to earth in Apollo 11. All is fine and dandy. Only 50 feet till landing. But then suddenly I recall: NASA said the data for the last 50 feet was not ‘robust’…

Steve (Paris)
March 15, 2013 2:23 pm

“aboard Apollo 11” for Pete’s sake… been living in France far too long.

DaveR
March 15, 2013 9:14 pm

Willis,
Great detailed work, as usual.
I want to take another tack with this: I have looked at the 73 individual proxies, and only 4 or 5 have an uptick in the most recent data (Radiolaria dominated?). An equally weighted combination of the 73 cannot result in the published Marcott graph; that would require one of the uptick proxies to be given very heavy weighting, probably >50%.
Therefore I think the published graph has been fabricated.
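That weighting argument can be checked with a toy calculation, using invented series rather than the real proxies: if only 5 of 73 records end with a 2 °C uptick and the rest stay flat, an equal-weight stack retains only about 0.14 °C of it, while pushing 80% of the weight onto the uptick records recovers most of the spike.

```python
import numpy as np

# Toy check of the weighting argument (invented series, not the real
# proxies): 5 of 73 series end with a 2 degC uptick, 68 stay flat.
n_total, n_uptick, n_steps = 73, 5, 100
proxies = np.zeros((n_total, n_steps))
proxies[:n_uptick, -10:] = 2.0                     # uptick in last 10 steps

equal_stack = proxies.mean(axis=0)
print(f"Equal-weight final value: {equal_stack[-1]:.2f} degC")   # ~0.14

w = np.full(n_total, 0.2 / (n_total - n_uptick))   # flat series share 20%
w[:n_uptick] = 0.8 / n_uptick                      # uptick series get 80%
print(f"Heavily weighted final:   {(w @ proxies)[-1]:.2f} degC") # 1.60
```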

Jeff Norman
March 16, 2013 12:05 am

The single diatom record does not show a variation for the end of the ice age, the start and end of the Younger Dryas, or any other climate event since then. How is this a temperature proxy?

nutso fasst
March 16, 2013 3:36 am

I won’t have this thread hijacked over a gun discussion – Anthony

Apologies for the offending word, but that was not my subject. I wrote about how confirmation bias prompted historians – along with popular media and the most prestigious historical publications – to champion a charlatan whose revisionist ‘research’ was later proved to be a complete fabrication. Unlike climate scientists, the cabal of historians admitted their bad judgment and threw their lying fellow under the proverbial bus. But perhaps they wouldn’t have been so relatively virtuous if endless grant money had been offered to promote the historical revisionism.
Maybe not relevant enough to this thread and worthy of snipping, but it was not an attempted hijacking.
Best regards,
nutso

Eliza
March 16, 2013 10:11 am

What is REALLY interesting is that it appears RealClimate (Gavin and Co.) have not even discussed this paper on their website. (I stand to be corrected if they posted on it earlier, but it appears not.)

Eliza
March 16, 2013 10:38 am

DaveR: “Therefore I think the published graph has been fabricated.” There is a site where fraudulent publications are listed:
http://retractionwatch.wordpress.com/ – maybe it should be reported.