Marcott – 3 spikes and you are out

Guest post by Nancy Green

Tamino claims he has added 3 spikes to the Marcott et al proxy data and the Marcott et al process detects them.

[Image: many_vs_unpert.jpg]

Source: http://tamino.files.wordpress.com/2013/04/many_vs_unpert.jpg

This, he then proposes, is proof that there are no 20th century spikes in the Holocene.  This claim appears to run counter to a prediction I made recently in a WUWT post: that as you increase the proxy resolution, you are more likely to find spikes.

See:

http://wattsupwiththat.com/2013/04/03/proxy-spikes-the-missed-message-in-marcott-et-al/

Having had my reply disappeared at Tamino’s site, I thought readers at WUWT might be interested.  I don’t believe Tamino’s conclusion follows from his results.  Rather, I believe he has demonstrated the truth of my original prediction.  What needs to be understood is that adding a spike to the proxy data is not the same as adding a spike to the proxies. This is where people get confused.

The proxies are ocean cores or similar sitting in some repository. They are real, physical objects.  To truly add a spike to the proxies you would need to travel back in time and change the temperature of the earth. This would then affect the proxies in some fashion, depending on the resolution of the proxies, how they respond regionally, including lags, gain or damping. The proxy response might also be affected by other unknown factors at the time that are not visible in the proxies.  In other words, the spikes that you add to the proxies would have all the resolution problems that the proxies themselves have.

However, adding spikes to the proxy data is an entirely different animal. The proxy data is an abstract representation of the proxy.  It is numbers drawn on a sheet of paper or its electronic equivalent. Now you are adding (drawing) high resolution spikes onto low resolution proxy data, with no accounting for regional effects, lag, gain, damping or confounding factors. It should be no surprise at all that these high resolution spikes jump out.  If they didn’t, it would point to a serious flaw in Marcott et al.
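To make the distinction concrete, here is a minimal numerical sketch. It is not Tamino’s or Marcott’s actual procedure; the spike height, the proxy smoothing width and the sample spacing are all illustrative assumptions. It simply contrasts (a) drawing a full-amplitude spike directly onto low resolution proxy data with (b) letting the proxy’s own low resolution response record the spike before the data are sampled.

```python
# Illustrative sketch only (assumed numbers, not Tamino's or Marcott's code):
# spike added to the proxy DATA vs spike recorded by the PROXY itself.
import numpy as np

years = np.arange(0, 2000)                              # hypothetical annual "true" record
background = np.zeros_like(years, dtype=float)          # flat background climate
spike = np.where(np.abs(years - 1000) <= 50, 0.9, 0.0)  # assumed 100-yr, 0.9 C spike
truth = background + spike

# (a) Spike drawn directly onto low resolution proxy data:
# sample the background every 120 years, then paste the full-amplitude spike in.
proxy_data_a = background[::120] + spike[::120]

# (b) Spike added to the physical proxy: the archive smooths the climate signal
# (a crude 300-yr moving average standing in for bioturbation / age-model smear)
# before being sampled at the same 120-yr spacing.
kernel = np.ones(300) / 300.0
recorded = np.convolve(truth, kernel, mode="same")
proxy_data_b = recorded[::120]

print("spike as seen in (a):", proxy_data_a.max())   # ~0.9 C: easy to detect
print("spike as seen in (b):", proxy_data_b.max())   # ~0.3 C: heavily attenuated
```

Under these assumed numbers the pasted-on spike survives at full height, while the spike recorded through the smoothed proxy is attenuated to roughly a third of its amplitude, which is the asymmetry being described here.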

An analogy might help better understand the problem.  Imagine for a moment that we are not dealing with temperature, but rather trying to detect planets around stars.  We have before us a photograph of a star taken by a telescope on Earth.  We look at this under the microscope.  However, we find no planets because the telescope lacks the angular resolution to distinguish them from the star itself.

Now let’s go out to the star in question, add planets around the star, and take more photos with our telescope.  These planets are real objects.  We know they exist.  However, it will make no difference; we still can’t see the planets with our telescope.  In this example we have added a spike to the actual proxy and it has made no difference.

Now let’s add a spike to the proxy data.  Instead of placing planets around the star, take the photo from the telescope and draw a picture of a planet on it.  This is an example of adding a spike to the proxy data.  The photo is an abstract representation of the star and its planets, equivalent to the proxy data.  Now examine the photo under a microscope and voila, the planet (spike) will now be visible.

What we are seeing in action is actually a form of misdirection used in stage magic.  It fools us on the stage just as it does in science.  It is our minds that create the confusion (illusion) between what the proxies actually are and what the proxy data actually is.  The proxies are ocean cores – they are real objects.  The proxy data is an abstract representation of the real object.  However, in our minds we are so used to dealing with real objects as abstract representations that we are fooled into thinking they are one and the same.

If anything, what Tamino has actually done is to prove the point of my original article.  He has added high resolution spikes to the low resolution data and as predicted they are detectable.  To conclude however that this somehow proves there are no 20th century type spikes in the Holocene makes no sense.  As we have seen in this example, no matter how many planets you physically add around a star it makes no difference if you lack the resolution to detect them.  This is no proof that they don’t exist.  It is only after you examine them at sufficiently high resolution that they become visible.


111 Comments
Pamela Gray
April 8, 2013 6:01 am

Reminds me of the before and after effects of getting glasses in 5th grade. Turns out the greenish black wall in front of the classroom had writing on it! WHO KNEW?!?!?!

Pamela Gray
April 8, 2013 6:06 am

RACookPE1978, your Freudian enviro-extremistic “creaming” slip is HILARIOUS!!!!

April 8, 2013 6:08 am

Even with high resolution proxy data such as the Greenland O18 depletion from ice cores, the magnitude of spikes decreases exponentially with increasing depth. http://www.kidswincom.net/climate/pdf.

Peter Miller
April 8, 2013 6:14 am

I think this model manipulation also proves the Vikings never settled in Iceland and Greenland, as it was clearly too cold 1,000 years ago.
Models are like that: when your conclusion is pre-determined and the data manipulated, the facts are guaranteed to go out of the window.

Chuck L
April 8, 2013 6:22 am

So what was Tamino’s point? That the 20th century spike shown in Marcott was NOT unprecedented? If true, it still exposes the Marcott paper’s unabashed agenda-driven alarmism and the media’s blind embrace of CO2 Armageddon.

To the left of centre
April 8, 2013 6:37 am

DirkH I’m afraid your comment (which I think was aimed at me) was a little too subtle for me to quite understand what you were suggesting. Maybe you could clarify.

Theo Goodwin
April 8, 2013 6:58 am

clivebest says:
April 8, 2013 at 1:04 am
“Therefore you need to determine the sensitivity of the proxies to short to medium term climate excursions. I think it is important to show that the raw measurements would be able to detect such changes, rather than some Monte-Carlo interpolations based on those measurements.”
I would like to ask a question which might be off topic. Please tell me if it is.
It seems to me that climate scientists have played fast and loose with proxy data and that they have been a bit too imaginative in attaching instrument data to proxy data. Don’t they have the responsibility to test their proxy data against instrument data? If the last proxy data point was collected 60 years ago, isn’t the scientist duty bound to get in the mud and collect additional data to the present time? By doing so, one might learn that the proxy in question is no longer a reliable substitute for instrument records. (As an aside, isn’t this exactly what Mann, Jones, Briffa and the team had discovered about their tree ring proxies but then chose to hide the decline?)
It is very dissatisfying to read studies such as Marcott’s and learn that no justification is offered for the proxies that they used. One gets the feeling that there is a proxy cafeteria somewhere and climate scientists choose their proxy series according to individual taste. In addition, the folks who run the cafeteria never talk to their customers. As regards any individual series, the attitude seems to be something like “tree rings have long been accepted as proxies for temperature.” Surely, scientists are duty bound to do better.

Nancy Green
April 8, 2013 7:03 am

NZ Willy says:
April 7, 2013 at 10:28 pm
But see Clive Best’s site for an authoritative demolition: http://clivebest.com/blog/?p=4833
==========
Agreed. I read Dr Best’s article in preparation for publishing this paper. Not only does it show Tamino’s spikes would be significantly attenuated, it shows that Marcott’s method does reveal small spikes at known warming periods.
Steve M at his site makes an interesting point. Marcott is using a “home grown” statistical method. It hasn’t been put forward in a Mathematical or Statistical journal. In effect it is invented mathematics. Yet it has been accepted uncritically by the Climate Science community without regard for its accuracy. (where have we seen this before?)
This begs the question: why not use well researched and accepted statistical methods to analyse statistical data? Mathematics wasn’t invented yesterday. Some pretty sharp minds have been at work on the subject for a couple of thousand years. Why assume your “new” method is somehow superior?
There is also an amusing post at Steve M’s site in which a medical researcher re-invented calculus and got 75 citations for his “new” method. He went so far as to name this “new” method after himself. Evidently medical students don’t take much math as part of their curriculum. Might Climate Science suffer from the same problem?

Nancy Green
April 8, 2013 7:27 am

Richard Briscoe says:
April 8, 2013 at 5:03 am
Anyone who still can’t grasp the point would do well to study this painting by Magritte.
http://en.wikipedia.org/wiki/The_Treachery_of_Images
============
A powerful example of the ability of the mind to confuse us. When I first looked at the painting my mind immediately said “but of course it is a pipe”. I had to consciously will myself to say no, it is a painting of a pipe.
The famous pipe. How people reproached me for it! And yet, could you stuff my pipe? No, it’s just a representation, is it not? So if I had written on my picture “This is a pipe,” I’d have been lying!

Steve Keohane
April 8, 2013 7:27 am

To the left of centre says:April 7, 2013 at 11:51 pm
I think you need to be a little careful with your Astronomy analogy. What you’re referring to (I believe) is astrometry, in which you attempt to measure the motion of a star (by observing its position change on a photographic plate or CCD) around the center-of-mass of its planetary system.

Stellar dislocation by planetary orbital perturbations is a usual way of detecting planets that are beyond the resolution of your telescope. That is not what is being discussed here, but rather actual detection within resolution, if you have the resolution. Not a proxy for resolution, i.e. perturbation.

April 8, 2013 7:30 am

Historical records of the Egyptian, Roman and Chinese societies should show 50 year warm and cold periods. Cherry blossoming dates have been kept for perhaps two thousand years, and a correlation of recent temperatures with the blossoming dates should show – or not show – the value of this proxy.
Or not. That is the point, I suppose.

dscott
April 8, 2013 7:39 am

Over how many years do the 3 spikes last, and at approximately what year on the timeline does each spike occur? The interval between the spikes looks uniform; is it? Does anyone have an idea on what coincides with the timing of the spikes?
Finally, the obvious overall negative trend/curve over 10,000 years is troubling as it looks like it is following the downward fall in earth’s obliquity. The rapid rise in the trend corresponds to the earth’s maximum obliquity of 24.2 degrees approximately 10,000 years ago.
Ironically, setting aside the hubbub over spikes, Tamino and Marcott actually proved obliquity is the major driver of earth’s climate… Ask the wrong question and you get the wrong answer.

markx
April 8, 2013 7:44 am

Gary Pearse says: April 8, 2013 at 5:56 am
“ONE LOOK AT THE MARCOTT ET AL GRAPH and what does one see. A temperature proxy that shows (assuming proper science) A SCARY, INEXORABLE SLIDE TOWARD THE NEXT ICE AGE!!”
Agreed. Very important point – the whole debate glosses over the fact that at least 25% of the Holocene was warmer than today (and incidentally, mankind flourished in that time) and the Marcott work points to an inexorable slide in temperature.

To the left of centre
April 8, 2013 7:48 am

Steve Keohane Maybe you could clarify your point. I understand quite well how planets are detected around stars and suspect you may be confusing radial velocity measurements (which require good spectral resolution) with astrometry (which requires good angular resolution). The point I was trying to make is that if what you’re trying to detect has a motion (in wavelength space, or real space, or temporal) that is much smaller than the resolution of your data, then indeed you will not see it in your data. On the other hand, if it is comparable, you will still see an effect even if you don’t have the resolution to characterise it in any specific way.

Jean Parisot
April 8, 2013 7:52 am

At what point did they remove stats from the climatology curriculum?

Steve Keohane
April 8, 2013 8:32 am

To the left of centre says:April 8, 2013 at 7:48 am
I believe this discussion is about being able to resolve by actual observation, in this analogy, resolution of reflected light from the planet, not some gravitational perturbation that indicates one or more planets are likely there.

markx
April 8, 2013 8:34 am

Seems to me perhaps Tamino asks the wrong question, and uses a generously inflated measure by tacking a huge proxy signal onto what should have been a damped signal, and then assumes a little too much about 20th century warming to come up with his “answer”.
The recent warming has been approx 0.7 C (not 0.9 C) over about 120 years, and it has not necessarily been above the mean. (We need accurate proxies to tell us that! … and therein lies the problem.)
The question is, is this a normal fluctuation, and would this scale of fluctuation have been detectable about the mean of Marcott et al’s result using the proxies Marcott used?
So let’s assume there was an actual warming peak starting at 0.35 C below the mean and over a 100 year period this extended to 0.35 C above the mean of Marcott’s proxy average (to match the recent warming average and Tamino’s calculation).
Marcott points out these proxies naturally ‘smear’ the data due to leakage into adjacent time brackets.
I assume perhaps only 80% of this 0.7 C spike would be ‘recorded’ by the proxies, if temperatures prior to and after the “spike” oscillated in a similar manner about the mean, and proxy evidence leaked into the adjacent time brackets.
So now we would have a total spike of 0.56 C, with a 0.28 degree spike above the proxy mean. Furthermore, not all of the proxies would show it.
Considering that some of the proxies are deep ocean measures, and our top 2000 metres of ocean have (supposedly) warmed a total of 0.09 degrees C over the last 56 years, I’m sure quite a few proxies would miss it by their nature, and certainly others would do so by their resolution/timing.
Very possibly this sort of fluctuation could have been occurring throughout the whole timespan.
So this leads me back to the question – Would a 0.7 degrees C fluctuation in atmospheric temperature about the mean really be detectable in the Marcott proxy data?
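[For what it is worth, the back-of-envelope arithmetic in this comment can be written out explicitly. The inputs (a 0.7 C swing and an 80% recorded fraction) are the commenter’s assumptions, not values taken from Marcott et al or Tamino.]

```python
# Restating the commenter's arithmetic; all inputs are assumed values.
recent_warming = 0.7      # C, assumed swing from 0.35 C below to 0.35 C above the mean
recorded_fraction = 0.8   # assumed share of the spike surviving proxy "smearing"

recorded_swing = recent_warming * recorded_fraction   # 0.56 C total recorded swing
peak_above_mean = recorded_swing / 2.0                 # 0.28 C above the proxy mean
print(recorded_swing, peak_above_mean)
```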

Rod Everson
April 8, 2013 8:35 am

eric1skeptic says:
April 8, 2013 at 5:04 am
Here’s a response from a warmy friend on another blog to me:
“No wonder you and your ilk assault the Marcott study, which conclusively shows that what we are seeing in our lifetimes is indeed unprecedented during the past 11 millennia. Oh, and smoke this while you’re at it: Tamino has shown that Marcott would indeed have detected any similar past spike in temperature. As he states, “Let’s find out, shall we?” (Tamino URL)
I’m done with your nonsense.”

I think everyone is missing the point of Tamino’s exercise. It’s not to prove anything at all; it’s simply to provide talking points to the scientifically-illiterate followers who need to be reassured that they’ve not been led down a garden path all these years. The example above is a perfect illustration of his success. And, of course, such a strategy absolutely requires the “disappearing” of contrary opinions, lest those being led (misled) begin to wonder. The average person avoids events that generate cognitive dissonance. As eric’s friend said above, “I’m done with your nonsense.” Thus is another follower captured for the cause.
Gary Pearse at 5:56 writes:
ONE LOOK AT THE MARCOTT ET AL GRAPH and what does one see. A temperature proxy that shows (assuming proper science) A SCARY, INEXORABLE SLIDE TOWARD THE NEXT ICE AGE!! This is what the Hockey Team saw at once. This is what motivated the apparently successful attempt to divert sceptics away from it and ruined the study in the process.
It didn’t ruin the study at all, from their point of view. The believers still believe in the hockey stick, no matter how discredited it’s become, and they’ll also now continue to believe that the Marcott study shows how dramatically humans have changed the earth, converting a steady cooling into a dramatic warming in just 100 years. They don’t need science to convince them; just unchallenged verbiage, unchallenged, that is, in the popular press and on the warmist blogs, due in both cases to censorship of opposing views. Marcott is not likely to ever be cited in serious research papers, but the original press reports of the study will be parroted by hundreds of thousands of followers for years.
Science is not the purpose of any of these studies. Think propaganda, nothing more.

Steve Keohane
April 8, 2013 8:36 am

To the left of centre says:April 8, 2013 at 7:48 am
Perhaps a better way of presenting my point is to say that Marcott says they are using a 60mm telescope, Tamino claims it’s really 300mm.

Steve from Rockwood
April 8, 2013 8:40 am

When climate scientists create their low resolution proxy data, do they throw out any spikes, or are we looking at all of the “raw” data, which just doesn’t have any spikes? While I agree with this post (the planet analogy was great) I wonder if there isn’t some “editing” going on before the temperature proxies are released so that we never see the spikes. After all, what is the difference between a spike and a bad data point?

To the left of centre
April 8, 2013 8:42 am

Okay, I see. Still not convinced it’s a great analogy. In that case you have resolution plus contrast issues. Planet much fainter than the star. If, however, you assume the contrast isn’t an issue, then my point still stands. If your resolution was close to what you’d need to directly detect a planet, then the planet could have a noticeable influence on the observation, even if you couldn’t actually characterise it.

dscott
April 8, 2013 8:52 am

It didn’t ruin the study at all, from their point of view. The believers still believe in the hockey stick, no matter how discredited it’s become, and they’ll also now continue to believe that the Marcott study shows how dramatically humans have changed the earth, converting a steady cooling into a dramatic warming in just 100 years.
Except for one wee little problem with that explanation: what caused the three previous and very similar spikes of GREATER increase than the current one? Which begs the real question, what caused the spikes in the first place?

dorsai123
April 8, 2013 8:57 am

Did Marcott splice air temperature records onto sea temperature proxies? If so, that seems to be clearly an apples-to-oranges comparison …

Leo Geiger
April 8, 2013 8:59 am

What appears to be lost in many posts critical of Marcott is that whether or not an individual spike can be resolved is a related, but still different, question from what the distribution of temperatures was. The distribution is what Marcott used in their Figure 3. It, in conjunction with the instrument record, led to the main conclusions of their paper:

These temperatures are, however, warmer than 82% of the Holocene distribution as represented by the Standard5×5 stack, or 72% after making plausible corrections for inherent smoothing of the high frequencies in the stack (6) (Fig. 3). In contrast, the decadal mean global temperature of the early 20th century (1900–1909) was cooler than >95% of the Holocene distribution under both the Standard5×5 and high-frequency corrected scenarios.

Notice they have widened the distribution by adding missing high frequencies (the high-frequency correction referred to in the quote above). The histogram they use to make their conclusions is the widened one, not the one directly from their proxy reconstruction. The details of the widening are found in the supplementary materials. Two different methods were used and checks were done to assess the sensitivity of the temperature distribution (which their conclusions were based upon) to different amounts of lost high frequencies. Part of this was creating a simulated pseudoproxy temperature stack where they

degraded the output using the resolution, chronologic uncertainties, and temperature uncertainties of the real proxy records through 200 Monte Carlo simulations

This is doing what Nancy Green is saying should be done – modifying what the proxies themselves would look like, not modifying the proxy data – when they did their cross check.
Regardless of what Tamino did or didn’t do, Marcott’s conclusions are not based on whether a spike could be resolved after smoothing, nor do they depend on knowing the answer to this. They are based on a statistical widening of the overall temperature distribution, something which is not very sensitive to individual spikes. Their primary conclusions based on this still seem to be robust.
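[For readers unfamiliar with the pseudoproxy idea described in this comment, a minimal sketch of one such degradation draw is given below. It is an illustration of the concept only, not Marcott et al’s supplementary code; the resolution, age uncertainty and temperature noise values are assumptions.]

```python
# Sketch of one Monte Carlo degradation draw for a pseudoproxy: smooth a "true"
# series to the proxy's resolution, jitter its ages, add measurement noise.
# Illustrative parameters only; not Marcott et al's actual procedure.
import numpy as np

rng = np.random.default_rng(0)

def degrade(true_temps, true_years, resolution=120, age_sigma=150, temp_sigma=0.3):
    """Return one degraded pseudoproxy realisation (ages, temperatures)."""
    # 1. Smooth to the proxy's effective resolution (simple moving average).
    kernel = np.ones(resolution) / resolution
    smoothed = np.convolve(true_temps, kernel, mode="same")
    # 2. Sample at the proxy's spacing and perturb the assigned ages.
    idx = np.arange(0, len(true_years), resolution)
    ages = true_years[idx] + rng.normal(0, age_sigma, size=idx.size)
    # 3. Add temperature (calibration/measurement) noise.
    temps = smoothed[idx] + rng.normal(0, temp_sigma, size=idx.size)
    return ages, temps

# Example: 200 draws from a flat 10,000-year "true" climate.
true_years = np.arange(10000)
true_temps = np.zeros(10000)
realisations = [degrade(true_temps, true_years) for _ in range(200)]
```

Pooling many such draws is what produces the widened temperature distribution the comment refers to.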

markx
April 8, 2013 9:18 am

Leo Geiger says:
April 8, 2013 at 8:59 am
Re Marcott et al. Their primary conclusions based on this still seem to be robust.
Yes, true enough, and the debate has become complicated.
No doubt the overall chart of the Holocene temperature is useful and interesting work, once the uptick artifact was dealt with. Very interesting to see it shows at least 25% of the Holocene was warmer than today, a point sometimes disputed.
But some of their discussion compares “their” Holocene temperature averages with 20th century warming, and this point is in contention.
The question naturally arises: have they in fact shown that, and is this a valid discussion point, taking into account their methodology and data?