Proxy spikes: The missed message in Marcott et al

Story submitted by WUWT reader Nancy Green

There is a message in Marcott that I think many have missed. Marcott tells us almost nothing about how the past compares with today, because of the resolution problem. Marcott recognizes this in their FAQ. The probability function is specific to the resolution. Thus, you cannot infer the probability function for a high resolution series from a low resolution series, because you cannot infer a high resolution signal from a low resolution signal. The result is nonsense.

However, what Marcott does tell us is still very important and I hope the authors of Marcott et al will take the time to consider. The easiest way to explain is by analogy:

Fifty years ago, astronomers searched extensively for planets around other stars using low resolution equipment. They found none and concluded that they were unlikely to find any at the existing resolution. However, some scientists and the press generalized this further, saying there were unlikely to be planets around other stars at all, because none had been found.

This is the argument that since we haven’t found 20th century equivalent spikes in low resolution paleo proxies, they are unlikely to exist. However, this is a circular argument, and it is why Marcott et al has gotten into trouble. It didn’t hold for planets, and now we have evidence that it doesn’t hold for climate.

What astronomy found instead was that as we increased the resolution we found planets. Not just a few, but almost everywhere we looked. This is completely contrary to what the low resolution data told us and this example shows the problems with today’s thinking. You cannot use a low resolution series to infer anything reliable about a high resolution series.

However, the reverse is not true. What Marcott is showing is that in the high resolution proxies there is a temperature spike. This is equivalent to looking at the first star with high resolution equipment and finding planets. To find a planet on the first star tells us you are likely to find planets around many stars.

Thus, what Marcott is telling us is that we should expect to find a 20th century type spike in many high resolution paleo series. Rather than being an anomaly, the 20th century spike should appear in many places as we improve the resolution of the paleo temperature series. This is the message of Marcott and it is an important message that the researchers need to consider.

Marcott et al: You have just looked at your first star with high resolution equipment and found a planet. Are you then to conclude that, since none of the other stars show planets at low resolution, there are no planets around them? That is nonsense. The only conclusion you can reasonably make is that as you increase the resolution of other paleo proxies, you are more likely to find spikes in them as well.
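To make the resolution problem concrete, here is a minimal Python sketch on entirely synthetic data, not the Marcott proxies; the 300-year bin width is an assumption based on the resolution the Marcott FAQ itself reports:

```python
import numpy as np

# Synthetic "true" climate: 11,000 years of annual data with mild noise
# plus one ~century-wide spike of 1 C, loosely resembling the modern uptick.
rng = np.random.default_rng(0)
years = np.arange(11000)
spike = np.exp(-0.5 * ((years - 5000) / 50.0) ** 2)  # 1 C, ~100-yr width
true_temp = spike + 0.1 * rng.standard_normal(years.size)

# Simulate a low-resolution proxy by block-averaging into 300-year bins,
# comparable to the ~300-year resolution stated for the Marcott stack.
width = 300
n = years.size // width
proxy = true_temp[: n * width].reshape(n, width).mean(axis=1)

print(f"spike height in the annual record:  {spike.max():.2f} C")
print(f"largest bump in the 300-yr record:  {proxy.max():.2f} C")
# The 1 C century spike survives only as a bump of a few tenths of a
# degree, easily lost among proxy noise and dating uncertainty.
```

Run it and the spike all but vanishes; and there is no procedure that can reconstruct the spike from the handful of bin averages that remain.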

==============================================================

As a primer for this, our own “Charles the Moderator” submitted this low resolution Marcott proxy plot with Jo Nova’s plot of the Vostok ice core proxy overlaid to match the time scale. Yes, the vertical scales don’t match numerically (the tick intervals and baseline offsets differ), but this image is solely for entertainment purposes in the context of this article, and it does make the point visually.

Spikes anyone? – Anthony

[Image: Marcott proxy reconstruction with Vostok ice core overlay (marcottvostok2)]

(Added) Study: Recent heat spike unlike anything in 11,000 years. “Rapid” heat spike unlike anything in 11,000 years. Research released Thursday in the journal Science uses fossils of tiny marine organisms to reconstruct global temperatures… It shows how the globe for several thousands of years was cooling until an unprecedented reversal in the 20th century. — Seth Borenstein, The Associated Press, March 7th

Note: If somebody can point me to a comma delimited file of both the Marcott and Vostok datasets, I’d be happy to add a plot on a unified axis, or if you want to do one, leave a link to the finished image in comments using a service like Tinypic, Imageshack or Flickr. – Anthony


Leif Svalgaard

You cannot use a low resolution series to infer anything reliable about a high resolution series.
Yes you can. You can infer the long-term trend among other things. Don’t overstate your case.

I’ll take strong analogies over weak anomalies anytime! Kudos!

Karl W. Braun

Would it be possible then to produce a Marcottian type compilation but in high resolution? Is such data available?

Nancy Green rocks.
That’s everything these days. Willis rocks too, by way of comparison.

Ian W

What Nancy Green says is true. However, the astronomers looking for planets wanted to find them. Climate ‘scientists’ looking for hockey sticks want a nice straight shaft then a single blade. This was shown in the CG1 emails with the discussion about getting rid of the Medieval Warm Period. I doubt very much that we will see any concerted search for high resolution proxies for paleo climates as ‘The Team’ would see finding a twentieth century type spike several thousand years ago as a threat to their hypothesis (aka funding).

Anthony, the vertical scales DO match. I did it by eyeball only, but I did it carefully. The baseline is shifted with a WAG based on the smoothed lines.
REPLY: Perhaps I wasn’t clear. I’m saying they don’t match numerically on the scales, due to the ticks being different and the offset difference; the amplitude of the scales looks like a reasonable match (added clarification to body of story). – Anthony

Hoser

Oh, come on Anthony, you can convert a tab-delimited file or even fixed width file to CSV. We all know you are much better than you sometimes pretend to be.

Mark T

Leif: look up the term aliasing.
Mark

trafamadore

“Marcott tells us almost nothing about how the past compares with today, because of the resolution problem. Marcott recognizes this in their FAQ. The probability function is specific to the resolution. Thus, you cannot infer the probability function for a high resolution series from a low resolution series, because you cannot infer a high resolution signal from a low resolution signal. The result is nonsense.”
Perhaps you should reread the paragraph in the Marcott paper starting with, “Because the relatively low resolution and time-uncertainty of our data sets should generally suppress higher-frequency temperature variability, an important question is whether the Holocene stack adequately represents centennial- or millennial-scale variability.”
I know the FAQ helps most WU readers, but I don’t think there is anything new there. It’s all in the original paper, including the points made above.

Skiphil

fyi, Tamino has a new post claiming, via a “three spikes” test, to show that any previous increase comparable to the past century would have shown up in the Marcott 11,300 year study period.
Here is an interesting comment in response:
http://climateaudit.org/2013/04/02/april-fools-day-for-marcott-et-al/#comment-409677

I don’t believe Tamino’s post on “Smearing Climate Data”
http://tamino.wordpress.com/2013/04/03/smearing-climate-data/
shows what it claims, namely that the Marcott et al methodology would be able to detect thermal spikes such as the one we’ve seen in the past century and a half. Tamino’s result is not meaningful because for such spikes to exist in the Marcott data (where there is intrinsic natural and measurement smearing) the actual non-smeared spike would have had to be an order of magnitude larger, so as to appear smeared and flattened out as in the actual data.
In other words, the proxies are intrinsically smeared by natural processes over the millennia, as well as by the collection procedure, and by the averaging performed in the calculation. For a spike as used in Tamino’s demonstration to exist in the physical data, the actual temperature excursion, over a century, would have had to be one to two orders of magnitude greater than the current one. That is, Tamino may have shown that Marcott et al can rule out spikes of 10 or 100 degrees. But he did not show that spikes of 1 degree would have been detected.
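The size of that attenuation is easy to estimate. A minimal Python sketch, assuming Gaussian smearing with guessed widths (this is not Tamino’s or Marcott’s actual procedure):

```python
import numpy as np

def attenuation(spike_sigma, smear_sigma):
    """Surviving peak fraction of a Gaussian spike after Gaussian smearing.

    A Gaussian of width a convolved with a unit-area Gaussian kernel of
    width b stays Gaussian with width sqrt(a^2 + b^2), so its peak is
    reduced by the factor a / sqrt(a^2 + b^2).
    """
    return spike_sigma / np.hypot(spike_sigma, smear_sigma)

# A ~century spike (sigma ~40 yr) against combined smearing of one to
# several centuries (assumed values for proxy smoothing plus dating error):
for smear in (100, 200, 400):
    f = attenuation(40, smear)
    print(f"smear {smear:3d} yr: a 1 C spike survives as {f:.2f} C")
```

Under these assumed numbers a 1 C century-scale spike survives as only 0.1 to 0.4 C, which is precisely the point: such a test can rule out spikes of several degrees, not spikes of one degree.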

pottereaton

It is refreshing to see an analogy that actually fits. I don’t know what it is about climate science, but the number of bogus analogies you find in discussions on the subject is disturbing.
Good analogies are invaluable in making difficult concepts comprehensible to those who lack understanding, which includes me.

Mark T says:
April 3, 2013 at 8:11 pm
Leif: look up the term aliasing.
Won’t make any difference as the data is not sampled but averaged [as far as I can assess]

toto

Strangely enough, the comment I left at RC applies perfectly to this post:
“1- That’s not “spikes”, that’s “noise”. Why do you think the authors performed this Monte Carlo simulation? Because the raw data itself is not interpretable on short timescales.
2- A short “spike” in the past could not be comparable to modern warming, because modern warming is not just fast, it is also *durable*. Even if CO2 emissions completely stop in 2100, the warmth will remain for centuries. If something like *that* had happened in the past, Marcott’s proxies and methods would have detected it.
They didn’t, so it hasn’t. Hence, “unprecedented”.”

MattS

Leif Svalgaard,
“Yes you can. You can infer the long-term trend among other things.”
To my eyes it looks like the long term trend in the Marcott series is going the wrong way for the CAGW crowd.

Mooloo

A short “spike” in the past could not be comparable to modern warming, because modern warming is not just fast, it is also *durable*. Even if CO2 emissions completely stop in 2100, the warmth will remain for centuries.
You may believe that. But you can’t believe it based on a low resolution study such as Marcott.
This attempt to defend the “spike” by bringing in material completely external to the study is sadly typical of its defenders.
1) Marcott can’t show whether a spike is precedented or not.
2) Marcott can’t show the cause of anything at all, let alone a spike.
3) Marcott can say nothing of any long term trends (note, the base trend is down).
Your defence is the “look! flying monkeys!” defence. Please stop.

Mooloo

Marcott can say nothing of any long term trends into the future.
Sorry.

davidmhoffer

toto says:
April 3, 2013 at 8:41 pm
Because the raw data itself is not interpretable on short timescales.
and
A short “spike” in the past could not be comparable to modern warming,
>>>>>>>>>>>>>>>>>>
So you are saying that modern warming is different from past warming, even though you’ve stated that the data can’t be used to interpret past warming, so you don’t actually know what it looked like at all? Yet you are confident that it is different?

markx

toto says: April 3, 2013 at 8:41 pm
“…A short “spike” in the past could not be comparable to modern warming, because modern warming is not just fast, it is also *durable*. Even if CO2 emissions completely stop in 2100, the warmth will remain for centuries…”
And you know this how, exactly? Crystal ball, perhaps? Or simply unquestioning faith in what can only be described as a “plausible theory with several major suppositions”?
Any suggestions as to what may have caused the “temporary short spikes” in the past?

Puppet_Master_Blaster_Master

[snip]
OK that’s it, you are banned. While I don’t like Mann’s issues either, that was uncalled for. Further, you have been abusing your welcome here by shape-shifting:
Puppet_Master_Blaster_Master
Enter_Sand_Man
Sad-But-True-Its-You
Are all you, plus there’s the fake email addresses. Congratulations. You have a triple policy violation all in one comment.
Get off my blog. – Anthony

William McClenney

Correct. Your conclusions can only be as accurate as your least accurate data.

Toto

toto says (April 3, 2013 at 8:41 pm):
“1- That’s not “spikes”, that’s “noise”.
A variant of weather is not climate.
2- A short “spike” in the past could not be comparable to modern warming, because modern warming is not just fast, it is also *durable*. Even if CO2 emissions completely stop in 2100, the warmth will remain for centuries.
Durable, that’s a good one. Even if I don’t see the evidence for it, I like the idea that the only thing that humans have done which is sustainable is to fix the climate. /sarc

Manfred

The case is also understated, because the issue is not only the low sampling frequency.
There is further spreading and flattening of maxima and minima due to
1. dating errors (suppose Hadcrut year 2001 temperatures would be computed by averaging 1905 temperatures from location A and 2003 temperatures from location B)
2. non temperature influences on proxies.
The latter appears to be very severe in this reconstruction. McIntyre’s hemispheric reconstructions have very little similarity with instrumental temperature for the whole instrumental record.
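The first effect is easy to demonstrate on its own. A minimal Python sketch, stacking synthetic records with an assumed 150-year age-model error:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000)
spike = np.exp(-0.5 * ((years - 1000) / 50.0) ** 2)  # the same 1 C event

# Stack 73 records of the identical event, each carrying an independent
# age-model error (assumed sigma = 150 yr), as in point 1 above.
n_records, date_sigma = 73, 150
stack = np.zeros_like(spike)
for _ in range(n_records):
    stack += np.roll(spike, int(rng.normal(0, date_sigma)))
stack /= n_records

print(f"true spike height:     {spike.max():.2f} C")
print(f"height in the stack:   {stack.max():.2f} C")
# Dating error alone spreads the event out and flattens its peak,
# before low sampling resolution or proxy noise are even considered.
```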

markx

The very good thing about Marcott et al is that it has brought this ‘spike’ issue out into a blazing bright spotlight.
The spike created by grafting high resolution instrumental data onto very low resolution, ‘smoothed by nature and statistics’ proxies (even though that was not the cause of Marcott’s uptick, as Steve McIntyre and others have shown) had CAGW proponents bringing up the defensive argument “well, we already have an instrument record that shows the same thing”.
I love that Vostok overlay by Anthony, I’ll run that under a few noses. Thanks!

Eugene WR Gallun

Who? Hah! Who? Hah!
Nancy Green!!!!!
Yea!!!!!
Eugene WR Gallun

Leon0112

One thing I don’t understand. With 11000 years of history indicating the current trend is downwards, wouldn’t you think there might be a problem with a projection that goes straight up? The analysis of the paleo data would indicate CAGW projections are likely wrong. What am I missing?


Mark T

Leif: sorry dude, but do you know what kind of average? Neither do I. It matters a lot. Without a priori knowledge of the actual process, it’s a WAG what the original data looked like. Moving averages decimated down are particularly messy in this regard.
You’d think Tamino, the guy with a real-world time series analysis gig, would realize how attenuated an output “spike” would be after passing through some apparently low frequency filtering mechanism, and would understand the implications.
Mark
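The ambiguity is easy to show. A minimal sketch with two synthetic histories and an assumed 300-year averaging/decimation step:

```python
import numpy as np

years = np.arange(3000)

# History A: a sharp +1 C spike lasting one century.
a = np.where((years >= 1600) & (years < 1700), 1.0, 0.0)
# History B: a mild +0.33 C plateau lasting three centuries.
b = np.where((years >= 1500) & (years < 1800), 1.0 / 3.0, 0.0)

def low_res(x, width=300):
    """Moving average decimated to one value per 300-year bin."""
    n = x.size // width
    return x[: n * width].reshape(n, width).mean(axis=1)

print(np.round(low_res(a), 3))  # [0. 0. 0. 0. 0. 0.333 0. 0. 0. 0.]
print(np.round(low_res(b), 3))  # [0. 0. 0. 0. 0. 0.333 0. 0. 0. 0.]
# Two radically different histories, one identical low-resolution
# record: inverting back to the original data is a guess.
```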

Mark T

Don’t get me wrong Leif, I’m more saying that if you highlight the word “reliable” in the quote you chose, rather than “anything,” the distinction is there. There is always an unknown, likely incalculable, uncertainty going in that direction.
MattS: I don’t recall Leif arguing differently.
Mark

Christopher Hanley

I think I follow toto’s argument: the current temperature trend is unprecedented in the past 10,000 years. We know that because it is mainly caused by human CO2 emissions and we know that because it is unprecedented in the past 10,000 years.

Thomas

“What Marcott is showing is that in the high resolution proxies there is a temperature spike. This is equivalent to looking at the first star with high resolution equipment and finding planets. To find a planet on the first star tells us you are likely to find planets around many stars.”
This analogy only works if you start by assuming that there is nothing special about the 20th century, i.e. that there can be no anthropogenic warming, and since Watts is using it to claim there is nothing special about the 20th century it becomes a circular argument. What Watts is doing is like starting with the observation that there are planets around our sun and concluding there should be planets around others, neglecting that if there weren’t planets here we wouldn’t be able to live here to observe the fact. You always have to consider if there is something special about one observation before you start extrapolating from it!

Brian Eglinton

Marcott has certainly generated a lot of different posts now.
I made the observation on an early post at Climate Audit that the uptick at the end was very important. It may not have been “robust” so that Nick & Richard Telford can dismiss it as irrelevant, but it was a match to the instrumental record – leading to a sense of “confidence” in the proxy.
Critics jumped on the uptick as poor science, but within the paradigm of climate science, it was the uptick that made the paper useful. Get the headlines and wear the flak – mission accomplished. And in the climate science establishment it would be seen as the correct way to treat the data.
Now in the face of so many criticisms, their reaction is “who cares if the uptick was an artifact and not robust, the thermometers say it is actually real”, so they say the critics are running around over molehills.
But this post picks another aspect of the analysis. It is a feature of this analysis that it smooths out the past and gets rid of spikes. It is not a bug. The clear issue is to convey a smoothly varying climate so that the present appears way out of line with natural processes. This paper accomplishes this – and is rewarded with congratulations even from the more reasonable contributors like Richard Telford.
The third great “feature” of this study was to smear the data from various far flung places in order to come up with the most wonderful and physically meaningless statistic of “world” temperature. Using comparisons with ice cores can now be dismissed as looking only at regional effects, not global data.
I can understand why this study is very important to the establishment. And I suspect it may well survive the adverse comments.
For me though, it is clear that proxies for past temperatures or climate conditions are untestable and cannot be realistically calibrated. They are the unleashing of the tremendous power of human imagination onto a very noisy and mysterious pile of data, drawing out patterns based on very strong assumptions and world views to make grand pronouncements about the past. The fact that scientists are generally happy with such studies [that curiously seem to overturn each other with regular frequency!] is actually a sad commentary on the nature and direction of modern science [or is that post-modern science?].
As I have noted before, such highly suspect proxies should not even be considered in the same category as very large scale effects – such as villages and roads now abandoned under ice, the shifting of forest lines, written histories, etc.

John Blake

There is a saying in Product Development circles to the effect that, “Just because you don’t know how to do it, doesn’t mean it can’t be done.” Resolution aside, we’d also cite Milutin Milankovich and Alfred Wegener, both universally condemned by geophysicists for decades on the specious grounds that equinoctial precession was solely an astronomical phenomenon while deep-ocean (bathyscaphic) geology was similar to that of continental landmasses.
Since no-one can experiment with global climate, “climatology” is not an empirical scientific discipline but a classificatory exercise akin to botany. “Peer review” of such as Marcott accordingly is not in any sense a validation, which can only come from Nature, but mere semantics indicative of a scholastic Aristotelian consensus. Characterizing this as “mere word-smithery” understates the case.

Oakwood

So Thomas, what you’re saying is the 20th C temperature is not itself evidence for unusual behaviour. It’s because we know something unusual is happening that the temperature must therefore be unusual. But Mann et al 1998 started by telling us ‘look, 20th C temperature is unusual, so something unusual must be happening’. Why am I feeling a little dizzy?

This analogy only works if you start by assuming that there is nothing special about the 20th century
But that is the way science works. Science ASSUMES that the null hypothesis is true until it is PROVEN false. Since no one has proven this false, we must as scientists conclude that there is nothing special about the 20th century.
This makes the rest of your comments, Thomas, rather unnecessary, since the null hypothesis is that any changes witnessed are due to natural variation. This is most definitely not a circular argument; this is THE actual science today. We must assume that there is nothing special about an event or a particular time period, because if you do assume there is something special you are subjecting data and methods to what is known as observer bias.
And I do not think you quite understood the mistake in logic there either. This is the problem with observer bias. If you incorrectly dictate your logic in such a way that you assume a certain time period or a certain star system (such as our sun) is special, that is the second you are no longer looking at actual data but jumping at a whim. If a certain star system is special, for instance, the science will eventually find the evidence of this and the entire conjecture WILL be proven by the science. This will never happen the other way around. You cannot FORCE science to rotate around your point of reference and actually find evidence that way. The mistakes in logic and technique that you have already applied show that instead of ever finding proof of the special, you will instead just double down on assumptions and observer bias and never actually find the proof that you need. Indeed, we find this time and time again in climate change.
So that is why climate science is doomed, in other words, to be stuck in a quagmire of tribalism and belief. The entire process was wrong from the start, because it made an assumption that was never proven scientifically, and instead of leaving it at the hypothesis stage, scientists jumped the shark and made proclamations that we live in special times. This is why the scientists find “special” things in the data. They hunt and seek on a quest to find this missing evidence, and instead of ever finding it through this flawed approach, they find the statistical mistakes they made, more times than not.
Or they simply find an artifact in the data. How many times have we found, through torture of the data, that the scientist simply outputs the first hockey stick found? I am always curious how long it took them to find that hockey stick in data which does not show it. And the reason their methods are hard to duplicate?
Why, that is the easy answer. I bet they really cannot recall what they did exactly to get that result. But that is not truly incompetence, that is just observer bias in action. The incompetence came in when they subjected themselves to the observer bias in the first place.

Jer0me

A very good analogy that explains the main problem with the paper very clearly.
Thank you.

Jer0me

Thomas says:
April 3, 2013 at 10:55 pm

This analogy only works if you start by assuming that there is nothing special about the 20th century, i.e. that there can be no anthropogenic warming, and since Watts is using it to claim there is nothing special about the 20th century it becomes a circular argument. What Watts is doing is like starting with the observation that there are planets around our sun and concluding there should be planets around others, neglecting that if there weren’t planets here we wouldn’t be able to live here to observe the fact. You always have to consider if there is something special about one observation before you start extrapolating from it!

I’m afraid the converse is true as well. If you see planets around our perfectly normal sun, then it is fairly safe to assume that we are NOT the anomaly. The only way you could assume that we were is if you assume that life will happen anywhere at all, so it was likely to happen around the only star with planets; otherwise we are an extreme outlier in the universe.
Likewise, it is more sensible to assume, without any evidence to the contrary, that temperature fluctuates quite a bit, and that the recent temperature increase is not unusual.
In order to convince the world that this is NOT the case, and that extreme measures are required that could plunge us into relative poverty for decades, we really need extraordinary proof. This isn’t it, and none has been generated to date.

Henri MASSON

When analyzing time series, there are two things to keep in mind:
1- the Shannon theorem: it is impossible to detect a frequency higher than half the sampling frequency of your data (and the sampling interval is rather long for most of the proxies, so no “peaks” are detectable); in other (symmetrical) words, the shortest period you can detect ALWAYS exceeds two times the time interval between two samples.
2- to avoid border effects, the longest period you can detect with some accuracy may not exceed about 1/4 of the total duration of the time series. This is why, when using time series obtained exclusively from direct measurements of temperature (more accurate than proxies), you cannot detect periodicities longer than 60 years (as the longest continuous time series of temperature measurements do not exceed 4*60 = 240 years). I think this is the reason why Scafetta underlines the importance of a 60 year periodicity in temperature records, plus a long linear trend corresponding to an exit from the Dalton little ice age (circa 1830). If he had mixed proxies and temperature records, he could have seen that this “linear trend” is actually the ascending branch of another sinusoid (of period approximately equal to 360 years). The superposition of 11, 60 and 360 year sinusoids describes quite well all the climatic features detected over the last 2000 years, including the present standstill of the last 15 years. No significant contribution of anthropogenic greenhouse gases need be invoked at all to explain the experimental evidence. I can provide more details on this, if wanted/needed.
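Plugging Marcott-scale numbers into those two rules gives the usable period band directly. A back-of-envelope Python sketch (the 120-year figure is the median proxy resolution from the Marcott FAQ; the 240-year instrumental length is the assumption above):

```python
def period_limits(sample_interval_yr, record_length_yr):
    """Usable period band per the two rules above."""
    shortest = 2 * sample_interval_yr      # Shannon/Nyquist limit
    longest = record_length_yr / 4         # rough border-effect rule
    return shortest, longest

for name, dt, length in [("Marcott stack (120-yr median)", 120, 11300),
                         ("instrumental record", 1, 240)]:
    lo, hi = period_limits(dt, length)
    print(f"{name}: periods of roughly {lo}-{hi:.0f} years")
# Marcott stack: nothing shorter than ~240 years is detectable (the
# paper itself says ~300); instrumental: nothing longer than ~60 years.
```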

Thanks, Nancy.

Peter Miller

The current warming is around 100 years old.
The Vostok dataset shows 33 (eyeballed from chart) similar, or longer, warming periods over the past 11,000 years.
So, if you argue smoothing (low resolution) the data to achieve a long term trend is correct, then the current warming will be smoothed away.
If you argue the above provides insufficient resolution, then the argument has to be that we have seen the current situation circa 33 times before in the last 10,000 years, so the current warming is just another example of natural climate change.
Arguing that the extra carbon dioxide now in the atmosphere is solely responsible for all of the recent warming is clearly nonsense. However, arguing that CO2 was responsible for some of the recent warming does not seem unreasonable. Despite what the IPCC says, we still do not have a clue how much of the recent warming is man made and how much of it is natural.
Anyhow, far too much of ‘climate science’ is falsely concentrated on eliminating the previous warm periods of the Holocene Era (MWP, Roman, Minoan and Holocene Optimum) and exaggerating what is happening today.

Mike McMillan

I look forward to the Marcott/Vostok revised chart. Let’s make it a GIF instead of a JPG, though, so we don’t get all those compression artifacts.

markx

Thomas says: April 3, 2013 at 10:55 pm
This analogy only works if you start by assuming that there is nothing special about the 20th century
A fairly incredible statement, Thomas, when you have just seen it demonstrated that the recent spike disappears in the noise of a higher resolution proxy, and would probably be even less visible if we had long term precise instrumental data.
So, how do we really know whether the 20th century spike is ‘special’ or not?
I look forward to your answer to this important question, as the whole CAGW story rests upon that particular assumption.
(Please try to answer without mentioning the phrases “97%”, “deniers” or “conspiracy”).

How about this analogy:
I’m still alive and at my last check-up last week, doctors claim that I’m in a healthy condition.
This proves that my average temp was around 37°C from my birth till the last check-up. (proxy)
Since last week, however, I started taking my temp at regular time intervals, and as I started to get a fever, by the end of the week my average temp over this week was an alarming 38.5°C.
If I plot these together, we would see a large uptick as well.
But does it mean that I never had a fever before?

J

[snip . . dead link . . mod]

izen

Comparing local temperature spikes to a global reconstruction is apples-to-oranges and meaningless.
There are finer resolution global temperature data which also show no past spikes comparable to the last century warming.

johnmarshall

Sounds good to me—–MWP, RWP and all the others.

@ Nancy Green……. excellent post! And correct.
To the others believing this study would or could pick up a spike similar to our recent temp averages…… stop. Move away from the PC. Pick up something useless like an i-something or use your cell phone for communicating. And no, don’t ever pick up a calculator again. That includes you, Tammy. It’s an abhorrently stupid conversation to be having. The time resolution doesn’t allow for this. I wrote a post on this last month.
Instead of directing traffic to my site, I’ll offer this. ….. Just go to the WFT site. Pick one of the land/ocean data sets. It doesn’t matter which one, GISS or HadCrut…. it doesn’t even have to have ocean. Just pick one. Be generous and select from 1900, so as to get the full MONTHLY data set for last century and to the present. Click on “plot graph”. Note the high and low points as to where they are in relation to the vertical axis. Regardless of whether you picked HadCrut or GISS, the difference between the high and low should be about 1.5 C. Now, we don’t say that is the amount of warming we have. We typically say we have had about 1/2 to 1 deg warming or less. I believe Marcott’s spike showed about 0.8 C warming. Everyone with me?
Now Marcott seems confused about his resolution as well, in that they state in their SI and the FAQ….. “the average resolution of the 73 paleoclimate series is 160 years, and the median is 120.” and “there is no statistically valid resolution to the combined proxy set for anything less than 300-year periods.” ……. okay, whatever. So, let’s be generous and take them at the lowest figure…… no, let’s even go lower. Go back to your WFT app…… now, set the mean at 100 year resolution, or for the math challenged, 1200 months. Now, click on “plot graph” again. Now mark the values of the low and high points and do that tricky math stuff. We should note about a 0.11 deg C increase. This is the “averaging” over time. What people are trying to do is the same thing as comparing one day’s temperature to the average annual temperature. Or even a month’s average. It’s insidiously stupid. (A sketch of this like-for-like comparison follows below.)
If Marcott or anyone else wishes to compare the recent temp changes to what’s on Marcott’s graph, this….. http://www.woodfortrees.org/plot/gistemp/from:1900/mean:1200 is what it should be compared to. Or, to graphically show this, it should look something like this ….. http://suyts.files.wordpress.com/2013/03/image_thumb181.png?w=629&h=376 The little red tick mark at the end of the stick. That’s a near apples-to-apples comparison with time. With a generous resolution to favor a larger uptick. If someone wants or needs to see what I’m saying illustrated, just click on my name to go to my site and search for “dagger” and you’ll find the post.
To those who think Marcott spliced the temp record onto his graph, I don’t think that’s what he did. I think he simply manipulated his proxies to emulate Mann’s HS. In other words he used the temp record as a template and fit his garbage. So it’s a distinction, but without a real difference.
PS. Anthony, I can appreciate the desire to plot Vostok with Marcott’s, but wouldn’t that also be a distortion of the spatial resolution? But, heck, Marcott’s proxies swung faster than the thermometer record. MD01-2421 recorded a 1.15 deg uptick going from 356 BP to 334 BP….. that’s 22 yrs. That’s just the second one I looked at.
PPS. The Marcott data is in workbook form, in other words it has many pages. So moving it to a flat file would be challenging.
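For readers without the WFT site at hand, the same 1200-month averaging exercise can be mimicked in Python with a synthetic stand-in for a monthly series (assumed 0.8 C/century trend and noise level; not the actual GISS/HadCRUT data):

```python
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(113 * 12)  # 1900 to roughly the present, monthly
monthly = 0.8 * months / 1200 + 0.25 * rng.standard_normal(months.size)

def running_mean(x, window):
    """Centered moving average, same idea as WFT's 'mean' setting."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

smoothed = running_mean(monthly, 1200)  # 100-year (1200-month) mean

print(f"range of the monthly series:  {monthly.max() - monthly.min():.2f} C")
print(f"range after the 100-yr mean:  {smoothed.max() - smoothed.min():.2f} C")
# The monthly series spans well over 1 C; after century-scale averaging
# only about a tenth of a degree of range remains -- that is the
# like-for-like number to lay against a low-resolution proxy stack.
```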

richard verney

Leif Svalgaard says:
April 3, 2013 at 7:37 pm
You cannot use a low resolution series to infer anything reliable about a high resolution series.
Yes you can. You can infer the long-term trend among other things. Don’t overstate your case.
/////////////////////////////////////////////////////////////////////
To be more accurate, should you not be saying: “You can infer the long-term trend SMOOTHED BY THE LIMITATIONS OF THE LOW RESOLUTION USED IN THE STUDY among other things”?

HLx

Quote Leif Svalgaard
> You cannot use a low resolution series to infer anything reliable about a high resolution series.
Yes you can. You can infer the long-term trend among other things. Don’t overstate your case.
Unquote
As I see your comments on over 50% of the posts here on WUWT, it appears to me that your comments may be perceived as aggressive.
There is a nice way of saying things, and there are a number of more or less aggressive ways of saying the same thing. Maybe it would be more productive to consider a less hostile way of addressing problems with both posts and comments?
HLx

knik

lsvalgaard says:
April 3, 2013 at 8:40 pm
Mark T says:
April 3, 2013 at 8:11 pm
Leif: look up the term aliasing.
Won’t make any difference as the data is not sampled but averaged [as far as I can assess]

Actually, such average sampling is considered worse: it is more distorted and has less high frequency content than perfect sampling.
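The trade-off between the two schemes is easy to see. A minimal sketch with a synthetic spike and an assumed 300-year spacing:

```python
import numpy as np

years = np.arange(3000)
spike = np.exp(-0.5 * ((years - 1500) / 40.0) ** 2)  # 1 C century spike
width = 300

# Perfect (point) sampling every 300 years: whether the spike is caught
# at nearly full height or missed entirely depends on sample phase.
print("points, phase 0:  ", np.round(spike[::width], 2))
print("points, phase 150:", np.round(spike[150::width], 2))

# Block averaging over each 300-year window: phase hardly matters, but
# anything shorter than the window is strongly attenuated.
n = years.size // width
print("block means:      ",
      np.round(spike[: n * width].reshape(n, width).mean(axis=1), 2))
```

Averaging never shows the spike at anything like full height (the lost high frequency content), while point sampling can either catch it or miss it altogether depending on phase, which is aliasing territory.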

richard verney

Brian Eglinton says:
April 3, 2013 at 11:23 pm
“…For me though, it is clear that proxies for past temperatures or climate conditions are untestable and cannot be realistically calibrated.”
////////////////////////////////////
There are 2 main problems with proxies.
1.
Proxies respond to favourable environmental growing conditions in general, and not to one factor in isolation. It is therefore all but impossible to isolate the response to, say, temperature from the more general response of the proxy to beneficial environmental conditions. In other words, it is difficult to separate the signal that we are looking for from the noise.
2.
Tuning and calibration. One would need a very lengthy overlap with the thermometer record to properly and reliably tune the proxy to the thermometer record. Consider Mann’s tree rings. He was able to tune them to the 1910 to 1960 thermometer record, but when tuned to this they diverged from the post 1960 thermometer record, showing either a breakdown in the proxy’s response to temperature or a tuning issue. Had Mann tuned his tree proxies to, say, the post 1960 thermometer record, the hockey stick would have taken a different shape.
In summary, there are always huge error bars and uncertainties. Proxies should always be taken with a pinch of salt, merely giving a wide ballpark indication of what may be happening, but nothing more than that. One must never splice one proxy record onto another. The calibration and tuning issues are too great to lead to anything other than a misleading result.
If our thermometer record went back 11,000 years we might see something very different to the smoothed curve produced by the Marcott proxy study.
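The calibration-window sensitivity can be illustrated with a toy divergence case (all numbers invented; this is not Mann’s actual data or method):

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1900, 2000)
temp = 0.01 * (years - 1900) + 0.1 * rng.standard_normal(years.size)

# The proxy tracks temperature until ~1960, then a non-temperature
# influence (a toy "divergence problem") pulls it steadily away.
proxy = temp + 0.05 * rng.standard_normal(years.size)
proxy[years >= 1960] -= 0.015 * (years[years >= 1960] - 1960)

def calibrate(start, end):
    """Least-squares mapping proxy -> temperature over one window."""
    m = (years >= start) & (years < end)
    slope, intercept = np.polyfit(proxy[m], temp[m], 1)
    return slope, intercept

for start, end in [(1910, 1960), (1960, 2000)]:
    s, i = calibrate(start, end)
    print(f"calibrated on {start}-{end}: temp = {s:+.2f}*proxy {i:+.2f}")
# The two windows give very different transfer functions, so any
# reconstruction of the past inherits the choice of calibration window.
```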