Shades of upside down Tiljander. McIntyre is delving further into the Marcott proxy issue and it looks almost certain now there’s a statistical processing error (selection bias).
Steve McIntyre writes:
In the graphic below, I’ve plotted Marcott’s NHX reconstruction against an emulation (weighting by latitude and gridcell as described in script) using proxies with published dates rather than Marcott dates. (I am using this version because it illustrates the uptick using Marcott methodology. Marcott re-dating is an important issue that I will return to.) The uptick in the emulation occurs in 2000 rather than 1940; the slight offset makes it discernible for sharp eyes below.
Marcottian upticks arise because of proxy inconsistency: one (or two) proxies have different signs or quantities than the larger population, but continue one step longer. This is also the reason why the effect is mitigated in the infilled variation. In principle, downticks can also occur – a matter that will be covered in my next post, which will probably be on the relationship between Marcottian re-dating and upticks.
Read his full post here: How Marcottian Upticks Arise
Maybe we need an Uptick Rule for paleoclimatology
Another ‘climate scientist’ makes an error. Who would have thought it?
More excellent work from Steve McIntyre. I do believe he is thoroughly enjoying himself with this paper…
At least Marcott has something to show for his endeavours, having now entered the English language as the adjective “Marcottian”.
Steve will disassemble another Skeptical Science Syndrome paper.
This illness seems to be reaching epidemic proportions as of late.
The whole AGW world is abuzz with this Marcott paper.
once again, the public at large will ignore global cooling until it is too late
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
Plot this. It’s the total number of sunspots in each cycle since 1878 and the total of the yearly average CET temperature for each cycle.
You will see that from 1898 up to 1954 the temperature was bouncing up and down in line with the sunspot cycle; after that the temperature is in antiphase with the sunspot cycle and eventually turns into a hockey stick.
Cycle min   10-year total   10-year total of
year        sunspots        yearly ave temp
1889        139672          131.1
1901        166052          142.0
1913        134136          140.6
1923        172048          141.8
1933        151884          141.0
1944        222417          145.3
1954        278025          146.2
1964        350626          143.7
1976        253113          144.7
1986        307842          143.1
1996        289910          147.8
2008        238071          155.7
Mann is doing a premature Snoopy dance over this paper, over on Facebook:
http://s15.postimage.org/5x1hmvhcr/Mann_Celebration2013.jpg
I wouldn’t describe it so much as “proxy sign inconsistency”, rather as the average changing because the composition of the set being averaged has changed. The plotted value at a given time represents an average of all the proxies which have values for that time. But as time approaches the present, fewer and fewer proxies exist. Dropping a below-average proxy (because its data set ends) results in the average increasing, even if all the other proxies remain the same.
As an example, if you plot the average height of men in my family over time, it takes a jump when my (relatively short) grandfather passed away.
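The jump described above can be reproduced in a few lines; the names and numbers here are invented purely for illustration:

```python
def running_average(series):
    """At each step, average whatever series still report a value
    (None marks a series that has ended)."""
    steps = len(next(iter(series.values())))
    out = []
    for step in range(steps):
        vals = [s[step] for s in series.values() if s[step] is not None]
        out.append(sum(vals) / len(vals))
    return out

# Invented heights in cm; the short series ends before the last step.
heights = {
    "grandfather": [165, 165, None],
    "father":      [180, 180, 180],
    "son":         [182, 182, 182],
}

print(running_average(heights))
# Nothing in the surviving series changed, yet the last value jumps
# from ~175.7 to 181.0 purely because the short series dropped out.
```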
Henry@Kelvin Potter
there is no temp?
When there is evidence pointing toward malice aforethought, I don’t know if “error” is the right word.
It sort of downplays what Marcott did. Like he just stepped out for a bit and forgot the kettle boiling.
That would be an error. Missing the cutoff man on the throw to home. Another error.
But what Marcott did is throw a fastball at our heads, with the hope that after the bench-clearing brawl, the umpire would forget about him.
There has to be a better word for it than “error”.
Marcott is in the same camp with Pete Rose, Roger Clemens, and Barry Bonds. He needs an asterisk.
How about banned from baseball?
These youngsters, with their IPCC mentors, are systematically trying to plaster over the cracks in the alarmists’ case.
If any sceptic brings up the ‘CO2 follows temperature rises’ argument, they’re directed at Shakun et al, a Rube Goldberg ‘heat took the pretty way across the globe’ nightmare of a paper.
If we now bring up ‘Mann’s Hockey Stick is broken, and look at Gergis!’, we will be pointed at Marcott et al.
Same people, same aim, same tricks, and the MSM lap it up. A PR triumph for alarmists, and only science the loser.
If anyone doubts that fake, fudged or fraudulent made-up stuff can influence events, look up the Zinoviev Letter.
So the main body of the plot is an average of 5 proxies, but the last part is one proxy. Total garbage or an intentional fabrication? Take your pick.
I find just one story in the Guardian on the Marcott paper, in the World News section, and it does not appear in their Climate Change section. There would normally be about seven stories peppered throughout the Climate Change section of the Guardian. As for the BBC, I can’t find anything. Maybe it’s just me. What do people think is going on?
Guardian – AP feed article
http://www.guardian.co.uk/world/feedarticle/10696412
BBC – News story in Spanish in 2011 mentioning Shaun Marcott
[Google Translation]
http://www.bbc.co.uk/mundo/noticias/2011/08/110802_antartida_agua_am.shtml
Perhaps all the so-called “peers” who reviewed Marcott should be very-publicly named & shamed.
and it looks almost certain now there’s a statistical processing error
========
I would never say “error” to this…..
Errors would have given the results we are all getting….
To get the results they got….you have to do it on purpose…..you have to try different combinations until you get this result
As Willis and others have observed, the hockey stick uptick does not exist in any of the proxy data. Marcott takes this data and builds a 1,000 run Monte-Carlo for each data set, then averages them up and finds the hockey stick.
One of the first lessons any statistician learns is, get this, to actually look at the raw data before manipulating it! Anscombe’s Quartet is sometimes used to teach this basic principle.
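Anscombe’s point is easy to demonstrate: four small data sets share nearly identical summary statistics yet look completely different when plotted, which is the classic argument for eyeballing raw data before trusting the numbers.

```python
# Anscombe's Quartet: the standard published values.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
xs = [x123, x123, x123, x4]
ys = [
    [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
    [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
]

def mean(v):
    return sum(v) / len(v)

# All four sets report the same x mean (9.0) and y mean (7.5), despite
# one being linear, one curved, one linear-with-outlier, and one vertical.
for x, y in zip(xs, ys):
    print(round(mean(x), 2), round(mean(y), 2))  # 9.0 7.5 each time
```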
So the Monte-Carlo data sets have the uptick but the raw data doesn’t. The error was probably in how the simulations were initialized at zero age. The Marcott.SM.PDF file shows the uptick appears in all of these Monte-Carlo runs, not just a few.
How this could be missed by Marcott et al. or the reviewers is beyond me. Is it malice or foolishness? Hard to tell, but I guarantee that if the error were in the opposite direction, it would have been investigated. At the very least this is confirmation bias.
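A toy sketch of why averaging Monte-Carlo runs does not wash the uptick out (invented numbers, not Marcott’s actual code): if the one proxy reaching the final step sits above the rest, date jitter elsewhere cannot change the composition of the average at the endpoint, so every realization ends with a jump.

```python
import random

random.seed(0)

FINAL = 10  # time steps 0..10

def one_realization():
    """One Monte-Carlo draw: jitter the short proxies' end dates (a
    stand-in for age-model perturbation), then average whatever proxies
    report a value at each step."""
    # Four flat proxies nominally end at step 9; jitter by 0 or -1.
    short_ends = [9 + random.choice([-1, 0]) for _ in range(4)]
    series = []
    for t in range(FINAL + 1):
        vals = [0.0 for e in short_ends if t <= e]  # flat proxies
        vals.append(1.0)  # the warmer proxy spans the whole record
        series.append(sum(vals) / len(vals))
    return series

runs = [one_realization() for _ in range(1000)]
# The endpoint composition is identical in every draw, so the uptick
# appears in all 1000 realizations, not just a few.
print(sum(1 for s in runs if s[-1] > s[0]))  # prints 1000
```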
I have to take my hat off to McIntyre. His thoroughness and attention to detail is the stuff of which great science is made. It is an example that we all can take a lesson from, particularly Dr. Marcott.
Thread winner right there! Well said.
Calling it “proxy sign inconsistency” is being charitable in the extreme. There is an expression about “remarkable claims require remarkable proof”. This whole affair is a statement about remarkable claims requiring almost no proof if they are for “The Cause”. The magazine “Science” and the referees should be ashamed of themselves, at best they have been a tool for naked advocacy.
Kelvin Vaughan says:
March 16, 2013 at 8:35 am
Man, I hates it when people do that. I took a look at your numbers, Kelvin. There is NO STATISTICAL SIGNIFICANCE to the correlation between the temperature and the sunspot datasets you show, p = 0.19. Heck, there’s no correlation even without adjustment for autocorrelation, p = 0.09 in that case.
There’s a technical name for that kind of result.
It’s “hogwash”.
RUN THE NUMBERS, people. That’s why statistics was invented—to let us know what is significant and what isn’t. What you’ve given us isn’t significant in the slightest.
And if your math isn’t strong enough to handle the calculations for significance, you have two choices:
1. Do your math pushups until your skills ARE strong enough for you to play, or
2. Sit on the sidelines and pay close attention to the game.
Both are good positions to take, no shame in either one.
Best regards,
w.
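Willis’s numbers can be checked directly against the table in Kelvin’s comment; the sketch below computes the plain Pearson correlation and its t statistic from those twelve pairs, with no autocorrelation adjustment (which would only weaken the result further):

```python
import math

# Kelvin Vaughan's 12 (10-yr sunspot total, 10-yr CET temp total) pairs.
spots = [139672, 166052, 134136, 172048, 151884, 222417,
         278025, 350626, 253113, 307842, 289910, 238071]
temps = [131.1, 142.0, 140.6, 141.8, 141.0, 145.3,
         146.2, 143.7, 144.7, 143.1, 147.8, 155.7]

n = len(spots)
mx, my = sum(spots) / n, sum(temps) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(spots, temps))
sxx = sum((x - mx) ** 2 for x in spots)
syy = sum((y - my) ** 2 for y in temps)
r = sxy / math.sqrt(sxx * syy)                    # Pearson correlation
t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)   # t statistic, df = n - 2

print(round(r, 2), round(t, 2))
```

The t statistic comes out around 1.9, short of the ~2.23 two-sided 5% critical value for 10 degrees of freedom, consistent with the p ≈ 0.09 Willis quotes even before any autocorrelation adjustment.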
I think we need a new Nuremberg trial. It would be fine if it could be arranged in Nuremberg, on historical grounds.
We shouldn’t be too hard on Marcott; he has basically admitted to SM that the uptick is c##p, which is totally different from Mann’s various replies on the HS. I would not be surprised if Mann was one of the main reviewers and told him to stick the uptick on. This is pure speculation, of course.
This is the problem with doing science with a preconceived agenda.
If you honestly work with data you will question such upticks.
But if the error fulfills your preconception, you let it pass as a discovery.
Same for reviewers.
If temperature standstill continues, soon children won’t know what global warming is. 🙂
https://twitter.com/TheGWPF/status/312535504002375680
When I was in university, doing any sort of statistical analysis required days, weeks, or even months of laborious calculations using slide rules and log tables. I and my fellow students would be sure to consult our profs, thesis advisors, or other experts to make certain that what we proposed doing was valid before attempting to implement it.
I wonder how much of the apparent problems that stem from statistical issues arise out of how easy it is to undertake complex and subtle analyses nowadays without knowing much about what is actually being done.
Modern computer statistical ‘toolboxes’ will cheerfully pull data out of your spreadsheets and output a result in minutes. And then you can try some other method and get another result. And repeat multiple times, while fiddling with the parameters and methodologies, until you see an output that seems to correspond with your preconceptions.
All without needing to understand more than a minimum about what the toolbox is actually doing.
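The “fiddle until something matches” workflow described above is the classic multiple-comparisons trap, and it can be sketched in a few lines (invented toy data; at the 5% level, roughly 1 in 20 pure-noise comparisons will look “significant” by chance):

```python
import math
import random

random.seed(42)

def pearson_r(xs, ys):
    """Plain Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

n = 30
target = [random.gauss(0, 1) for _ in range(n)]

# Try 100 unrelated random "predictors" against the same target, keeping
# any whose |t| clears the two-sided 5% critical value (~2.048, df = 28).
hits = 0
for _ in range(100):
    xs = [random.gauss(0, 1) for _ in range(n)]
    r = pearson_r(xs, target)
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    if abs(t) > 2.048:
        hits += 1

print(hits)  # typically around 5 of 100 noise predictors "pass"
```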