UPDATE: The Strata-Sphere server can’t handle the load of interest, so I’ve taken the images in this article offline and disabled the link to it. Once AJStrata gets the server up and running again I’ll put them back – Anthony
Readers may recall this quote from Dr. Phil Jones of CRU, by the BBC:
Q: Do you agree that from 1995 to the present there has been no statistically-significant global warming?
A: Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.
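Jones’s answer describes a standard procedure: fit an ordinary least-squares trend to the annual anomalies and check the slope’s t-statistic against the 95% critical value. A rough sketch in Python, using invented anomaly values (only the procedure mirrors the 1995 to 2009 question put to Jones):

```python
# Sketch of the significance test behind that answer: OLS trend on annual
# anomalies plus a t-test on the slope. The 15 anomaly values are invented.
import math

anoms = [0.32, 0.30, 0.40, 0.53, 0.31, 0.33, 0.41, 0.45,
         0.46, 0.44, 0.47, 0.42, 0.40, 0.31, 0.44]
n = len(anoms)
xs = list(range(n))

mx, my = sum(xs) / n, sum(anoms) / n
sxx = sum((x - mx) ** 2 for x in xs)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, anoms)) / sxx

# Standard error of the slope from the regression residuals
resid = [y - (my + slope * (x - mx)) for x, y in zip(xs, anoms)]
se = math.sqrt(sum(r * r for r in resid) / ((n - 2) * sxx))
t = slope / se

print(f"trend: {slope * 10:.3f} C/decade, t = {t:.2f}")
# 95% significance needs |t| > ~2.16 (two-sided t, 13 degrees of freedom)
```

With these made-up numbers the trend is positive but t comes out near 1, well short of the roughly 2.16 needed at 95% with 13 degrees of freedom – the same shape as Jones’s “positive but not significant” answer.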
A.J. Strata has done some significance tests:
CRU Raw Temp Data Shows No Significant Warming Over Most Of The World
Published by AJStrata at StrataSphere
Bottom Line – Using two back-of-the-envelope tests for significance against the CRU global temperature data I have discovered:
- 75% of the globe has not seen peak warming or cooling changes between the pre-1960 period and the 2000′s that rise above a 0.5°C threshold, a threshold well within the CRU’s own stated measurement uncertainties of +/- 1°C or worse.
- Assuming a peak-to-peak change (pre-1960 vs. 2000′s) should represent a change greater than 20% of the measured temperature range (i.e., if the measured temp range is 10° then a peak-to-peak change greater than 2° would be considered ‘significant’), 87% of the Earth has not experienced significant temperature changes between the pre-1960 period and the 2000′s.
So how did I come to this conclusion? If you have the time you can find out by reading below the fold.
I have been working on this post for about a week now, testing a hypothesis I have regarding the raw temp data vs. the overly processed CRU, GISS, NCDC, and IPCC results (the processed data shows dramatic global warming in the last century). I have been of the opinion that the raw temp data tells a different, cooler story than the processed data. My theory is that alarmists’ results do not track well with the raw data, and require the merging of unproven and extremely inaccurate proxy data to open the error bars and move the trend lines to produce the desired result. We have a clear, isolated example from New Zealand, where cherry-picked data and time windows have resulted in a ridiculous ‘data merging’ that completely obliterates the raw data.
To pull this deception off on a global scale, as I have mentioned before, requires the alarmists to deal with two inconvenient truths:
- The warm periods in the 1930′s and 1940′s which were about the same as today
- The current decline in temperature, just when the alarmists require a dramatic increase to match the rising CO2 levels.
What is needed out the back end of this alarmist process is a graph like the one we have from NCDC, where the 1930′s-1940′s warm periods are pushed colder and the current temps are pushed higher.
[image offline]
People have found actual CRU code that does this, and it does it by smearing good temp data with inaccurate proxy data (in this case the tree rings) or hard coded adjustments. The second method used by alarmists is to just drop those inconvenient current temps showing global cooling, which has also been clearly discovered in the CRU data dump.
I have been attempting to compensate for the lack of raw temperature data by using the country-by-country graphs dumped with data from the University of East Anglia’s Climate Research Unit (CRU). The file is named idl_cruts3_2005_vs_2008b.pdf, which tells me this is the latest version of the CRU raw temp data, run in prep for a new release (the PDF file was created in July 2009).
I am very confident this data is prior to the heavy-handed corrections employed by CRU and its cohorts. The fact is, you can see a lot of interesting and telling detail in the graphs. Much of the Pacific Ocean data has been flipped since 2005, trying to correct prior errors, and you can see the 2008 data trend way downward in most of the graphs. In addition, the 1930′s-1940′s warm periods have not been squelched yet. The alarmists have not had a chance to ‘clean up’ this data for the general public (which is one reason I think it was in the dump).
Before we get to actual examples and my detailed (and way too lengthy) analysis, I need to explain the graphs and how I used them (click to enlarge).
[image offline]
In this graph we see the primary data we have available from CRU. This is a comparison of the 2005 runs in black and 2008 runs in light purple/red. At CRU all the data is blocked into quarters. This graph is MAM, which stands for March-April-May, for Argentina.
The love of trend lines and averaging by CRU and other alarmists is quite telling here. The ‘raw’ quarterly data is marked with the blue arrows; it is the highly variable line from which the (much less accurate) trend lines are generated. I point this out to note the fact that creating a quarterly value for a country for a given year means the raw daily temp data has already disappeared under a mountain of averaging. Day/night temps must be combined into quarterly temps by location and then combined into a country-wide figure. Even with all this inaccuracy added in, the ‘raw’ data is quite dynamic, which makes me wonder how dynamic the true sensor data is. CRU and others believe the trend lines mean something significant – but really all they do is mask the true dynamics of nature.
Anyway, now let me explain how I derived (by eye – ugh!) the two primary pieces of data I used to test my hypothesis that the 2000′s are not significantly warmer or cooler than the pre-1960 period (when CO2 levels were drastically lower). Here is how I measured the peak-to-peak change in each of the graphs (click to enlarge):
I simply find the highest pre-1960 peak and the highest point in the 2000′s and subtract. I know this is subjective and error-prone, but it is good enough for a ‘reasonableness test’. I would have preferred to use actual data and define min/max points for each time period and compare. But this is what happens when you don’t share the raw data, as true science demands.
Note I am using the 2005 trend line. I have noticed many graphs where the 2008 line would have given my hypothesis more strength, and maybe some day I will compute that version. I also know there were higher peaks prior to 2000 (especially around 1998). In fact I found myself averaging the slide from 1998 into the 2000′s many times. I tried to err on the alarmists’ side (it is my hypothesis to prove, after all). Also please note that the ‘raw’ yearly data bounces around well beyond all trend line peaks – so I am not too concerned with the fact that some peaks are skipped. The next calculation will better explain why.
The P2P data is captured in my results file [offline] as shown (click to enlarge):
Note: I am trying to find a way to get a clean spreadsheet up so folks can copy out the data.
Anyway, what I did was compute the P2P value for each quarter for each country, and then averaged those over the full ‘year’. Then I applied a three-way significance test to see whether the averaged P2P value is (1) less than -0.5°C, (2) within the +/- 0.5°C range, or (3) greater than +0.5°C.
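For what it’s worth, the three-way test described above can be sketched in a few lines of Python. The threshold is the post’s 0.5°C; the quarterly P2P values passed in are invented, since AJ read his off the CRU country charts by eye:

```python
# Sketch of the three-way P2P significance test, with the post's
# 0.5 C threshold. The quarterly P2P inputs below are invented.

def classify_p2p(quarterly_p2p, threshold=0.5):
    """Average per-quarter peak-to-peak changes and bin the result."""
    mean_p2p = sum(quarterly_p2p) / len(quarterly_p2p)
    if mean_p2p < -threshold:
        return "significant cooling"
    if mean_p2p > threshold:
        return "significant warming"
    return "within measurement uncertainty"

# Four quarterly P2P readings (DJF, MAM, JJA, SON) for one country
print(classify_p2p([0.3, -0.1, 0.2, 0.0]))  # -> within measurement uncertainty
```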
I decided to use this significance test because of another file dumped with the CRU data, in which CRU clearly stated its measurement accuracy was typically 1°C or greater. Here is the CRU report from 2005 containing their accuracy claims, along with their own global graph of temperature accuracy:
In my original post on these files I went into great detail on the aspect of measurement accuracy (or error bars) regarding alarmists’ claims. I will not repeat that information here, but I feel I am being generous giving the data a +/- 0.5°C margin of error on a trend line (which has multiple layers of averaging error incorporated in it). Most of the CRU uncertainty data, as mapped on the globe, is above the 1°C uncertainty level.
What that really means is that detecting a global warming increment of 0.8°C is not statistically possible. If I had used their numbers, none of the raw temps would have been significant, which is why people do these back-of-the-envelope tests: to determine whether we have sufficiently accurate data to test our conclusions or hypotheses.
===========
Read the conclusion here: CRU Raw Temp Data Shows No Significant Warming Over Most Of The World
h/t to Joe D’Aleo
Meanwhile, whilst we eagerly await the graphs, may I offer some alternative content c/o good-old Auntie (the BBC):
‘Climate Change is shrinking sheep’
http://news.bbc.co.uk/1/hi/sci/tech/8130907.stm
‘Stress’ is shrinking polar bears’
http://news.bbc.co.uk/1/hi/sci/tech/8214673.stm
No report required for the phenomena of ‘shrunken heads’…… 😉
I’m using Firefox 3.6.12, and I can’t load the Strata-Sphere site either from the links or from my bookmark. I have contacted four friends here in England and all report the same unavailability of the site. They are using IE, Chrome and, like me, Firefox.
The site is just not responding or else access has been denied by other means.
Here is the link.
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2010/11/26/BAVP1GGLE2.DTL
I keep trying to provide a link to the SFGate article on Lake Tahoe but it does not work.
REPLY: Stop trying too hard, don’t use tags. Just put in the URL like this:
http://someplace.com/article.htm
Wordpress will auto link it. – Anthony
Question: How did this post by AJStrata which is full of downright errors manage to pass any form of cursory review?
I assume there are people running WUWT who are competent enough on the subject of surface temperature records to be able to spot the numerous errors made in the post.
So why were they left uncorrected? Or is my assumption wrong?
REPLY: Well, “onion”, nobody could even see the post graphics to discuss it, since his server died soon after posting an excerpt here, can’t you read the note? And since you are obviously keen on those errors, how about writing them down? Also, how’s the weather there in England? Early snow there? – Anthony
Anthony, speaking of tags – previously on Tips & Notes I tried using the syntax shown below the comment box, quite unsuccessfully. Nothing showed up on the post where I’d tried inserting the URL & title. So I checked online with some HTML tutorial type sites, and it seems the most common format is: <a href="URL">title to display here</a>. Using that format worked like a charm when I reposted using it. Perhaps consider changing the tag syntax shown below the comment box?
My best to you and all the moderators!
REPLY: No, can’t. wp.com hosted blog – Anthony
I don’t need to see the graphics, the errors are throughout his words. Like his bizarre attempt to imply the temperature records show cooling and were replaced with warming tree ring data.
What form of review does WUWT do on posts? Do you just kinda let whatever fly? Is it the commenters who have to figure out the errors in “discussion”? They don’t seem to get it; the first 50 comments always seem to contain a fair number of folk who take the article as gospel, as if it’s been checked.
Perhaps you should mark the title of posts as “preliminary – the following articles are not reviewed and may contain errors”
How many blogs need to compare GHCN raw with Hadcrut/GISTEMP before the silly idea that the raw data shows a much cooler result finally dies?
Sorry if you don’t like this but I am just dealing with your blog like you deal with scientists – by demanding you don’t hide the uncertainty.
REPLY: And yet, it is certain (so far) that you can’t bring yourself to write out the specific errors. Again: How’s the weather in England?, I’m curious if you have snow on the ground there related to another post I’m doing. – Anthony
re: Rational Debate says: November 26, 2010 at 2:03 pm
….REPLY: No, can’t. wp.com hosted blog – Anthony
Oh! I shoulda known. Is there a general wordpress addy we can submit suggestions such as this and/or complain to them for you/us about stuff like this?
Why do you keep posting this AGW propaganda? They make their bias well known
Also, anybody that prevents copying from a document like this one does is up to no good.
Anthony – Thanks for the link. Just curious about what service you use to host your blog. We are on a single server at Hosting Matters and I am guessing we need to ramp up our horse power (or host power).
You can reach me at ajstrata@strata-sphere.com
We will keep an eye on things and reboot when necessary. Everyone please be patient, we have zero revenue and fund the site out of our own pocket. It will be back, just keep checking in.
REPLY: I’m on the free wordpress.com hosting service, which can handle domain name redirects for you, $12 per year. You can export your content and move it, takes a little work, but you get the benefit of distributed cloud computing at virtually zero cost, plus free maintenance, automated backups, and free software upgrades installed automatically.
Single-server boxes can’t handle the load, which is why last year, right after Climategate broke, I helped Steve McIntyre move from a single box to wordpress.com, since his box acted like yours under heavy load. He hasn’t had a single problem since.
Get started here:
http://en.wordpress.com/features/
– Anthony
If AJStrata is right it would be good news, not just for us but also for the planet. Please could someone explain the source of the raw data, I don’t understand the references to it. (CRU data dump)
Steven Mosher: you used GHCN data and followed the CRU method and got the same answer as CRU. Is GHCN the source used by AJStrata? Does it not follow that if you use the same method as CRU you should get the same answer as CRU? Is the method reasonable (as opposed to being crooked)? You obviously disagree with the current post – can you explain why? Surely the point is that the raw data tells a different story to the processed data? This has been covered before by Chiefio and others, including New Zealand, Darwin Zero and other examples.
I’m just trying to get some agreed facts on the table here… (not usually possible with climate science).
Rational Debate says:
November 26, 2010 at 12:56 pm
We have, of course, questions about the validity and uncertainty of the actual temperature measurements themselves (e.g., the surfacestations project, etc.).
Aside from that, however, some say that using the same methods as the CRU etc., they get about the same results. OK, fine – but the big question there is, are all the various adjustments to the temp data applied really reasonable and justified? Was that also addressed in the reconstructions of the CRU temp results – or were those reconstructions just tests of repeatability of the methods CRU used?
###########
1. Using the same source data as CRU (unadjusted)
2. Using random subsets of the same source data
3. Using data from different networks, raw unadjusted
4. Using estimates of surface data from atmospheric data
5. Using satellite data
6. Using the same data as CRU with adjustments
7. Using the same stations as CRU but a different archive (UCAR)
ALL of these have been done by different people from both sides of the climate debate fence.
1. Using code supplied by CRU
2. Writing code from scratch using descriptions of CRU algorithms
3. Writing new algorithms employing superior statistical approaches
All of these have been done by independent parties, in different languages, on different operating systems.
In no case has anyone calculated results that differ substantially from CRU.
The real issues, if there are any, lie elsewhere.
Rational Debate says:
November 26, 2010 at 12:52 pm
I’m sure you know by now that the models show what we want them to show 🙂
Statistical significance is the badge of true scientific research.
Badges? Badges? We don’t need no badges! We are the Climate Police, we don’t need no stinking badges!
Steven Mosher: you used GHCN data and followed the CRU method and got the same answer as CRU. Is GHCN the source used by AJStrata? Does it not follow that if you use the same method as CRU you should get the same answer as CRU? Is the method reasonable (as opposed to being crooked)? You obviously disagree with the current post – can you explain why? Surely the point is that the raw data tells a different story to the processed data? This has been covered before by Chiefio and others, including New Zealand, Darwin Zero and other examples.
I’m just trying to get some agreed facts on the table here… (not usually possible with climate science).
##### see my post above
1. Is GHCN the source used by AJStrata?
He does not say. If he posted code showing where he downloaded the data, I could obviously check. As with the last incident with Australian data (which was due to faulty downloads and a misunderstanding of how GHCN calculates V2max), I have no desire to spend my time reviewing work in detail where there is no code and where the data analysis process is not clearly laid out. It is hard enough reconstructing what CRU and GISS do, and in those cases they explain most of their steps (but not all). I believe AJ cites taking data from country files. There is a problem with that approach: you cannot make statements about a global average without area weighting BY MONTH. Simply put, you cannot, for example, average the temperature in Liechtenstein with the average temperature in China. Area matters. That is why we area-average. There are more problems with the approach, but I’m busy on nightlights. Sorry.
2. Does it not follow that if you use the same method as CRU you should get the same answer as CRU?
A. Personally, I use an improved method and got warmer temps.
B. Inputs are different; I use unadjusted data.
C. Using the same data, JeffId, with a different method, gets the same result.
D. Using different data and a similar method, Ron Broberg gets the same result.
Shall I go on?
You obviously disagree with the current post – can you explain why?
A while back I took a couple days to figure out the mistake some guy made looking at GHCN data. Utter waste of my time. In any case there is a new data set with 30-40K stations in it that I would rather spend my time on. The data source of tmp files for country-by-country seasonal data is the first place I would start my investigation. But since I cannot replicate AJ’s work, while I can replicate JeffId’s, Zeke’s, Tamino’s, GISS, CRU, and Ron Broberg’s, I’m not inclined to go ferret out the reason.
Surely the point is that the raw data tells a different story to the processed data?
Wrong. For example, using GCOS data, which is totally raw (way more stations than GHCN), we get the same answer... small differences, 1/10th here or there. Plus I use unadjusted GHCN. CRU add some adjustments. They are minor.
To be sure, you can find isolated cases where the adjustments are big; in the grand scheme of things they vanish, because for the most part positive adjustments cancel the negative. Again, you can prove this to yourself by writing the code to download the unadjusted data and take a world average.
re post by: Steven Mosher says: November 26, 2010 at 2:59 pm
Steven, thanks much for the reply!
“The second method used by alarmists is to just drop those inconvenient current temps showing global cooling,”
———————————————————————————————————-
But Mr Briffa says the cooling must be “bad” data (i.e. because it doesn’t fit the preconceived conclusion). See the Montford book. The accepted methodology used by the “Hockey Stick League”.
**** Steven Mosher says:
November 26, 2010 at 3:29 pm
you cannot make statements about a global average without area weighting BY MONTH. simply you cannot for example average the temperature in Licchtenstein with the average temperature in china. Area matters. That is why we area average. there are more problems with the approach, I’m busy on nightlights. sorry.
*****
If you would suffer one more …
Would it be valid to compute the delta temperature, say per month, for each long-record station and just average those together? Wouldn’t that give a more reliable answer than with all the data massaging?
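What the question describes is essentially a “first differences” approach: average each station’s year-over-year change, then accumulate the averaged deltas into an index. A toy sketch with invented station data (a real analysis would still need area weighting and missing-data handling, per the reply above):

```python
# Sketch of the "first differences" idea: average each station's
# year-over-year delta, then accumulate the means into an index.
# The two station series below are invented.

stations = {
    "A": [10.0, 10.2, 10.1, 10.5],
    "B": [ 5.0,  5.1,  5.3,  5.4],
}

n_years = len(next(iter(stations.values())))

# Mean across stations of each year-to-year delta
mean_deltas = [
    sum(s[i + 1] - s[i] for s in stations.values()) / len(stations)
    for i in range(n_years - 1)
]

# Accumulate into an index relative to the first year
index = [0.0]
for d in mean_deltas:
    index.append(index[-1] + d)

print([round(v, 2) for v in index])  # -> [0.0, 0.15, 0.2, 0.45]
```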
I think Steven Mosher answered several of my questions when addressing Rational Debate. The outstanding one is: The difference between the raw and processed data as discussed by AJStata – is it a valid point or not?
I guess the implication from Steven’s other comments is that the CRU adjustments are reasonable and therefore the end results (global temperatures) are correct? I detect a slight uncertainty about the raw data (perhaps UHI and various adjustments etc) and I suppose that the main uncertainty is whether recent warming is man-made or natural. Is that a fair summary? Given the Harry ReadMe file I would love to find confirmation that AJS is correct… but I have no reason to doubt Steven. Can we clarify (or debate) things further from this point?
My last comment was posted before I saw the most recent posts.
Many thanks for your response which I shall read now…
Other problems:
1. The statement of measurement accuracy is wrong.
2. The CRU document referenced refers to sampling errors due to spatial coverage.
those errors range from 0 to 5 degrees (+-5) the majority of the surface has small errors due to sampling (lots of stations) some parts of the world have fewer stations
(more uncertainty) To adjust for this CRU output a variance adjusted time series
( the one people should use) Variance adjustment takes into account how many stations lie in an area.
AJ’s approach is, as he notes, subjective. He downloaded charts. We have no idea of the processing that happens prior to these charts or after them. For example:
1. Calculating a reference-period anomaly. This removes stations that have short periods.
2. 5-sigma outlier removal.
3. Area weighting.
4. Land masking.
5. Variance adjusting.
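Two of the steps listed above can be sketched with invented numbers: converting a station series to anomalies against a reference period, then dropping 5-sigma outliers. The real CRU processing is far more involved than this:

```python
# Sketch of reference-period anomalies plus 5-sigma outlier removal,
# using an invented station series. Not CRU's actual code.
import statistics

temps = [14.1, 14.3, 13.9, 14.0, 14.2, 99.9, 14.4]  # 99.9 is a bad reading
ref = temps[:5]                                      # reference period
baseline = statistics.mean(ref)

anomalies = [t - baseline for t in temps]

# Estimate sigma from the reference period only, so a huge outlier
# cannot inflate sigma and thereby mask itself
sigma = statistics.stdev([t - baseline for t in ref])
clean = [a for a in anomalies if abs(a) <= 5 * sigma]

print(f"{len(anomalies) - len(clean)} outlier(s) removed")
```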
One reason you cannot merely take the P2P of various countries and average them is this:
A. Country A is huge. It has 1200 stations; it covers millions of square miles.
B. Country B is small. It has one station; it covers a few square miles.
To handle this challenge people perform area weighting. The second part of the challenge involves calculating a weight that is a function of the number of stations in an area AND the actual number of station-months in the data. A while back an error was found in the way CRU did its variance adjustment. That was communicated to them by the programmer who found it.
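The area-weighting point can be illustrated with a toy example (countries, areas, and anomalies all invented): a naive average of country means lets a tiny country count as much as a huge one, while an area-weighted average does not:

```python
# Toy illustration of why country averages must be area weighted.
# Countries, areas (arbitrary units), and anomalies are invented.
countries = [
    ("Huge", 1000.0, 0.1),  # large area, small anomaly
    ("Tiny",    1.0, 2.0),  # small area, big anomaly
]

naive = sum(t for _, _, t in countries) / len(countries)
weighted = sum(a * t for _, a, t in countries) / sum(a for _, a, _ in countries)

print(f"naive mean:    {naive:.2f}")   # Tiny contributes half the average
print(f"area weighted: {weighted:.3f}")  # Huge dominates, as it should
```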
Finally, P2P isn’t a very robust method for finding trends, especially in peaky data with underlying long-term trend terms. If you think that (max+min)/2 is a bad measure of average temperature for a day, a P2P measure over decades throws away a ton of information. That is why we don’t measure the trend that way.
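The fragility of P2P can be demonstrated with a synthetic series: one anomalously hot year (think 1998) dominates a peak-to-peak estimate but barely moves an ordinary least-squares trend. All numbers here are invented:

```python
# Compare an OLS trend with a peak-to-peak rate on a synthetic series:
# a steady 0.02 C/yr trend plus one anomalous spike.
years = list(range(40))
temps = [0.02 * y for y in years]  # steady underlying trend
temps[30] += 1.5                   # a single anomalous spike (a "1998")

n = len(years)
mx = sum(years) / n
my = sum(temps) / n
slope = sum((x - mx) * (t - my) for x, t in zip(years, temps)) / \
        sum((x - mx) ** 2 for x in years)

# P2P: highest value in the early half vs the late half, over the
# 20-year window offset (mimicking the pre-1960 vs 2000's comparison)
early_peak = max(temps[:20])
late_peak = max(temps[20:])
p2p_rate = (late_peak - early_peak) / 20.0

print(f"OLS trend: {slope:.3f} C/yr")    # stays close to 0.02
print(f"P2P rate:  {p2p_rate:.3f} C/yr")  # several times larger
```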
When I did my Illinois, Wisconsin, and Iowa blink charts comparing USHCN’s original raw data vs their ‘improved’ Version 2 raw data, several things became apparent.
The typical original chart showed a warm 1930’s, a cool 1960’s, and then a warm present, peaking at the perfect storm year of 1998, which was roughly the same height as the 1930’s peak. Some stations showed an overall up trend, some down, some undecided.
Overall, the revised charts did three things.
1. Gave the majority of charts an upward trend, almost always by lowering past raw temperatures.
2. Lowered 1998.
3. Lowered the 1930’s peak, often below 1998.
Intentional or not, these alterations had the following effects –
1. A region that showed no conclusive trend in the raw data now had a definite warming trend.
2. 1998, which was such an anomalously high peak that it would be hard to exceed in the future, was now less of a challenge.
3. 1934, which stateside was the warmest year of the century and an embarrassment to claims of ‘unprecedented’ recent warming, was now less than 1998.
Because average temperatures first sum the raw data, you can raise an early temperature 2° at one station and still appear even-handed, as long as you lower three other stations 1°. We can certainly find many stations where the alterations revise the trend downward, but over the entire set of three states, the great majority went upward.
for Steven Mosher (or, actually, anyone who knows the answer to this one of course)…
Steven, is there by chance a single page you know of that links to each of the various reconstructions of global temperature that you mentioned to me? Or even a couple that pick up most of them?
Time for the Matt Briggs hammer:
Here is how AJ ‘measures’ P2P:
http://strata-sphere.com/blog/wp-content/uploads/P2P_Calculation.gif
He takes a visual reading of a smoothed line, and then he calculates statistics on the smoothed data. That’s what we technically call a “no-no”, primarily because to smooth the line you apply a model to the data, and you are then no longer working with data; you are working with a model of the data. And without knowledge of the parameters and error structure of the model, you’ve got bupkiss.
It’s unclear what smooth was used. A polynomial smooth? A causal filter? 30-year? An acausal filter with end-point reflection (CRU used something like this in the past; it caused a minor kerfuffle)? Dunno.
As others have pointed out, the Met Office in the UK is claiming that this year will turn out to be either the hottest or one of the hottest on record. That certainly does not apply to the UK, as anyone over the age of about 50 will know. However, Britain constitutes only a tiny part of the world’s land mass, which itself is much smaller in area than the world’s seas and oceans. Therefore the Met Office could be correct about the world as a whole even though this certainly is not one of the hottest years for the UK.
However, if 2010 really is, in global terms, one of the hottest years on record then there must be a lot of places where this year really has been a scorcher. Where are these places? I know that large parts of Russia had a very hot summer but what was the rest of the year like there? And even though Russia is the biggest country in the world can it possibly be big enough to make this a record breaking year unless other countries have also had record breaking heat? Or could the other hot areas all be in the oceans?