UPDATE: The Strata-Sphere server can’t handle the load of interest, so I’ve taken the images offline from this article and disabled the link to it. Once he gets the server up and running again I’ll put them back – Anthony
Readers may recall this quote from Dr. Phil Jones of CRU, by the BBC:
Q: Do you agree that from 1995 to the present there has been no statistically-significant global warming?
A: Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.
A.J. Strata has done some significance tests:
CRU Raw Temp Data Shows No Significant Warming Over Most Of The World
Published by AJStrata at StrataSphere
Bottom Line – Using two back-of-the-envelope tests for significance against the CRU global temperature data I have discovered:
- 75% of the globe has not seen significant peak warming or cooling changes between the period prior to 1960 and the 2000′s that rise above a 0.5°C threshold, which is well within the CRU’s own stated measurement uncertainties of +/- 1°C or worse.
- Assuming a peak-to-peak change (pre 1960 vs the 2000′s) should represent a change greater than 20% of the measured temperature range to be meaningful (i.e., if the measured temp range is 10° then a peak-to-peak change greater than 2° would be considered ‘significant’), 87% of the Earth has not experienced significant temperature changes between the pre 1960 period and the 2000′s.
So how did I come to this conclusion? If you have the time you can find out by reading below the fold.
I have been working on this post for about a week now, testing a hypothesis I have regarding the raw temp data vs the overly processed CRU, GISS, NCDC, IPCC results (the processed data shows dramatic global warming in the last century). I have been of the opinion that the raw temp data tells a different, cooler story than the processed data. My theory is that the alarmists’ results do not track well with the raw data, and require the merging of unproven and extremely inaccurate proxy data to open up the error bars and move the trend lines to produce the desired result. We have a clear isolated example from New Zealand where cherry-picked data and time windows have resulted in a ridiculous ‘data merging’ that completely obliterates the raw data.
To pull this deception off on a global scale, as I have mentioned before, requires the alarmists to deal with two inconvenient truths:
- The warm periods in the 1930′s and 1940′s which were about the same as today
- The current decline in temperature, just when the alarmists require a dramatic increase to match the rising CO2 levels.
What is needed out the back end of this alarmist process is a graph like we have from NCDC, where the 1930′s-1940′s warm periods are pushed colder and the current temps are pushed higher.
[image offline]
People have found actual CRU code that does this, and it does it by smearing good temp data with inaccurate proxy data (in this case the tree rings) or hard coded adjustments. The second method used by alarmists is to just drop those inconvenient current temps showing global cooling, which has also been clearly discovered in the CRU data dump.
I have been attempting to compensate for the lack of raw temperature data by using the country-by-country graphs dumped with data from University of East Anglia’s Climate Research Unit (CRU). The file is named idl_cruts3_2005_vs_2008b.pdf, which tells me this is the latest version of the CRU raw temp data run in prep for a new release of the latest data (the PDF file was created in July 2009).
I am very confident this data is prior to the heavy handed corrections employed by CRU and its cohorts. The fact is you can see a lot of interesting and telling detail in the graphs. Much of the Pacific Ocean data has been flipped since 2005 trying to correct prior errors and you can see the 2008 data trend way downward in most of the graphs. In addition, the 1930′s-1940′s warm periods have not been squelched yet. The alarmists have not had a chance to ‘clean up’ this data for the general public (which is one reason I think it was in the dump).
Before we get to actual examples and my detailed (and way too lengthy) analysis, I need to explain the graphs and how I used them (click to enlarge).
[image offline]
In this graph we see the primary data we have available from CRU. This is a comparison of the 2005 runs in black and the 2008 runs in light purple/red. At CRU all the data is blocked into quarters. This graph is MAM, which stands for March-April-May, for Argentina.
The love of trend lines and averaging by CRU and other alarmists is quite telling here. The ‘raw’ quarterly data is noted with the blue arrows. These highly variable lines are what the (much less accurate) trend lines are generated from. I point this out to note the fact that creating a quarterly value for a country for a given year means the raw daily temp data has already disappeared under a mountain of averaging. Day/night temps must be combined into quarterly temps by location and then combined into a country-wide figure. Even with all this inaccuracy added in, the ‘raw’ data is quite dynamic, which makes me wonder how dynamic the true sensor data is. CRU and others believe the trend lines mean something significant – but really all they do is mask the true dynamics of nature.
Anyway, now let me explain how I derived (by eye – ugh!) the two primary pieces of data I used to test my hypothesis that the 2000′s are not significantly warmer or cooler than the pre 1960 period (when CO2 levels were drastically lower). Here is how I measured the Peak-to-Peak change in each of the graphs (click to enlarge):
I simply find the highest pre 1960′s peak and the highest point in the 2000′s and subtract. I know this is subjective and error prone, but it is good enough for a ‘reasonableness test’. I would have preferred to use actual data and define min/max points for each time period and compare. But this is what happens when you don’t share the raw data, as true science demands.
Note I am using the 2005 trend line. I have noticed many graphs where the 2008 line would have given my hypothesis more strength, and maybe some day I will compute that version. I also know there were higher peaks prior to 2000 (especially around 1998). In fact I found myself averaging the slide from 1998 into the 2000′s many times. I tried to err on the alarmists’ side (my hypothesis to prove, after all). Also please note that the ‘raw’ yearly data bounces around well beyond all trend line peaks – so I am not too concerned with the fact that some peaks are skipped. The next calculation will better explain why.
The P2P data is captured in my results file [offline] as shown (click to enlarge):
Note: I am trying to find a way to get a clean spreadsheet up so folks can copy out the data.
Anyway, what I did was compute the P2P value for each quarter for each country, and then averaged those over the full ‘year’. Then I applied three significance tests to see if the P2P value is (1) less than -0.5°C, (2) within the +/- range of 0.5°C or (3) greater than +0.5°C.
I decided to use this significance test because of another file dumped with the CRU data which clearly showed where CRU stated its measurement accuracy was typically 1°C or greater. Here is the CRU report from 2005 containing their accuracy claims, along with their own global graph of temperature accuracy:
In my original post on these files I went into great detail on the aspect of measurement accuracy (or error bars) regarding alarmists’ claims. I will not repeat that information here, but I feel I am being generous giving the data a +/- 0.5°C margin of error on a trend line (which has multiple layers of averaging error incorporated in it). Most of the CRU uncertainty data, as mapped on the globe, is above the 1°C uncertainty level.
What that really means is that detecting a global warming increment of 0.8°C is not statistically possible. If I had used their numbers none of the raw temps would have been significant, which is why people do these back-of-the-envelope tests to determine whether we have sufficiently accurate data to test our conclusions or hypotheses.
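For readers who want to replicate the bookkeeping, here is a minimal sketch of the two back-of-the-envelope tests described above (the function names and sample values are mine, not AJStrata’s; the quarterly numbers are invented):

```python
# Toy sketch of the two significance tests; all numbers invented.

def classify_p2p(p2p_quarters, threshold=0.5):
    """Average the quarterly peak-to-peak changes for a country and
    bucket the result against a +/- threshold (deg C)."""
    mean_p2p = sum(p2p_quarters) / len(p2p_quarters)
    if mean_p2p < -threshold:
        return "significant cooling"
    if mean_p2p > threshold:
        return "significant warming"
    return "within measurement noise"

def range_test(p2p, temp_range, fraction=0.2):
    """Second test: does the peak-to-peak change exceed 20% of the
    country's measured temperature range?"""
    return abs(p2p) > fraction * temp_range

# Hypothetical country: four quarterly P2P values (deg C)
quarters = [0.3, -0.1, 0.4, 0.2]
print(classify_p2p(quarters))               # -> within measurement noise
print(range_test(sum(quarters) / 4, 10.0))  # 0.2 vs a 2.0 bar -> False
```

Run per country and tallied over the globe, those two buckets are what produce the 75% and 87% figures quoted at the top.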
===========
Read the conclusion here: CRU Raw Temp Data Shows No Significant Warming Over Most Of The World
h/t to Joe D’Aleo
Mike McMillan says:
November 26, 2010 at 4:17 pm
You are most likely seeing the effect of the TOBS adjustment. This adjustment is required; the raw temps are wrong without it. That has been demonstrated by committed skeptics of global warming, time after time.
I accept (reluctantly) your comments, Steven, about the gridded temperatures being more or less correctly calculated. I concede that Jeff Id would support this view. You guys are much more involved with the science than I am. I would love to hear you comment on where you think the big issues are with respect to AGW belief.
I guess my answer to my own question is that I don’t buy that water vapour has positive feedback only – I agree with Roy and also with H Svensmark on clouds being the earth’s sunshade and thermostat. In other words, I believe that much of the recent warming has been natural and perhaps we are moving towards global cooling for a while. I’m also concerned about UHI at the measurement end, and I’m very suspicious about raw data corrections though I don’t know where that ends and gridding begins. Perhaps from what you say, the data corrections are valid.
I would welcome your comments on the big picture, and I guess others may be interested also.
http://www.bloomberg.com/news/2010-11-26/world-may-record-warmest-year-as-u-k-meteorological-office-adjusts-data.html
Headline: “World May Record Warmest Year as U.K. Meteorological Office Adjusts Data”
Whatever year it may be, it ought to be a cold summer, with the Thames freezing over, the year the warmists announce the most shocking warmest year on record. Anything else would leave me perplexed and shocked! 😉
ScientistForTruth says:
November 26, 2010 at 9:49
I expect it is timed to align with the fact that the whole of UK is in the grip of severe winter weeks earlier than normal.
————
Weird logic. Seems to be a re-run of the “its snowing in my backyard, so that means the whole planet must be entering an ice age” story line that was popular last winter.
And of course my favorite “how dare those scientists insult my intelligence by claiming the global temperatures are high when clearly it’s cold where I live”.
TonyK says:
November 26, 2010 at 11:17 am
So exactly where is it warming? Oh yes, the continental U.S. where there are all those thermometers next to aircon outlets! YOU HAVE TO BE JOKING!
————–
Weird logic. The ever popular my backyard is the entire world idea.
As to the pseudo question: Where is it warming? Errr! Try the oceans for a start. They don’t tend to have air conditioners.
Surely all this misses the point – temperature especially ‘average temperature’ is the wrong metric.
The entire AGW hypothesis is based on trapping HEAT. Heat does not equal temperature in the atmosphere, due to the presence of water vapor, which raises the enthalpy of a volume of air by nearly two orders of magnitude. Simply put, dry air changes temperature with far less heat than humid air.
What should be measured is the ‘heat content’ of the atmosphere, which requires knowledge of the humidity as well as temperature. Given hourly figures for humidity and temperature, the hourly heat content could be totaled into a day’s heat content.
This would be far more reliable than temperature and would be far more easily linked to SST and GHG effects (if any).
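Ian W’s ‘heat content’ point can be made concrete with the standard psychrometric enthalpy approximation (the coefficients below are textbook values, not taken from the comment):

```python
def moist_air_enthalpy(temp_c, mixing_ratio):
    """Approximate specific enthalpy of moist air (kJ per kg of dry air)
    using the standard psychrometric formula:
        h = 1.006*T + w*(2501 + 1.86*T)
    temp_c: dry-bulb temperature in deg C
    mixing_ratio: kg of water vapour per kg of dry air
    """
    return 1.006 * temp_c + mixing_ratio * (2501.0 + 1.86 * temp_c)

# Same 30 deg C air, bone dry vs muggy (w = 0.020 kg/kg is tropical air)
dry = moist_air_enthalpy(30.0, 0.0)      # ~30.2 kJ/kg
humid = moist_air_enthalpy(30.0, 0.020)  # ~81.3 kJ/kg
print(dry, humid)
```

At the same 30°C, the muggy sample carries well over twice the heat of the dry one, which is the commenter’s point about temperature alone being the wrong metric.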
Ian W says:
November 26, 2010 at 5:41 pm
You’re flogging a dead horse mate. I agree with you but no-one else seems to be listening. 🙁
As for the ‘local climate isn’t world climate’ argument: the world climate (if indeed there is such a thing) is made up of lots of local climates. Mine has done nothing unusual & I don’t know anyone else whose local climate has done anything unusual, so it’s probably a load of balls!
DaveE.
I think the rest of the world that has been warmer than where all the people live is the sea surface and parts of the Arctic with no temperature measurements (so they, or at least GISS, extrapolate whatever they want into the Arctic while disregarding the DMI 80°N estimates, which are based on actual measurements, because they’re inconvenient).
Also, there was an El Nino. I seem to remember something about satellites biasing sea surface temperatures high in El Nino. If so, surely this is taken into account? Or, maybe they don’t use satellite data??? More likely, they use satellite data when it’s convenient.
Other factors can affect glaciers more than temperature. A few are precipitation, humidity, and soot.
I do not understand why the data is still being used to create arguments. My understanding from reading the HARRY_READ_ME files (thank you Anthony, 18/11 post) is that the data collection(s) met few known standards, i.e. there was no hypothesis being tested. Also noted Ian W and David A Evans comments.
http://www.anenglishmanscastle.com/HARRY_READ_ME.txt
Word search of 235 pgs HARRY txt
c**p: 3, h*ll: approx 15, f**k: 3, metadata: 50, try: 73, anomalies: >100
________________________________________________________
‘… then user decision TMin database to take precedence in terms of station metadata.’
_________________________________________________________
7. ….(we know the first number is the lon or column, the second the lat or row – but which way up are the latitudes? And where do the longitudes break?
There is another problem: the values are anomalies, wheras the ‘public’.grim files are actual values. So Tim’s explanations (in _READ_ME.txt) are incorrect..
___________________________________________________________
8. Had a hunt and found an identically-named temperature database file which did include normals lines at the start of every station. How handy – naming two different files with exactly the same name and relying on their location to differentiate! Aaarrgghh!! Re-ran anomdtb:
_______________________________________________________
I really thought I was cracking this project. But every time, it ends up worse than before.
________________________________________________________
but I have an eerie feeling that I won’t experience joy when headers are compared between parameters :/
Wrote metacmp.for. It accepts a list of parameter databases (by default, latest.versions.dat) and compares headers when WMO codes match. If all WMO matches amongst the databases share common metadata (lat, lon, alt, name, country) then the successful header is written to a file. If, however, any one of the WMO matches fails on any metadata – even slightly! – the gaggle of disjointed headers is written to a second file. I know that leeway should be given, particularly with lats & lons, but as a first stab I just need to know how bad things are. Well, I got that:
crua6[/cru/cruts/version_3_0/update_top] ./metacmp
METACMP – compare parameter database metadata
RESULTS:
Matched/unopposed: 2435
Clashed horribly: 4077
Ouch! Though actually, far, far better than expected. As for the disport of those 2435:
_________________________________________________________
I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO and one with, usually overlapping and with the same station name and very similar coordinates. I know it could be old and new stations, but why such large overlaps if that’s the case? Aarrggghhh!
There truly is no end in sight. Look at this:
____________________________________________________________
This still meant an awful lot of encounters with naughty Master stations, when really I suspect nobody else gives a hoot about. So with a somewhat cynical shrug, I added the nuclear option – to match every WMO possible, and turn the rest into new stations (er, CLIMAT excepted). In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don’t think people care enough to fix ’em, and it’s the main reason the project is nearly a year late.
Hi Steven Mosher,
Is it possible that the problem of rises in measured global temperature arises because of the calculations needed to adjust local temperature data to global gridded temperature?
I am finding it difficult to see any significant rise in temperature for more than 100 years, for a number of well known locations widely scattered over the whole of the continent of Australia.
All I can garner from Pope’s comments is that the Met Office don’t know if it’s warming or not, and openly confess that they don’t know what’s going on in the sea. The cooling (which they claim isn’t known without revising data) is perfectly consistent with the models, just as warming was perfectly consistent with the models. The necessary a-priori conclusion is that this provides even stronger evidence for man-made global warming.
Presumably, the models are adjusted down and up and down again and up again, according to the (what can only be described as arbitrary) global temperature average, a posteriori, so that it can be claimed that the models predicted the cooling that should have been warming, which, after revisions, showed up as not so cooling as originally thought, but not as warming as originally projected; and on this basis, the blame can only be given to the fact that CO2 causes this pell-mell that the models were so consistent over…
Ms Pope, we are a little confused, though it was an English poet (not Pope) who said
“His notions fitted things so well, That which was which he could not tell”
Whatever happened to ARGO data?
Several people have recently commented that the ARGO readings have not appeared recently.
Surely with 70% of the earth’s surface being covered by oceans and oceans being far better heat stores than land, then all our attention should be on ARGO measurements.
What happened to ARGO measurements?
Roy says:
November 26, 2010 at 4:32 pm
The Met Office seem obsessed with the hottest on record, or the near-hottest on record, among the hottest, and the predicted hottest. Especially in the case of the latter, regarding prognostications of barbecue summers and extremely mild winters, which turned out to be extremely cold winters and cool wet summers, it could be reasoned that after the events, it was all “perfectly consistent with the models”.
From “the Australian” article that Bob Tisdale refers to:
“Dr Pope said that the evidence for man-made global warming had grown stronger in the past year. She said that it was important to look beyond the present cold snap in Britain and last year’s harsh winter in Europe and consider the global picture.
Many parts of the world had experienced very warm temperatures last year although Britain was gripped by snow and ice in the coldest winter for 30 years.
“We are starting to see changes in the climate even in the UK which we can link to global warming,” she said. “We’re seeing more heatwaves and seeing fewer of these cold winters.”
Particularly the last paragraph, does anyone else in the UK – to which Pope specifically refers – notice more heat waves and fewer cold winters (sic) in the past year? Or to give the benefit of the doubt, the past 5 years?
Many here have been upset that the CRU’s adjustments to raw data always seem to require adjustments upward. The argument is usually some form of “Given enough moves (which in a large population should average out to be random), they should average out neutral, since some will mean adjustments upward and some downward, and overall they should cancel out.”
This exercise with raw data essentially argues FROM that argument – that if the adjustments cancel out, on average (as they should) – then the raw trend and the adjusted trend should be the same, all things being equal.
ANY averaging that does not come out close to neutral should be looked upon with great suspicion: Where did they go wrong?
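The cancellation argument is easy to illustrate with a toy simulation (invented adjustment distribution, not real station data): if the adjustments really were unbiased, their network-wide average should shrink toward zero as stations are added, leaving raw and adjusted trends in agreement.

```python
import random

def mean_adjustment(n_stations, sigma=0.3):
    """Average of n random, zero-mean station adjustments (deg C)."""
    adjustments = [random.gauss(0.0, sigma) for _ in range(n_stations)]
    return sum(adjustments) / n_stations

random.seed(42)
# With only a handful of stations the average adjustment can sit well
# away from zero just by chance; with many stations it is driven toward
# zero, so any large network-wide residual suggests a systematic bias.
print(abs(mean_adjustment(10)))
print(abs(mean_adjustment(100_000)))
```

A network-wide average adjustment that stays far from zero, per the argument above, is exactly the warning sign being described.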
If you are going to area-average, don’t you owe it to yourself to take evenly distributed (over area) readings? Altitude, local climate variations, etc. all come into play also. If you are down the proverbial pike from an area formerly receiving rain from transpiration in local flora, and the flora has been replaced with asphalt, you are getting a man-made reading that has diddly to do with CO2.
How are these situations documented and corrected?
Thanks to you Anthony for this post. Please keep up the good work and I hope everything is fine with your family.
Thanks to AJStrata. All the links worked including those on Strata-Sphere.
As Aussie Dan says, there is no significant increase of raw temperature in Australian rural weather station measurements, similar to the findings for New Zealand. There have been posts on here (WUWT) showing the same for rural sites in North America and Europe.
The GISS and CRU presentations are the results of manipulation.
I can not understand the stance of Steven Mosher. Why does he want to support the data manipulation of Hansen, Jones, Mann, Briffa, Trenberth etc. and their lack of knowledge about thermodynamics, heat transfer, mathematics, statistics and other engineering subjects?
[re: USHCN Orig raw vs V2 raw]
Steven Mosher says: November 26, 2010 at 4:33 pm
You are most likely seeing the effect of the TOBS adjustment. This adjustment is required; the raw temps are wrong without it. That has been demonstrated by committed skeptics of global warming, time after time.
I was under the impression that the original included TOBS.
Steven Mosher wrote:
“One reason you cannot merely take the P2P of various countries and average them is this:
A. country A is huge. It has 1200 stations it covers millions of square miles.
B. country B is small. It has one station, it covers a few square miles.
To handle this challenge people perform area weighting.”
What is the “coverage area” of one station, Steve, from physics standpoint?
For example, the Weather Station Handbook requires distances from obstacles and paved surfaces to be about 100 ft from sensors. I would infer that this is the actual physical radius of ‘coverage’. Outside this area you have no information and no right to extend anything over a wider area. Area weighting has no physical justification. I think this ‘dealing with the challenge’ is pure self-delusion.
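For reference, the area weighting Mosher refers to is typically done as cosine-of-latitude weighting of gridded values, so each cell counts in proportion to its approximate surface area; a minimal sketch (the cell values and latitudes are invented):

```python
import math

def area_weighted_mean(values, latitudes_deg):
    """Cosine-of-latitude weighted mean of gridded values: each cell's
    weight is proportional to its (approximate) surface area."""
    weights = [math.cos(math.radians(lat)) for lat in latitudes_deg]
    total = sum(w * v for w, v in zip(weights, values))
    return total / sum(weights)

# An invented 2.0 deg C anomaly near the pole barely moves the overall
# figure, because a polar cell covers far less area than a tropical one.
vals = [0.2, 0.2, 2.0]
lats = [0.0, 10.0, 80.0]
print(area_weighted_mean(vals, lats))  # ~0.34, vs a naive mean of 0.8
```

Whether that weighting is physically justified, given the tiny radius each sensor actually samples, is exactly the point in dispute above.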
Anthony, it appears that your state of California is experiencing some statistically significant climate disruption of its own:
http://www.pasadenastarnews.com/news/ci_16713295
LOL!! Maybe Cancun will get snow later this week??
2010 could be one of the warmest years ever! Part of one of the warmest decades ever!
Statements like these just kill me. The earth has been warming since the Little Ice Age. Take any year or decade since then, and that statement is likely to be true. If we were in, say, 1800 AD, we could say it was the warmest year, decade, century, AND millennium on record. I picked that out of the air, and someone will probably nit-pick based on actual numbers, but the point is that you could pick almost any year since the LIA and that statement is going to be pretty much correct.
So it is no surprise to me that the various data sets and various methods give very similar results. Note also that we’re using the wrong scale for the discussion. An average temperature of 15 C with a 0.03 warming trend sounds a whole lot different than 288 K with a 0.03 warming trend, doesn’t it? Degrees C is a relative scale and is nearly meaningless. When you consider the absolute scale of degrees K, the wonder is not that the various methods diverge at all; the wonder is that they are as close as they are. The wonder ALSO is the incredible stability of the global temperature over hundreds of years.
Given all the variables that could push global temperatures one way or the other, one can assume that they on average pretty much cancel each other out, or that the physics that governs the planetary system is comprised of feedbacks that work in opposition to stimulus, or a combination of the two. Bottom line is that given several centuries of warming in a row, this would most likely be the warmest year/decade/century ever just like it was in 1910, 1810, 1710…
The question is how much of the trend is “natural” and how much is caused by CO2? If the thought that varius variables are mitigated by feedbacks of the opposite sign is correct, then extracting CO2’s effects from the entire chaotic system with millions of variables isn’t even on par with looking for a needle in a haystack. Its more like searching the world for a haystack with a needle in it.
And yet, I don’t care. Just when accelerated warming from CO2 increases should be most pronounced, ocean heat content is falling and atmospheric temps have levelled off. Explain it by offsetting feedbacks or natural variables cancelling out CO2 if you want, but my point on CO2 is, and this cannot be repeated often enough: the CO2 effect is logarithmic.
Whatever effects CO2 actually has after all feedbacks, the 100 ppm we have added to the atmosphere over the last 100 years would require another 200 ppm to cause the same increase as we saw from the first 100, and 400 ppm more for another, and 800 ppm for the one after that. We’re at 390 or so now, up just over 100 from the ‘natural’ 280, from a century of burning as much fuel as we could. The amount of fossil fuel we would have to burn to overcome the law of diminishing returns that CO2 warming is subject to is staggering. If the planet is in fact entering a long-term cooling period leading to an ice age, there isn’t nearly enough fossil fuel on the planet to even dent its progress.
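The logarithmic point can be checked against the widely used simplified forcing expression ΔF = 5.35 ln(C/C0) (an approximation I am assuming here; the comment names no formula). The exact ppm increments come out a little different from the round numbers above, but the diminishing returns are plain:

```python
import math

def forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing, dF = 5.35 * ln(C/C0), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 380, 480, 580):
    print(f"{c} ppm: {forcing(c):.2f} W/m^2")
# The first 100 ppm (280 -> 380) adds ~1.63 W/m^2; the same 100 ppm step
# from 480 -> 580 adds only ~1.01 W/m^2 -- diminishing returns per ppm.
```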
P Wilson
UK temperatures have been plummeting in recent years (this doesn’t constitute a ‘trend’). This can be seen in the Central England Temperature (CET) record from 1772, which shows anomalies (deviations from a given average) up to this month:
http://hadobs.metoffice.com/hadcet/
We also have the much older (and curiously underused) CET records, which enable us to take a further step back in time, to 1660.
http://homepage.ntlworld.com/jdrake/Questioning_Climate/_sgg/m2_1.htm
From here we can see many peaks and troughs, and that our temperature today is around that of 1730, the middle of the Little Ice Age.
Creating a ‘global average’ temperature is a curious thing to do, as it disguises the hundreds of locations worldwide that have been cooling for at least thirty years (a statistically meaningful period).
http://diggingintheclay.wordpress.com/2010/09/01/in-search-of-cooling-trends/
Listening to the UK Farming Today programmes is instructive. Just in the last few weeks we have had a farmer, encouraged by the govt, who planted apricot trees 10 years ago and is now grubbing them up as they don’t ripen, and just this morning someone saying we don’t get the hot dry summers we used to have, which impacts the quality of vegetables.
Of course they’re ‘anecdotal’ rather than robust information from a computer lab so they don’t count.
tonyb
Tony B
“Of course they’re ‘anecdotal’ rather than robust information from a computer lab so they don’t count.”
When push comes to shove good anecdotal well outranks post normal science IMO
Tony B
And we need a new word – I propose “Empixellated” for those who spend far too much time looking at computer screens and not enough looking out of windows – God forbid actually doing field sampling!