Analysing the complete HadCRUT3 record yields some surprising results

From The Reference Frame, 30 July 2011 via the GWPF

HadCRUT3: 30% Of Stations Recorded A Cooling Trend In Their Whole History

The warming recorded in the HadCRUT3 data is not global. Even though the average station records 77 years of temperature history, 30% of the stations still end up with a cooling trend.

In a previous blog entry, I encouraged you to notice that the (nearly) raw data from the 5,000+ stations behind HadCRUT3 have been released.

[Figure: temperature trends (in °C/century, shown as colors) over the whole recorded history of roughly 5,000 stations included in HadCRUT3. Discussed below.]

The 5,113 files cover the whole world – mostly continents and some islands. I have fully converted the data into a format that is usable and understandable in Mathematica. There are some irregularities: missing longitudes, latitudes, or elevations for a small fraction of the stations. Some extra entries appear for a very small number of stations, and I have classified these anomalies as well.

As Shawn has also noticed, the worst defect is associated with the 863rd (out of 5,113) station, in Jeddah, Saudi Arabia. This one hasn’t submitted any data. For many stations, some months (and sometimes whole years) are missing, so you get -99 instead. This shouldn’t be confused with numbers like -78.9: believe me, stations in Antarctica have recorded average monthly temperatures as low as -78.9 °C. It’s not just a minimum experienced for an hour: it’s the monthly average.
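The masking of that sentinel matters for everything that follows. A minimal sketch in Python (the original analysis was done in Mathematica, and the simple list-of-months layout below is an assumption for illustration, not the exact CRU file format):

```python
import numpy as np

MISSING = -99.0  # sentinel used in the released station files

def monthly_series(rows):
    """rows: 12 monthly mean temperatures for one year.
    Returns a masked array so the sentinel never enters a mean or a fit."""
    data = np.asarray(rows, dtype=float)
    return np.ma.masked_values(data, MISSING)

# A genuine Antarctic value like -78.9 degC survives; only -99 is dropped.
year = monthly_series([-28.3, -40.1, -55.0, -99, -78.9, -99,
                       -60.2, -58.7, -59.9, -51.4, -38.8, -27.6])
print(year.mean())  # mean over the 10 non-missing months
```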

Clearly, 110 °C of warming would be helpful over there.

I wanted to know the actual temperature trends recorded at all stations – i.e. the statistical distribution of these slopes. Shawn had the good idea of avoiding the computation of temperature anomalies (i.e. the subtraction of a seasonally varying “normal temperature”): one may calculate the trends for each of the 12 months separately.

To a very satisfactory accuracy, the temperature trend for the anomalies including all months is just the average of those 12 monthly trends. In all these calculations, you must carefully omit the missing data – indicated by the value -99. But first, let me assure you that the stations are mostly “old enough”.

A large majority of the 5,000 weather stations are 40–110 years old (measuring endYear minus startYear). The average age is 77 years – partly because a nonzero number of stations have more than 250 years of data. So it’s not true that the “bizarre” trends arise merely from a small number of short-lived, young stations.
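Back to the trend computation itself: a minimal sketch in Python of the month-by-month fitting described above (the array layout is an assumption for illustration):

```python
import numpy as np

def station_trend(years, temps):
    """temps: array of shape (n_years, 12), monthly means with -99 for
    missing months. Fit a line to each calendar month separately, then
    average the 12 slopes -- the trick that avoids computing anomalies."""
    slopes = []
    for m in range(12):
        col = temps[:, m]
        ok = col != -99                    # carefully omit missing data
        if ok.sum() >= 2:                  # need two points to fit a line
            slope, _ = np.polyfit(years[ok], col[ok], 1)  # degC per year
            slopes.append(slope)
    return 100 * np.mean(slopes) if slopes else np.nan    # degC per century
```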

Following Shawn’s idea, I computed the 12 histograms for the overall historical warming trends corresponding to the 12 months. They look like this:

[Figure: the 12 monthly histograms of overall station warming trends.]

You may be irritated that the first histogram looks much broader than e.g. the fourth one, and you may start to wonder why that is. In the end, you will realize that it’s just an illusion – the visual difference arises because the scale on the y-axis is different, and it’s different because if there’s just “one central bin” in the middle, it may reach a much higher maximum than if you have two central bins. 😉

This insight is easily verified if you actually sketch a basic table for these 12 histograms:

[Table: for each of the 12 months, the number of stations with valid trends, the mean trend, and its standard deviation.]

The columns indicate the month, starting from January; the number of stations that yielded legitimate trends for that month; the average trend across those stations for the given month, in °C/century; and the standard deviation – the width of the histogram.

You may see that September (closely followed by October) saw the slowest warming trend at these 5,000 stations – about 0.5 °C per century – while February (closely followed by March) had the fastest trend, 1.1 °C per century or so. The monthly trends are somewhat random numbers in the ballpark of 0.7 °C per century, but trend as a function of month looks more like a smooth, sine-like curve than white noise.

At any rate, it’s untrue that the 0.7 °C of warming in the last century is a “universal” number. In fact, for each month you get a different figure, and the maximum is more than twice the minimum. The warming trends depend strongly on both the place and the month.

The standard deviations of the temperature trend (evaluated for a fixed month of the year but over the statistical ensemble of all the legitimate weather stations) go from 2.14 °C per century in September to 2.64 °C in February – the same winners and losers! The difference is much smaller than the huge “apparent” difference in the widths of the histograms that I have explained away. You may say that the temperatures in February tend to oscillate much more than those in September because there’s a lot of potential ice – or missing ice – in the dominant Northern Hemisphere. The ice-albedo feedback and other ice-related effects amplify the noise – as well as the (largely spurious) “trends”.

Finally, you may combine all the monthly trends in a huge melting pot. You will obtain this beautiful Gauss-Lorentz hybrid bell curve:

[Figure: histogram of all 58,579 monthly/local warming trends combined.]

It’s a histogram containing 58,579 monthly/local trends – some trends that were faster than a certain large bound were omitted but you see that it was a small fraction, anyway. The curve may be imagined to be a normal distribution with the average trend of 0.76 °C per century – note that many stations are just 40 years old or so which is why they may see a slightly faster warming. However, this number is far from being universal over the globe. In fact, the Gaussian has a standard deviation of 2.36 °C per century.

The “error of the measurement” of the warming trend is 3 times larger than the result!

If you ask a simple question – how many of the 58,579 trends determined by a month and a place (a weather station) are negative, i.e. cooling trends – you will see that it is 17,774, i.e. 30.3 percent of them. Even if you compute the average trend over all months for each station, you will get very similar results. After all, the trends for a given station don’t depend on the month too much. It will still be true that roughly 30% of the weather stations recorded a cooling trend in the monthly anomalies on their record.
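For what it’s worth, the 30% figure is roughly what the fitted Gaussian itself predicts. A sketch (the `trends` array is hypothetical; this is not the original code):

```python
import numpy as np
from scipy.stats import norm

def cooling_share(trends):
    """Fraction of negative (cooling) slopes in an array of trends."""
    return (np.asarray(trends) < 0).mean()

# A normal distribution with the quoted mean and width puts a similar
# probability mass below zero:
print(norm.cdf(0, loc=0.76, scale=2.36))  # ~0.37 -- same ballpark as 30.3%
```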

Finally, I will repeat the same Voronoi graph we saw at the beginning (where I have used sharper colors because I redefined the color function from “x” to “tanh(x/2)”):

[Figure: the Voronoi map again, with the sharper tanh(x/2) color scale.]

Each area consists of the points closest to a given weather station – that’s what the term “Voronoi graph” means. And the color is chosen according to a temperature color scheme where the quantity determining the color is the overall warming (+, red) or cooling (-, blue) trend ever recorded at the given weather station.
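The color rescaling mentioned above is a one-liner; a sketch, assuming a diverging palette that maps −1 to blue and +1 to red:

```python
import numpy as np

def color_value(trend_c_per_century, scale=2.0):
    """Compress trends into (-1, 1) before feeding a diverging palette.
    tanh(x/2) preserves the sign (blue = cooling, red = warming) while
    keeping a few extreme stations from washing out everyone else."""
    return np.tanh(trend_c_per_century / scale)
```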

It’s not hard to see that the number of places with a mostly blue color is substantial. The cooling stations are partly clustered although there’s still a lot of noise – especially at weather stations that are very young or short-lived and closed.

As far as I remember, this is the first time I have been able to quantitatively calculate the actual local variability of the global warming rate. Just as I expected, it is huge – and comparable to some of my rougher estimates. Even though the global average yields an overall positive temperature trend – a warming – it is far from true that this warming trend appears everywhere.

In this sense, the warming recorded in the HadCRUT3 data is not global. Even though the average station records 77 years of temperature history, 30% of the stations still end up with a cooling trend. The warming at a given place is 0.75 ± 2.35 °C per century.

If the rate of warming in the coming 77 years or so were analogous to that of the previous 77 years, a given place would still have a 30% probability of cooling – judging by the linear regression – over those future 77 years! However, it’s also conceivable that the noise is so substantial and the sensitivity so low that once the weather stations add 100 years to their records, 70% of them will actually show a cooling trend.

Even if you imagine that the warming rate in the future will be 2 times faster than it was in the last 77 years (on average), it would still be true that in the next 40 years or so, i.e. by 2050, almost one third of the places on the globe will experience a cooling relative to 2010 or 2011! So forget about the Age of Stupid doomsday scenario around 2055: it’s more likely than not that more than 25% of places will actually be cooler in 2055 than in 2010.

Isn’t it remarkable? There is nothing “global” about the warming we have seen in the recent century or so.

The warming vs. cooling depends on the place (as well as the month, as I mentioned), and the warming places have only a 2-to-1 majority while the cooling places are a sizable minority. Of course, if you calculate the change of the global mean temperature, you get a positive sign – you had to get one of the signs because an exactly zero result is infinitely unlikely. But the actual change of the global mean temperature in the last 77 years (on average) is so tiny that the place-dependent noise still safely beats the “global warming trend”, yielding an ambiguous sign of the temperature trend that depends on the place.

Imagine, just for the sake of the argument, that any change of the temperature (calculated as a trend from linear regression) is bad for every place on the globe. It’s not true but just imagine it. So it’s a good idea to reduce the temperature change between now and e.g. the year 2087.

Now, all places on the planet pay billions for special projects to help cool the globe. However, 30% of the places will find out in 2087 that they have actually made their problem worse: they got a cooling, and they helped to make that cooling even worse! 😉

Because of this subtlety, it would be obvious nonsense to try to cool the globe down even if the global warming mattered, because it’s extremely far from certain that cooling is what you would need to regulate the temperature at a given place. The regional “noise” is far larger than the trend of the global average, so every single place on Earth can neglect the changes of the global mean temperature if it wants to know the future change of its local temperature.

The temperature changes either fail to be global or they fail to be warming. There is no global warming – this term is just another name for a pile of feces.

And that’s the memo.

UPDATE:

EarlW writes in comments:

Luboš Motl has posted an update with new analysis over shorter timescales that is interesting. Also, he posts a correction showing that he calculated the RMS instead of Stand Dev for the error.

Wrong terminology in all figures for the standard deviation

Bill Zajc has discovered an error that affects all values of the standard deviation indicated in both articles. What I called “standard deviation” was actually the “root mean square”, RMS. If you want to calculate the actual value of SD, it is given by

SD² = RMS² − ⟨TREND⟩²

In the worst cases, those with the highest ⟨TREND⟩/RMS, this corresponds to a nearly 10% error: for example, 2.35 drops to 2.2 °C / century or so. My sloppy calculation of the “standard deviation” was of course assuming that the distributions had a vanishing mean value, so it was a calculation of RMS.

The error of my “standard deviation” for the “very speedy warming” months is sometimes even somewhat larger than 10%. I don’t have the energy to redo all these calculations – it’s very time-consuming and CPU-time-consuming. Thanks to Bill.
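The correction is easy to verify numerically; a minimal sketch on synthetic numbers (not the original station trends):

```python
import numpy as np

trends = np.random.default_rng(0).normal(0.76, 2.2, 50_000)  # toy ensemble

rms = np.sqrt(np.mean(trends ** 2))   # what the post originally reported
sd = trends.std()                     # the quantity actually intended
print(sd, np.sqrt(rms ** 2 - trends.mean() ** 2))  # the two agree
```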

http://motls.blogspot.com/2011/08/hadcrut3-31-of-stations-saw-cooling.html

steven mosher
August 4, 2011 8:44 am

errr.. we’ve known this for quite sometime.
REPLY: Mosh, are you locked into being a permanent downer these days? A lot of people DON’T know about it, which is why I repeated the post here. – Anthony

wermet
August 4, 2011 8:48 am

It would be useful to see the “Voronoi graph” without showing any grid lines. Most of the US and European data is obscured by displaying those lines. As such, the graph is not very helpful except in sparsely covered regions.
Otherwise, it is a very interesting article.

August 4, 2011 8:51 am

Very interesting and informative post, in addition the illustrations of global temp distribution are stunning works of art in themselves. Has the Tate gallery ever seen them I wonder? I see a leading contender for the Turner prize.

HenryP
August 4, 2011 8:56 am

Well what do you know. I am purely speculating here
but based on my own small sample
http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming
I am predicting that most of that cooling part took place in the SH.
I am right, aren’t I?

Anteros
August 4, 2011 8:56 am

I think that even more pertinent than the variability as regards warmer or cooler, is the fact that even at +/- 2.75 degrees……. nobody noticed….

Anteros
August 4, 2011 8:59 am

{Perhaps I should have said ‘at +3.05/-1.60 degrees…}

Tenuc
August 4, 2011 8:59 am

Great post which illustrates the uselessness and irrelevance of using global mean temperature as a climate metric. It is interesting that most of the warming took place in Feb/Mar, which means an earlier start to the growing season for the NH, which has to be beneficial!

Scott Covert
August 4, 2011 9:02 am

You call this science?
Where’s your model? /sarc

Mycroft
August 4, 2011 9:04 am

Anthony
If 30% show a cooling trend, what % show a warming trend, and what % show no trend at all?

Greg, Spokane WA
August 4, 2011 9:08 am

That’s a really interesting analysis, Luboš. A lot of people have pointed this out, in bits and pieces, but you wrapped it up nicely.
Thanks.
PS: Just for future reference… guest posters should put their names at the top of the articles. Otherwise there’s a tendency to say, “Great work, Anthony!” 🙂

PM
August 4, 2011 9:09 am

Do you know the proportion of the warming stations that are in cities? Or the proportion of cooling stations that are in rural locations?

stephen richards
August 4, 2011 9:15 am

Now that’s how you treat data, properly. Very well done Anthony; No drama, no hyperbole, no deceit and no lies, just good honest analysis.
Sooooo, you trolls. Compare this work to your high priests. This is how science works, not like the rubbish of RC et al.

stephen richards
August 4, 2011 9:16 am

Mosh doesn’t improve does he? I remember some years ago when he used to produce some really good work.

dmmcmah
August 4, 2011 9:17 am

You should post his most recent article, which shows the fraction of stations reporting cooling increasing when you take 1979-2011, 1995-2011, and then 2001-2011. Interesting.

TomRude
August 4, 2011 9:21 am

Thus local or regional warming and/or cooling is a result of dynamical processes – hence back to meteorological processes, making Leroux’s analysis even more relevant.

Bad Andrew
August 4, 2011 9:23 am

Steven Mosher is apparently cool with using “Global Warming” to describe warming that isn’t “global”.
Snake Oil Salesman
Andrew

Dave Springer
August 4, 2011 9:27 am

I believe the talking points of the climate boffins have been modified to say that nothing really unusual happened before 1950 or so. What happens to those graphs if you only crunch the data from, oh say, the last 50 years? I believe you’ll get an X.X degrees C per century figure that is much higher than 0.76 and far fewer stations reporting a cooling trend.

August 4, 2011 9:29 am

Thanks Anthony for another interesting post. Your website continues to consume much time from my day, yet I don’t regret it. Here in western Oregon we are experiencing one of the coolest summers I can recall in many years, as the peach crop is approximately 16 days late. Normally we are well into the season by now, but I have yet to pick a peach or nectarine this year and probably will not for several days. Where is that global warming happening? I doubt the late apples will have good quality, if they ripen at all. The plum set was poor due to the cold spring. I will finally pick a tomato today. I am having a hard time believing it is August.

Robert M
August 4, 2011 9:32 am

Very nice post Mr. Watts, keep up the good work.
It is sad that my first thought upon reading your post is that it is just criminal that it took years and many legal battles to get this information out of the clutches of the people who were paid with taxpayer money to compile this data for public consumption. At first blush, there is nothing alarming in the data that jumps out at you. Imagine how different the conversation would have been if the “Climate Scientists” had started with, well, we’ve got the data and some of it looks alarming, here it is, let’s talk.

August 4, 2011 9:33 am

Again illustrating the obvious question: What in hell is all the fuss about? The error bars are bigger than the perceived change! It could be Much Much worse than we thought, but then again it could be much much better! We spent billions on these smarmy COP meetings and bluster, enduring all the claptrap about more intense weather and the like, based on….? Really! What is all of it based on? 0.7 Kelvin? That’s nuts. Picking fly sh*t out of black pepper.

richard verney
August 4, 2011 9:34 am

Interesting article. It confirms a number of points that I have repeatedly been arguing, namely:
1. There is no such thing as global warming. Warming is a local event. It is necessary to examine matters on a local basis, since some countries will experience no warming, some modest warming and some more than modest warming. Further, the effects of warming will vary from country to country. For some countries, if there is warming, this will be beneficial. For others, any warming will be neutral. And for some, any warming will be detrimental. Even sea level rise is not a uniform problem. Quite obviously, those countries that do not have a coast line will be unaffected. For some, sea level rise may be neutral and for some it may be a problem. The upshot is that every country should evaluate their own data and determine whether they are warming, and if so, whether this will or will not be a problem etc.
2. Dealing with climate science by looking at averages only hides and conceals precisely what is going on, and hence the reason why certain events are happening. There is no such thing as a global average temperature (we could not possibly get a good and reliable assessment of this even if we were to increase station coverage a billion fold). Further, global average temperature is meaningless. The only reason for suggesting that there is a global average temperature is for political purposes: to try and persuade everyone that we are all in this together and we need global solutions to a global problem. That is a fallacy and the argument is facetious and disingenuous.
3. Looking at matters on a country by country basis, helps identify what may be causing the warming and whether this is due to natural causes. For example, if increased CO2 leads to increased backradiation (DWLWIR) and if this warms the planet or inhibits cooling that would otherwise take place, surely the effects of this would be seen the world over to the extent that CO2 is a well mixed gas and to the extent that humidity/water vapour levels are uniform. Of course, differences and changes in albedo and differences and changes in carbon sinks may also come into play and thus some adjustments to take those matters into account would be needed. However, that said, how does the ‘settled physics’ of increased GHGs leading to increased backradiation explain why certain stations show a cooling trend and others a neutral trend? The theory (if it is a valid theory) needs to be able to deal with this and explain these observations.
4. The ARGO data suggests that the oceans are not presently warming. Many stations will be heavily influenced by weather patterns coming off adjacent oceans and accordingly one would expect many stations to in effect be tracking ocean temperatures. Again, the AGW theory needs to be able to explain why the oceans do not appear to be warming (and of course, Trenberth’s missing heat).
The upshot is that the global temperature data requires complete re-evaluation from the ground up. It should be divided onto a country or at any rate regional basis and looked at on that basis. This would reveal a lot more as to precisely what is going on, where it is going on, and whether it is a cause for concern.

August 4, 2011 9:35 am

what’s the avg warming without the Antarctic Peninsula?

August 4, 2011 9:35 am

Mosh,
I gotta agree with Anthony. YOU have known about this because you’re highly intelligent and highly familiar with the data. People like me, moderately intelligent who get most of our 411 on this stuff from Anthony here, may have suspected as much (I did), but we didn’t really have our brains around it. If a reader only occasionally drifted in and let his or her eyes glaze over when numbers were discussed, he or she’d have no idea about this.
It’s really quite interesting to us. I’m glad Anthony republished this. Thanks to Lubos, too.

Dave Day
August 4, 2011 9:40 am

Anthony,
I second Mycroft’s question. What percentage of stations are within the error bars around the central, no trend value, meaning they don’t have any confirmed trend, and what percentage have a definite upward trend.
Thanks,
Dave

Scott Scarborough
August 4, 2011 9:40 am

Near the beginning of this post you say that Antarctica had a -78.9 deg. C reading for a monthly average. I don’t believe that. That is equivalent to -110 deg. F. Unless it’s wind chill, it must be -78.9 deg. F, which would be equivalent to -61.6 deg. C. Wouldn’t some components of the atmosphere condense out at -110 F?

pat
August 4, 2011 9:42 am

And now to segregate the UHE thermometers.

dp
August 4, 2011 9:42 am

Validates the claim of Pielke Sr. that there’s no global climate, just regional climates, and they change as needed. A point I frequently make. But if the alarmists don’t toss out such a wide net, they can’t say Antarctica is warming. It dilutes the message to say one small, exposed, northern-most peninsula is responsible for the bulk of the warming credited to the entire continent.

Richard S Courtney
August 4, 2011 9:44 am

Lubos:
Thankyou for your clear and elegant summary. Excellent!
For me the take-home messages are:
The average warming of the globe was 0.7 °C per century
but
warming at any given place was 0.75 °C plus/minus 2.35 °C per century.
and
about a third of the globe cooled.
So, about this ‘global warming’, can anybody say if and when it is going to start?
Richard

Earle Williams
August 4, 2011 9:47 am

Clearly this is a repost from Lubos Motl’s blog, but I didn’t see any reference at the beginning of the article. May I suggest a notation at the head of the post?
The Mathematica reference was the dead giveaway! 😉
[Fixed, thanks. ~dbs, mod.]

BioBob
August 4, 2011 9:51 am

I assume by the “error of the measurement” quote you mean the standard error of the mean or the 95% confidence interval. You ALSO have an unmentioned measurement error for each type of device used to record temperature data (also called the limit of observability) which describes the instrument error. This is thought to be approximately plus or minus 0.5 degrees C for mercury-in-glass thermometers, but is certainly non-zero. You also do not consider sampling error, nor the probability that stations have a systematic bias (which seems quite likely since stations were not randomly selected), etc. [surfacestation.org]
The fact is that a plus or minus 2.75 degree SD is being exceedingly charitable; the data, in reality, are nearly total trash.

oldgamer56
August 4, 2011 9:52 am

Anyone want to bet that if we look at the actual physical locations of the stations outside the US that are reporting warming trends, we will find a high correlation with unacceptable siting, similar to what was found in the US?
Most probably at airports, or surrounded by UHI.
Sounds like a great way to see the world, if I could figure out how to get a grant for it.

Jimmy
August 4, 2011 9:55 am

Would it be possible to take the calculations one step further and determine an estimate for what the area is of the cooling trend? In other words, what proportion of your Voronoi graph is blue?

BioBob
August 4, 2011 9:59 am

I really should say thank you to Anthony Watts for posting this analysis, which cannot be restated enough. The “you” in my previous comment is meant for the HadCRUT data, not the author of the article.
Thanks again for a useful analysis !

August 4, 2011 10:10 am

Lubos’ result showing a mean trend of 0.76(+/-)2.36 C shows that climate is very heterogeneous and consists of very many widely scattered sub-states. The mean state does not represent the system at all.
Chris Essex had a paper a few years back pointing out that global mean temperature is a statistic rather than a physical parameter, and that it doesn’t have any real physical meaning at all. Lubos’ result demonstrates that point. It makes no scientific sense to talk about a global mean temperature because nowhere on Earth experiences that state. The 95% confidence interval is 4.6 C on either side of the mean.
Talking about the mean state of a system of widely spread sub-states is approximately like asserting the physical comfort of a guy standing with one foot in a bucket of boiling water and the other foot frozen in ice at -80 C. His mean foot is at a comfortable +20 C, and what’s the problem? 🙂

old engineer
August 4, 2011 10:13 am

Anthony-
What an interesting post. I am reminded of the Dead Sea Scrolls, which were kept under lock-and-key for years, with only a select few allowed to study them. When they were made available to all, a lot of new insights emerged. Now that the HadCRUT3 data is available, there should be all kinds of slicing and dicing of the data.
In your analysis, you apparently ignored any seasonal effect by including both Northern and Southern hemisphere data for a given month. I wonder what the histograms would look like if you start in the spring of the year in each hemisphere as month 1 (say April and October respectively for the Northern and Southern hemispheres). Of course it wouldn’t affect the individual station trends, but it would be interesting to see if it changed the monthly degrees per century.
Looking at the map, I assume that white is no change. It looks like there is a lot of “no change”. Have you calculated the percentage of stations that show no change, and the percent that show a plus change?

jorgekafkazar
August 4, 2011 10:15 am

@Lubos: Nice! Let’s see what happens if you isolate airport sensor stations.
@Mosher: Hey, Mosh-man, you got a mouse in your pocket?
Young: Right! Mosh may know, but I’ve never seen this map before.
@Scott: The partial pressure of CO2 is too small to drop out solid CO2 at -110F.

AdderW
August 4, 2011 10:24 am

863th ??

Richard
August 4, 2011 10:26 am

Amazing to see how CT holds back on changing NH ice data, especially when it is low. I think it’s done on purpose for max effect. It happens repeatedly. Of course it’s nowhere near that now, so it’s now 6 days old.
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/seaice.recent.arctic.png

DirkH
August 4, 2011 10:32 am

steven mosher says:
August 4, 2011 at 8:44 am
“errr.. we’ve known this for quite some time.”
This is the first time I have seen a histogram of all the stations’ trends, and it’s most interesting.

gnomish
August 4, 2011 10:34 am

wow! properly done statistics!!!
a valid measure of a real trend of any kind would begin with data taken at a particular hour for a particular station, plotted by itself. no min/max garbage.
THEN you can sort the plots precisely as was done for this article and see whatever there is to be seen.
averaging of temperatures is fallacious – will it blend? no!

moptop
August 4, 2011 10:36 am

I thought one of his most interesting posts was the one that showed that the starting point, 1850, for the temp history grafted to the hockey stick was clearly cherry picked.
http://motls.blogspot.com/2011/08/global-mean-temperature-since-1706.html

Smokey
August 4, 2011 10:42 am

There is ample evidence that the Earth’s temperature as measured at the equator has remained within +/- 1°C for more than the past billion years. Those temperatures have not changed over the past century.
~ Prof Richard Lindzen

Thus, the temperature fluctuations occur as heat is transferred from the equator to the poles. Human activity appears to have little effect, possibly none. Lindzen also writes:

For small changes in climate associated with tenths of a degree, there is no need for any external cause. The earth is never exactly in equilibrium. The motions of the massive oceans where heat is moved between deep layers and the surface provides variability on time scales from years to centuries. Recent work suggests that this variability is enough to account for all climate change since the 19th Century. [my bold]

August 4, 2011 10:48 am

moptop,
You mean the one where he says: “You shouldn’t take this graph seriously.”?

August 4, 2011 10:49 am

A good start with the data! Lots of interesting conclusions can come out of having the actual data to play with.

Gary Pearse
August 4, 2011 10:50 am

Scott Scarborough says:
August 4, 2011 at 9:40 am
“Near the beginning of this post you say that Antarctica had a -78.9 deg. C reading for a monthly average. I don’t believe that. That is equivalent to -110 deg. F.”
Then you won’t believe that the lowest temps recorded over the past 32 years are:
      Year   Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Dec
°F      .1   -42   -71   -96  -102  -108  -115  -126  -118  -107   -96   -67   -38
°C     -17   -41   -57   -71   -75   -80   -85   -92   -86   -79   -71   -55   -38
Years charted: 32. Source: International Station Meteorological Climate Summary, Version 4.0
http://bing.search.sympatico.ca/?q=Temp%20at%20south%20pole&mkt=en-ca&setLang=en-CA
Also, remember the strategy of (I believe) Al Gore to use Fahrenheit because the numbers were bigger for warming. I note that everyone, even here at WUWT, uses Celsius for Antarctica. The record of -92 C (in the past 32 yrs) is a princely -126 F!
I, too, think the article could be enhanced by adding the category of zero trend and even the category of modest trend (say 0.2 C/century). Then apply the policy of cooling the earth down by half the increase in a century’s global warming: Latin America (except for the Amazon) and Africa (except for the Sahara) would be faced with an overall cooling trend.
Finally, it should be noted by Mosher and others of like convictions that once you release the data to the world, you get a goodly amount of thoughtful and varied contribution from a lot of “fresh”, un-prepped people. No wonder “insiders” are becoming irritable in their consensus love-ins and are adopting cornered-rat reactions to new developments in climate science that show it to be considerably more complex and at the same time less alarming.

Jim Wqaters
August 4, 2011 11:01 am

What is the distribution if the sites were separated by population density? Will high-density sites have a more positive trend than low-density sites?
Also will warm sites have a different distribution than sites that are on average colder?

John B
August 4, 2011 11:25 am

Smokey says:
August 4, 2011 at 10:42 am
There is ample evidence that the Earth’s temperature as measured at the equator has remained within +/- 1°C for more than the past billion years. Those temperatures have not changed over the past century.
~ Prof Richard Lindzen
———————————
That’s an extraordinary claim. Do you know if the good professor has extraordinary evidence to back it up?

Smokey
August 4, 2011 11:33 am

John B,
Why don’t you ask the good professor yourself?

Stephen
August 4, 2011 11:40 am

On the map, what range of trends does each color correspond to? also, I noticed a lot of warming or cooling regions with no weather-stations within them, and not even any stations neighboring them.
How do the results come out if we weight each station by how large an area it represents? For example, what happens if we reduce the weight of all those stations clustered in the U.S. as they all collectively only represent a relatively small region, how do the numbers come out? (I think it would be appropriate to scale weights of stations with only a square-root dependence on area so as to take into account statistical error in large regions with few stations.)

August 4, 2011 11:55 am

I guess the oats aren’t working Mosh. Try eating 2 pounds of Wheat germ and whole Bran, followed by drinking 5 litres of water. Shake your belly vigorously for 20 minutes, then “pour”. Warning! Do not attempt without supervision! May cause hysterical blindness.

huishi
August 4, 2011 11:56 am

“Prof Richard Lindzen”
Ah, he is one of those deniers from MIT! I bet Exxon funds all his girls, … er, parties, no, … I mean research!
(just kidding, please don’t have MIT sue me)

Louis Hooffstetter
August 4, 2011 11:59 am

Back in 2009, in a statement on its website, the CRU said: “We do not hold the original raw data but only the value-added (quality controlled and homogenised) data.”
Is this the value-added data we’re talking about or something else?

Kelvin Vaughan
August 4, 2011 12:00 pm

70% of the worlds population live in urban areas and 30% in rural areas.
In 1950 it was the reverse. http://en.wikipedia.org/wiki/Urbanization
Towns are getting hotter, the hot air rises higher into the atmosphere than it did, cools down more due to the extra height it rises, falls to earth in the rural areas which are now colder than they were.
(Convection)
That’s also why part of the USA is having a heat wave while the other part is having a cold wave.

HenryP
August 4, 2011 12:04 pm

Henry@Stephen
I think to try and get a true global average (or at least a good estimate of that average) one should try and balance all stations
latitude wise
as I have done here
http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming

John B
August 4, 2011 12:12 pm

Smokey says:
August 4, 2011 at 11:33 am
John B,
Why don’t you ask the good professor yourself?
——————–
I asked about evidence, not a c.v. Maybe there is no evidence, just an unsubstantiated claim. Oh, but I forgot, in your world “skeptics” are allowed to do that.

EarlW
August 4, 2011 12:14 pm

Luboš Motl has posted an update with new analysis over shorter timescales that is interesting. Also, he posts a correction showing that he calculated the RMS instead of Stand Dev for the error.

Wrong terminology in all figures for the standard deviation
Bill Zajc has discovered an error that affects all values of the standard deviation indicated in both articles. What I called “standard deviation” was actually the “root mean square”, RMS. If you want to calculate the actual value of SD, it is given by
SD² = RMS² − ⟨TREND⟩²
In the worst cases, those with the highest ⟨TREND⟩/RMS, this corresponds to a nearly 10% error: for example, 2.35 drops to 2.2 °C / century or so. My sloppy calculation of the “standard deviation” was of course assuming that the distributions had a vanishing mean value, so it was a calculation of RMS.
The error of my “standard deviation” for the “very speedy warming” months is sometimes even somewhat larger than 10%. I don’t have the energy to redo all these calculations – it’s very time-consuming and CPU-time-consuming. Thanks to Bill.

http://motls.blogspot.com/2011/08/hadcrut3-31-of-stations-saw-cooling.html

Oscar Bajner
August 4, 2011 12:20 pm

Thank you Anthony, for reposting this from The Reference Frame.
I enjoy the wicked wit in Luboš’s posts, but the colour scheme / background thing makes me psychotic.
This was way easier to read on WUWT.
REPLY: Yeah, same here. I only found out about it via the GWPF repost. TRF is very very hard on the eyes and sometimes locks up browsers with all the embedded scripting. Lubos probably turns off more people than he gains by the color scheme, more so than by content. – Anthony

Richard S Courtney
August 4, 2011 12:36 pm

John B:
Contrary to your suggestion (at August 4, 2011 at 11:25 am ), Richard Lindzen does NOT make an “extraordinary claim” when he says;
“There is ample evidence that the Earth’s temperature as measured at the equator has remained within +/- 1°C for more than the past billion years. Those temperatures have not changed over the past century.”
A negative feedback prevents tropical ocean surface temperatures rising above 305K (i.e. present maximum ocean surface temperature). This has been confirmed by several studies and was first discovered in 1991:
Ref. Ramanathan & Collins, Nature, v351, 27-32 (1991)
Simply, sea surface temperature at the equator is bumping against an upper limit.
Glaciation does not reach the equator during ice ages, so it would be surprising if mean equatorial temperature were ever to vary by much.
Richard

Richard S Courtney
August 4, 2011 12:39 pm

John B:
My post crossed your later one.
In the light of my post, and the offensive nature of your post at August 4, 2011 at 12:12 pm, please justify your silly assertion that Lindzen made an “extraordinary claim” and apologise for your uncouth behaviour.
Richard

Michael J. Dunn
August 4, 2011 12:50 pm

There are established techniques for determining whether one probability distribution is statistically different from another (to some criterion). It would be interesting to posit a second distribution with the same standard deviation, but zero mean (null hypothesis for warming), and see if the two are statistically distinguishable. It is sometimes possible in a measurement series that a non-zero mean will result from finite sampling of a zero-mean phenomenon.
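One way to make that suggestion concrete – a sketch on synthetic numbers, not anything from the post’s actual data – is a one-sample test of the trend ensemble against a zero mean:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
trends = rng.normal(0.76, 2.36, 5_000)   # hypothetical station trends

# Null hypothesis: the true mean trend is zero and the scatter is noise.
t, p = stats.ttest_1samp(trends, popmean=0.0)
print(t, p)  # with n ~ 5000 even a small mean is highly significant --
# unless nearby stations are correlated, which shrinks the effective n.
```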

Steven Mosher
August 4, 2011 12:56 pm

“Analysing the complete hadCRUT yields some surprising results”
I suppose you could argue that it’s surprising for people who haven’t followed the work that SteveMc did, that Zeke did, that tonyb did. We have known for some time that the distribution of trends is roughly normal. That would be what you would guess before you even saw the data. Non-normality would be surprising. Here’s a brain buster: if I tell you the average trend of 5000 stations is 0.5C per century, how many of you know without even looking that the distribution will have trends that are less than 0.5C? (Everybody raise your hand.) And since you know (before looking) that polar amplification can be 4X what you see at the equator, that should tell you that the high tail should be at least 2C (again, without even looking at the data), and if the high tail is at 2C then the low tail is bound to be negative (again, without even looking at the data one just expects this).
And if you had randomly selected subsamples of 500 stations (like Zeke recently did, and like I did a while back) and found that the answer doesn’t change, that would also give you a clue that the trend data was roughly normally distributed… and then of course if it’s roughly normalish you kinda know there will be tails that are really positive and tails that are negative. So not surprising. If you think about it, not surprising at all. Predictable, in fact.
The real work is figuring out if there is anything that explains why some places cool while other places warm. Been looking for that since 2007, when I first did a version of the analysis presented here. Or, conversely, why some warm more than others (aside from polar amplification).
REPLY: Sheesh. OK then we’ll just shut the blog off then Mosh, we don’t want to have new people learn about anything new, old, normal then, or discuss it. You are starting to sound like Gavin ;-). /sarc On the plus side, yes figuring out why some have warmed and others have not is the nugget. – Anthony

Editor
August 4, 2011 12:56 pm

This is exactly what Verity Jones and I found last September in an article carried here:
http://wattsupwiththat.com/2010/09/04/in-search-of-cooling-trends/
I would say around 25% of the station database is static over the extended period, with the remainder showing a small to fairly large warming trend; many of these latter are in urbanised areas, and many more are not recording their original micro climate, i.e. they have moved at least once.
Personally, I think the term global temperature is completely meaningless, as is the notion of global warming.
tonyb

August 4, 2011 1:20 pm

Steven Mosher says:
“The real work is figuring out if there is anything that explains why some places cool while other places warm. Been looking for that since 2007 when I first did a version of the analysis presented here.”
Prof Lindzen gives a very good explanation. The climate is never static:
The earth is never exactly in equilibrium. The motions of the massive oceans where heat is moved between deep layers and the surface provides variability on time scales from years to centuries.

August 4, 2011 1:54 pm

Steven Mosher-I get that you see a normal distribution of trends as a perfectly reasonable thing to expect statistically. But does it make sense physically, if current understanding of climate by “mainstream science” is to be believed? What kind of distribution of trends is found in climate models? Is it also normal? This may not be a “surprising” finding but it may be an important point. “Move along, nothing to see here” is not a reasonable reaction. A reasonable reaction would be to ask whether this is compatible with the “warmer” point of view or not. I take it you think it is. I want to see proof that it is or isn’t before jumping to such a conclusion.

John Whitman
August 4, 2011 2:14 pm

Luboš,
Thanks. It was highly educational for me.
Anthony and all, you are great.
Mosh – don’t be grumpy. Shall I sing “Its a Wonderful World” for you? Can you whistle in accompaniment? : )
John

Paul Irwin
August 4, 2011 2:18 pm

and the rest of the 70% not cooling are at airports, urban areas, or adjacent to and downwind of urban areas, eh? or in al gore’s 3000 sf living room?

Kev-in-UK
August 4, 2011 2:26 pm

of course a global mean temp is meaningless – because spatial coverage is unlikely to be anything like enough to be representative…..this is of course, a little fact the team like to dismiss when using such datasets for their work!
Unfortunately, however, we must accept that that is the only real data we have (for a significant time period) and we just have to make do with it………..but that doesn’t detract from the basic argument of whether any observed temp changes are natural or anthropogenic. Clearly, urban temp increases must be anthropogenic for obvious reasons and I believe all urban stations should essentially be ignored!
in respect of the released data, I would be interested in only the genuine rural stations being analysed on their own. Then, perhaps those stations could be individually appraised for quality – and then we may actually have some valid base data. I don’t subscribe to the view that the UHI adjustment is acceptable data manipulation, as it is essentially ‘guesswork’, and I don’t subscribe to the ‘gridding’ methodology in obtaining a ‘global’ temp. As I see it, and I accept I may be alone here, the only decent data is rural, fully quality checked and assured, to be used as a standalone ‘point’ illustration of a small piece of the ‘climate’. Mixing them all together with all the fancy statistics does not make sense to me…….IMHO, the summary result is no better than having, say, an average human height for the world – when we all know that certain Asian countries have shorter folk, etc., which in their own ‘local’ area is perfectly normal!
Obviously, the manufacturing of a global temp anomaly was required to ‘prove’ the problem to the world – but how can it ever be trusted given the inadequacies of the base data and statistical analysis?

August 4, 2011 2:42 pm

“It is a travesty that we cannot point to any warming lately”

gnomish
August 4, 2011 2:50 pm

the average human has one ovary and one testicle.
does it blend? no!

Green Sand
August 4, 2011 2:55 pm

Steven Mosher says:
August 4, 2011 at 12:56 pm
“Analysing the complete hadCRUT yields some surprising results”

————————————————————————————————
Steve, have you therefore had access to the HadCRUT3 data prior to this release?

James Allison
August 4, 2011 3:08 pm

Steven Mosher says:
August 4, 2011 at 12:56 pm
Your sarcastic tone puts me off reading your comments. You would be well received at RC.

manicbeancounter
August 4, 2011 3:13 pm

Thanks Luboš for a well-thought-out article, nicely summarised by
“The “error of the measurement” of the warming trend is 3 times larger than the result!”
One of the implications of this wide variability, and the concentration of temperature measurements in a small proportion of the land mass (with very little from the oceans covering 70% of the globe) is that one must be very careful in the interpretation of the data. Even if the surface stations were totally representative and uniformly accurate (no UHI) and the raw data properly adjusted (Remember Darwin, Australia on this blog?), there are still normative judgements to be made to achieve a figure.
I have done some (much cruder) analysis comparing HADCRUT3 to GISSTEMP for the period 1880 to 2010, which helps illustrate these judgemental decisions.
1. The temperature series agree on the large fluctuations, with the exception of the post 1945 cooling – it happens 2 or 3 years later and more slowly in GISSTEMP.
2. One would expect greater agreement between the series in more recent years. But since 1997 the difference in temperature anomalies has widened by nearly 0.3 Celsius – GISSTEMP showing rapid warming and HADCRUT showing none.
3. If you take the absolute change in anomaly from month to month and average from 1880 to 2010, GISSTEMP is nearly double that of HADCRUT3 – 0.15 degrees v 0.08. The divergence in volatility reduced from 1880 to the middle of last century, when GISSTEMP was around 40% more volatile than HADCRUT3. But since then the relative volatility has increased. The figures for the last five years are respectively about 0.12 and 0.05 degrees. That is, GISSTEMP is around 120% more volatile than HADCRUT3.
This all indicates that there must be greater clarity in the figures. We need the temperature indices to be compiled by qualified independent statisticians, not by those who major in another subject. This is particularly true of the major measure of global warming, where there is more than a modicum of partisan elements.
I have illustrated with a couple of graphs at
http://manicbeancounter.wordpress.com/2011/08/04/a-note-on-hadcrut3-v-gisstemp/

1DandyTroll
August 4, 2011 4:32 pm

Wow, when I was in the coffee shops of Amsterdam, never did I imagine that the crystallized colorful reality was in essence some five thousand thermometers littering such a “small” place like earth!
Would 5 000 thermometers even be enough to give a real daily image of Siberia, I wonder?

bbbeard
August 4, 2011 4:41 pm

Complementarily, I should share a little study I did about five years ago. Without going into the long backstory, suffice it to say I did something similar to what Lubos did, except in the “date domain” rather than the spatial domain. In short, I wanted to know if every date of the year (e.g. the 4th of August) showed roughly the same rate of increase in temperature as every other date. I focused on a specific location (my home in Memphis, TN) in order to avoid all the nonsense and uncertainty about global averaging. So I downloaded temperature data for the Memphis airport for all the days from 1948 to 2006. I wasn’t really interested in the urban heat island effect, though that may affect the average trend, I was just curious about the date-to-date comparisons. Then, looking at each day of the year (including February 29th), I did the 366 linear regressions to find the slopes that represent the average rate of increase of the average daily temperature for each day of the year, and plotted up the results.
Steven Mosher will not be surprised to hear that the slopes follow a roughly Gaussian distribution, but even he might be surprised to learn that about a quarter of all the dates show a decline in temperature over the 59-year study period. You might be curious about which date shows the steepest decrease. Well, for Memphis, it turns out that the most rapidly cooling day of the year is Christmas day, which has shown an average decrease of about -0.17F per year for 1948-2006. I thought this was remarkable. I did some spot checks for other nearby locations, like Jackson, MS, and Little Rock, AR, and found essentially the same thing. So I started calling this the “Santa Claus effect”. I’ve always wondered if the decline had something to do with the obvious falloff in commercial activity on Christmas, but I haven’t pursued the issue.
Here is the plot I made five years ago:
http://bbbeard.org/MEM_XmasTemps.jpg
BBB
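For readers who want to try bbbeard’s per-date exercise on their own station, a sketch of the 366 regressions (hypothetical column names, and pandas rather than whatever tool was originally used):

```python
import numpy as np
import pandas as pd

def slope_per_calendar_date(df):
    """df: daily records with datetime column 'date' and temperature 'tavg'.
    Fit one straight line per calendar date (366 incl. Feb 29) and return
    the slope, i.e. the average warming/cooling rate for that date."""
    df = df.assign(year=df['date'].dt.year, md=df['date'].dt.strftime('%m-%d'))
    slopes = {}
    for md, g in df.groupby('md'):
        if len(g) >= 2:                              # need two points to fit
            slopes[md] = np.polyfit(g['year'], g['tavg'], 1)[0]  # deg/year
    return pd.Series(slopes)
```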

Malcolm Miller
August 4, 2011 5:07 pm

As I have pointed out before, the only way to measure the instantaneous ‘temperature’ of the Earth is to set up a bolometer – an all-wavelength radiation detector – at a sufficient distance in space to measure the effective temperature of the planet by observing half of it at a time and then studying the diurnal, monthly, yearly, and various other cycles. I think some people would be surprised at the T(eff) as revealed in all its variability by using the Stefan-Boltzmann law applied to the radiation flux. And no, I don’t think that Earth radiates as a ‘black body’.

jaymam
August 4, 2011 5:08 pm

Missing value codes
Back in 1970 the specification for a program that I was told to write said to use a year of 99 to indicate end of file. I protested but was told that it was unlikely that the system would still be going in the year 1999.
“missing data – indicated by the figure -99”
I find it amazing that someone decided to use -99 for missing data when that value can occur on Earth, unlikely but it does happen.
There appears to be no standard for indicating missing data. If there are many possible “missing code values”, a program may miss one and include wildly spurious data into the climate record. That may not be noticed because certain scientists average a whole lot of figures together.
Here I have GISS data that uses 999.9, and NIWA that uses –
(i.e. minus blank, an excellent option)
HARRY_READ_ME mentions missing value codes of -9999 and -999 and -999.00 and -99999 and 999999 and 8888888 and -10 and -7777777
e.g.
“CHERRAPUNJI, the wettest place in the world. So here, the high values are realistic. However I did notice that the missing value code was -10 instead of -9999!”
“LoadCTS multiplies non-missing lons by 0.1, so they range from -18 to +18 with missing value codes passing through AS LONG AS THEY ARE -9999. If they are -999 they will be processed and become -99.9. “

Berényi Péter
August 4, 2011 5:17 pm

“I wanted to know what are the actual temperature trends recorded at all stations – i.e. what is the statistical distribution of these slopes. Shawn had this good idea to avoid the computation of temperature anomalies (i.e. subtraction of the seasonally varied “normal temperature”): one may calculate the trends for each of the 12 months separately”.
It always bothered me a bit that a certain peculiarity of the Gregorian calendar we are using is never taken into account when calculating century scale temperature trends for each month of the year separately.
The problem with the old Julian calendar was that the long-term average length of its annual cycle was slightly longer than the tropical year, so the vernal equinox (when night and day have the same length in spring) slowly wandered back in the calendar, at an average rate of about 0.78 day/century. This is why Pope Gregory XIII skipped 10 days in 1582 (October 4 was followed by October 15) and established a new rule for leap years, saying years divisible by 100 but not by 400 are not leap years (even though they are divisible by 4). That brings the calendar much closer to reality in the long run.
However, year 2000 happened to be divisible by 400, so it was a leap year. It means for 199 years in a row (centered on 2000 AD) we are missing Gregorian correction and in this epoch (between 1900 and 2100 AD) our calendar works just like the old Julian one. It means vernal equinox shifts back in the calendar by about 1.5 days in two centuries (in a see-saw pattern due to slight overcorrection in every fourth year).
Now, few temperature time series go back to the 19th century (or earlier), therefore if you calculate monthly trends for each location, this shift can’t be ignored.
As much more land area is found in the Northern hemisphere and average density of stations is also higher there, it dominates the dataset in this respect. At places where temperature difference between winter and summer is high, average rate of warming during spring months (within the year) can be pretty high, sometimes as much as 1°C/day (but ~0.5°C/day quite often). It means for these months century scale rate of warming in our epoch is biased upward by several tenths of a degree due to the “Gregorian calendar effect”.
Of course it is just the opposite for autumn and it is entirely the other way around in the Southern hemisphere. But just remember how often one finds springtime warming rates highlighted in press releases (while ignoring fall) for locations in Europe or Northern America.
I believe this effect also explains the pattern in your “sketch of a basic table”, that is, rates for February-May being high while for September-October they are low.
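The drift rates quoted in this comment check out with simple arithmetic; a quick verification sketch:

```python
julian = 365.25                  # mean Julian year: leap day every 4 years
gregorian = 365.25 - 3 / 400     # Gregorian drops 3 leap days per 400 years
tropical = 365.2422              # approximate tropical year

print((julian - tropical) * 100)     # ~0.78 day/century: Julian drift
print((gregorian - tropical) * 100)  # ~0.03 day/century: Gregorian residual
# Between 1900 and 2100 no century correction applies (2000 was divisible
# by 400), so the calendar drifts at the Julian rate: ~1.55 days per 199 years.
```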

sky
August 4, 2011 5:24 pm

Regressional trends fitted to data with strong oscillatory components are very sensitive to record duration and start-stop times. Thus there’s no such thing as “the” trend at a particular station. The analysis might be improved by putting all stations on the same time interval. Also, it would be interesting to see a stratification according to population size. After all, outside the USA, Scandinavia and Australia, the GHCN database is largely urban.

timetochooseagain
August 4, 2011 5:44 pm

bbbeard-It’s not clear to me how you handled the fact that there will be only about one fourth as many February 29ths as other days of the year. Here’s a thought, rank days within each year, and ask what the trends are in nth warmest or coldest day from year to year. Might give some idea about how the distribution of temperatures is changing.

Richard S Courtney
August 4, 2011 5:45 pm

Berényi Péter:
Thankyou for your post at August 4, 2011 at 5:17 pm. I had failed to recognise the matter but it is obvious to me now you have explained it: i.e.
“However, year 2000 happened to be divisible by 400, so it was a leap year. It means for 199 years in a row (centered on 2000 AD) we are missing Gregorian correction and in this epoch (between 1900 and 2100 AD) our calendar works just like the old Julian one. It means vernal equinox shifts back in the calendar by about 1.5 days in two centuries (in a see-saw pattern due to slight overcorrection in every fourth year).
Now, few temperature time series go back to the 19th century (or earlier), therefore if you calculate monthly trends for each location, this shift can’t be ignored.”
As you say, when comparing seasonal or monthly hemispheric averages “this shift can’t be ignored” and it may also bias global data because the hemisphers differ in their trends.
However, considering the wide spread in the data reported by Lubos Motl, the effect you report makes no difference to what he reports in his above essay.
Thankyou. I like to learn.
Richard

Allen63
August 4, 2011 5:48 pm

Good post. As others have pointed out, it’s questionable whether temperature data genuinely says anything accurate about “global warming”. Still, there is a net trend.
I downloaded the data set. It seems to be monthly temperatures – probably daily or twice-daily temperatures averaged by some means, with missing days “filled in” by some means. That is, it’s manipulated data, not raw data (no disparagement meant to the original post, as the post is very interesting).
Is there a complete set of the true raw data (or a limited set – say, limited to the USA)? That is, the individual daily, twice-daily, or whatever readings – uncorrected. Does such a thing even exist anymore – or in digital form – or are all existing data sets “manipulated”? Where could I download it (assuming it’s available)? Thanks.

timetochooseagain
August 4, 2011 5:50 pm

Berényi Péter – Actually, there is at least one study that has examined the effect of “Gregorian calendar bias”:
R. S. Cerveny, B. M. Svoma, R. C. Balling, and R. S. Vose (2008), Gregorian calendar bias in monthly temperature databases , Geophys. Res. Lett. , 35 , L19706, doi:10.1029/2008GL035209.

August 4, 2011 5:56 pm

The size of areas near the poles is distorted by the Mercator (I assume) projection. Can the map be re-done on an equal area projection, like maybe the Lambert cylindrical equal-area projection?

cagw_skeptic99
August 4, 2011 6:04 pm

Actually mod, the author’s name does not appear at the top of the post.

SteveSadlov
August 4, 2011 6:04 pm

Here at the leading edge of North America it is cooling. It is undeniable. May or may not mean anything. We’ll see.

August 4, 2011 6:31 pm

timetochooseagain:
I did a regression on the 15 average-temperature days that were labeled Feb 29th; for every other date I had 59 data points. On general principles, having a quarter as many data points means the standard error is roughly twice as big, since it scales as 1/sqrt(N). But when I did this I didn’t bother to estimate the standard error of the regressed slope. If I were to do it today I would update through 2010 and include the standard errors.
I was a little worried about the precessional effects that Berenyi Peter referred to, in terms of the leap-year correction. In successive years, Jan 1st, for example, falls 1/4 day “later” than in the previous year, except following a leap year, when it backs up 3/4 of a day. The only way to handle this rigorously, I think, is to partition the set of dates using mod-4 arithmetic (Jan 1st of a leap year (“4k”), Jan 1st after a leap year (“4k+1”), then “4k+2” and “4k+3”). But it seems to me that the graph I produced shows so little correlation between successive dates, in terms of their regressed slopes, that this correction would add little to the analysis. At worst there is a tiny bit more uncertainty for each data point in the abscissa; partitioning the dates mod 4 would double the uncertainty on the ordinate. I suppose you could argue that when I do the regression I should add 0.2422*(Year-1948)-floor((Year-1948)/4) to each date – but I doubt that would change the regressed slopes by more than a small fraction of their uncertainty. If anyone can provide a rigorous formulation I’d be glad to listen.
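A minimal Mathematica sketch of the mod-4 partition and the date correction described above (hypothetical annual record; the 1948 epoch comes from the formula just quoted):

(* Julian-vs-tropical drift: (365.25 - 365.2422)*200 ~ 1.56 days per two centuries *)
(* the proposed fractional-day correction, relative to 1948 *)
dateShift[year_] := 0.2422 (year - 1948) - Floor[(year - 1948)/4];
(* partition years by leap-year class and fit each class separately *)
record = Table[{y, 5. + 0.01 (y - 1948) + RandomReal[{-1, 1}]}, {y, 1948, 2010}];
classes = GatherBy[record, Mod[First[#] - 1948, 4] &];
slopes = Coefficient[Fit[#, {1, y}, y], y] & /@ classes
(* applying the shift (in days, converted to years) barely moves the fitted slope *)
Coefficient[Fit[{First[#] + dateShift[First[#]]/365., Last[#]} & /@ record, {1, y}, y], y]

If the four class slopes agree within their uncertainties, the leap-year correction is indeed negligible for the regression.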

naturalclimate
August 4, 2011 6:32 pm

You know if you smooth out that data enough, you won’t have to use so many colors, and then you can just call it an uptrend and be done with it.

Bart
August 4, 2011 6:40 pm

Given the haphazard pattern of temperature readings, the average temperature should be calculated by fitting the data to an expansion in spherical harmonics, as is done for the Earth’s gravitational potential and other quantities.
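A minimal sketch of such a spherical-harmonic fit in Mathematica (hypothetical station list; the truncation degree lmax is an arbitrary choice, and Mathematica’s SphericalHarmonicY takes colatitude):

(* hypothetical stations: {latitude, longitude, trend in C/century} *)
stations = Table[{RandomReal[{-90, 90}], RandomReal[{-180, 180}], RandomReal[{-2, 3}]}, {200}];
lmax = 2; (* keep the degree low: station coverage is uneven *)
(* real spherical-harmonic basis evaluated at one station *)
basis[lat_, lon_] := Module[{th = (90. - lat) Degree, ph = lon Degree},
  Flatten@Table[Which[
      m == 0, Re@SphericalHarmonicY[l, 0, th, ph],
      m > 0, Sqrt[2.] Re@SphericalHarmonicY[l, m, th, ph],
      True, Sqrt[2.] Im@SphericalHarmonicY[l, -m, th, ph]],
    {l, 0, lmax}, {m, -l, l}]];
coeffs = LeastSquares[basis[#1, #2] & @@@ stations, stations[[All, 3]]];
globalMean = First[coeffs]/(2 Sqrt[Pi]) (* area-weighted mean = c00 times Y00 *)

One nice by-product: the l = 0 coefficient directly gives an area-weighted global mean, with no gridding step.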

DocMartyn
August 4, 2011 7:08 pm

Your Voronoi graph is quite simply the best visual representation of a complex dataset that I have ever seen; I say that as someone who has been in research for 20 years.
Quite brilliant – well done.

Richard Hill
August 4, 2011 7:12 pm

S Mosher said…
The real work is figuring out if there is anything that explains why some places cool while other places warm. Been looking for that since 2007 when I first did a version of the analysis presented here. Or conversely why some warm more than others (aside from polar amplification).

It is a brilliant graph. Lubos should get a prize.
If you narrow your eyes, can you see a trail of red up the Great Rift Valley of Africa and then spreading out over the seismically active areas left towards Italy and right to Iran?
I know humans can see a pattern where there is none, but Mosh’s comment on this would be valued.

timetochooseagain
August 4, 2011 7:14 pm

bbbeard – Thanks. I am currently looking at the daily variations in some data and need to figure out how to deal with leap years. You’ve got some interesting suggestions there that I hadn’t thought of.

Richard M
August 4, 2011 7:16 pm

Sounds like randomness in action. Should not be a surprise in a chaotic system. While we humans might think it should average out, the climate just plods along at its own pace. I’m not really sure there is much information to be mined here.

David Falkner
August 4, 2011 9:50 pm

Ok, I’ll finish reading in just a second. Just wanted to note for the record that the first graph makes my ocular faculties sweat like a whore in church.

David Falkner
August 4, 2011 10:11 pm

Wow, I wonder if Gavin et al. would be interested in providing possible physical reasons for 30% of the stations cooling? Especially since they seemed so dismissive of Essex et al.
http://www.realclimate.org/index.php/archives/2007/03/does-a-global-temperature-exist/
“The whole paper is irrelevant in the context of a climate change because it missed a very central point. CO2 affects all surface temperatures on Earth, and in order to improve the signal-to-noise ratio, an ordinary arithmetic mean will enhance the common signal in all the measurements and suppress the internal variations which are spatially incoherent (e.g. not caused by CO2 or other external forcings).”
The result that all temperatures on Earth are not equally affected doesn’t seem very suppressed. Or spatially incoherent. Ouch.

August 4, 2011 11:49 pm

timetochooseagain says:
August 4, 2011 at 5:50 pm
Berényi Péter – Actually there is at least one study that has examined the effect of “Gregorian calendar bias”

Thank you for bringing it to my attention; I was not aware of it.
Randall S. Cerveny, Bohumil M. Svoma, Robert C. Balling Jr. and Russell S. Vose (2008), Gregorian calendar bias in monthly temperature databases, Geophys. Res. Lett., 35, L19706, doi:10.1029/2008GL035209.
Is there a copy not hiding behind a paywall?
BTW, I wonder whether the BEST project will take this effect into account or ignore it, as all other global temperature analyses do. They have quite a lot of daily temperature data in their dataset, so at least in theory they could go for it. We will see, as all their data, algorithms and methods are supposed to be published online (sooner or later).
Also, in non-leap years January–June is three days shorter (181 days) than July–December (184 days), so any positive warming bias present in the first half of the year due to the calendar effect gets more weight in the annual average if the correction is ignored. And then, as you say, there is an overall hemispheric difference in trends, which makes things a bit worse.
Anyway, it would be nice to see this bias quantified properly.

Patrick Davis
August 5, 2011 1:56 am

Maybe slightly O/T, but Mombasa, Kenya today: 12 °C. One of the coldest days on record… so far.

Dr A Burns
August 5, 2011 3:13 am

So where did the IPCC get those very tight error bars?

Ryan
August 5, 2011 3:32 am

I notice that the Himalayas don’t appear to be getting any warmer. I wonder why all those Himalayan glaciers are melting? Oh, I forgot – they aren’t melting; somebody made that up.
I notice that most of the developed world, which produces all that CO2, doesn’t seem to be warming much at all. The most severe warming occurs where there aren’t many people – and even fewer thermometers, it seems.

August 5, 2011 3:56 am

Dear everyone, thanks for your kind words and interest!
Anyone who prefers a white-background, gadget-free, simple-design blog can bookmark the mobile version of my blog:
http://motls.blogspot.com/?m=1
Otherwise, one must be careful about the interpretation of “error margins”. The standard deviation of over 2 °C per century is the error of the temperature trend “at a random place on the globe”. However, the error in the trend of the global mean temperature is much smaller, because a quantity obtained by averaging N quantities with the same error margin has an error margin that is sqrt(N) times smaller.
So the increase of the global mean temperature over the last 100 years or so, if the local HadCRUT3 data are at least approximately right, is of course statistically significant. As the histograms show, the expected (but not guaranteed) increase of the global mean temperature in the next century is just not very useful for predicting what will happen at any particular place on the globe – which is a largely uncorrelated question. Just as 30% of places were cooling – on average – over the last 77 years, 30% of places may cool in the next 77 years.
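A quick numerical check of this sqrt(N) scaling, using ballpark figures from this thread (the roughly 2 °C/century spread from the histograms and the 0.5 °C/century mean trend quoted above) and treating the 5,000 stations as independent – which they are not, so the real error is somewhat larger:

sigma = 2.; n = 5000; mean = 0.5; (* C/century and station count, ballpark *)
stderr = sigma/Sqrt[n] (* about 0.028 C/century under independence *)
mean/stderr            (* roughly 18 standard errors away from zero *)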
There are many variations of this work one could do: computing area-weighted averages with gridding (or an equivalent of it); drawing the diagrams on the round surface of the globe rather than a simple linear longitude–latitude plot; and so on. But many of the calculations I did in Mathematica are pretty time-consuming – in both CPU time and human time.
I would like to hope that someone will convert the data to his or her preferred format and continue to analyze them in many other ways. There is very little doubt that most of the actual analyses of the data from the whole globe are done by people who call themselves “climate realists” or “skeptics”. But that doesn’t mean there aren’t crisper conclusions still to be extracted from similar data.
Yours
LM

Ryan
August 5, 2011 4:02 am

By the way, I would point out that these graphs are not based on data that could remotely be called “raw”. They are based on the monthly averages that appear in the HadCRUT database.
The graphs shown here are misleading because, although they give you a distribution, it is a distribution of averages, not the distribution of the raw readings from the thermometers. In other words, although the distributions show an s.d. that is several times larger than the difference of the mean from zero, this s.d. is not nearly as wide as it would be if the raw data had not been averaged first.
Let me put it another way – these graphs show the distribution of temperatures for the whole of July, i.e. with the effects of cloud, wind direction and wind speed smoothed out in a statistically inappropriate way. If you had the raw data you could plot graphs of the temperature trends for, say, the 1st of July. This would avoid doing any averaging, and would show that any trend present sits within a much wider distribution of temperatures than even these graphs suggest. In other words, you are looking for a trend due to CO2 underlying a considerable amount of noise – primarily caused by cloud and wind direction. Now, you could say that averaging is intended to filter out this noise – but it is not a statistically valid method of doing so. The whole of July could be cloudy, or the whole of July could be cloud-free, or it could alternate between cloudy and cloud-free days – in these cases averaging over a month would give very different results unrelated to CO2, because the impact of cloud is not actually being filtered out successfully by averaging. You could say that averaging over longer periods of time or over multiple sites removes this noise, but we can’t be sure of that, because we can’t be sure that the amount of cloud is not varying over time.
Another source of noise is the quantisation noise introduced by being able to read mercury thermometers only to the nearest 0.5 °C. Averaging obscures this: if you sum 31 temperature readings over a month and divide by 31, you will likely get a recurring decimal that makes the accuracy look much greater – but it isn’t. Graphs of temperature should only allow discrete values rounded to the nearest 0.5 °C for this reason.
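A minimal Mathematica sketch of this quantisation point (hypothetical daily means; all numbers illustrative):

(* 31 daily means, read to the nearest 0.5 C, then averaged over the month *)
true = RandomVariate[NormalDistribution[15., 3.], 31];
read = Round[true, 0.5];  (* what the thermometer actually reports *)
Mean[read]                (* prints with many decimals, suggesting spurious precision *)
Max[Abs[read - true]]     (* yet each individual reading is off by up to 0.25 C *)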

wayne Job
August 5, 2011 4:53 am

I read this entire post with all those commenting and enjoyed it much. I have to say, however, that Steve Mosher does come over as a boofhead; in America the term would be “egocentric”. Please, Mr Mosher, think seriously before you post. James Delingpole just had an interesting blog post on trolls; you may fit one of the descriptions.

gary gulrud
August 5, 2011 7:30 am

Impressive analysis – a straightforward dissection of the “average”, showing that the spread across the global station population undermines the use of a single global bell curve.

August 5, 2011 7:49 am

I didn’t get the time today to read all the comments here, but I want to say that what I am missing in the data is the increase in maxima versus that in minima, as I have examined in my (small) sample of 15 weather stations (where I concentrated on temperature changes on islands in the oceans, to make a better estimate of the global increase, seeing that 70% of the earth is water):
http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming
Yet I think this data must exist?

Doug Proctor
August 5, 2011 9:58 am

Your comment about the 30% of stations that show cooling means that the 70% carry the burden of the cooling 30%: of this 70%, what proportion are urban? “Global” is easily redefined as “regional”: NH vs SH, northern latitudes vs southern latitudes, etc. Each time a region is deselected, the warming becomes more concentrated.
The IPCC/Hansen/Gore like to use the statisticians (no offense) and computer models because with both we get great precision. Not accuracy, but precision – and it is precision that impresses and motivates the ruling classes. Hansen cites 0.01 °C of global warming as significant, while the ACCURACY of his temperature readings is always 0.5 °C (for historical, read-em-and-write records). At the same time, GISTemp is about 1.2 °C warmer than HadCRUT: the accuracy is lousy but the precision is fabulous.
In ancient times there were no computers, but the Greeks, Egyptians and Romans were able to create wonderful buildings, aqueducts, star charts and ephemerides because they focused on accuracy. When you are accurate, the fourth decimal place doesn’t mean much. When you aren’t accurate, no decimal place means anything.

Shawn Halayka
August 5, 2011 12:07 pm

Hi Gareth Phillips,
You might like Scientific American’s new sci-art blog Symbiartic:
http://blogs.scientificamerican.com/symbiartic/2011/07/07/science-art-dont-call-it-art/
Also, I am working on some different visualizations that will complement the ones made by Lubos. So, even more pics soon.

gnomish
August 5, 2011 6:56 pm

mosher says:
“If I tell you the average trend of 5000 stations is .5C per century, how many of you know without even looking that the distribution will have trends that are less than .5C? (everybody raise your hand)”
here’s the rub, teacher – if you tell me it is warming, then i expect to see a normal distribution of WARMING trends around a mean warming trend.
if you claim there is cooling, then i would expect to see the distribution reflect COOLING around a mean cooling trend.
if the distribution shows both cooling and warming, it is what one would expect of NO OVERALL TREND.
therefore, the claim that there is an overall warming trend during the specified period is not really supported by a distribution that shows cooling trends at 30% of the samples, is it?
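A one-line Mathematica check of what a wide distribution implies here, assuming (purely for illustration) station trends normally distributed with the 0.5 °C/century mean quoted above and the roughly 2 °C/century spread from the post:

CDF[NormalDistribution[0.5, 2.0], 0] (* about 0.40 of stations below zero *)

So both signs are expected even around a genuinely positive mean; the cooling fraction by itself cannot distinguish “no overall trend” from “small trend, large spread”.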

David Falkner
August 5, 2011 7:19 pm

Steven Mosher says:
August 4, 2011 at 12:56 pm
The real work is figuring out if there is anything that explains why some places cool while other places warm. Been looking for that since 2007 when I first did a version of the analysis presented here. Or conversely why some warm more than others (aside from polar amplification).
http://www.realclimate.org/index.php/archives/2007/03/does-a-global-temperature-exist/
Look for the paper referenced here.

August 6, 2011 10:40 am

I think most of you still don’t get it. If you want to prove that man influences climate and global temps, you must prove that minima (which occur during the night) pushed up average temps.
That happens (as far as I can see from my own sample) currently only in places where there is already a lot of volcanic activity going on, like Hawaii (so that is non-anthropogenic), or where man has become very effective at removing snow in the winter months (Holland, Norway, etc.).
I don’t know of any paper here on WUWT or on any other climate site that even looks closely at the trends in maxima and minima.
Except my own, of course.
http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming

George E. Smith
August 6, 2011 5:18 pm

Well, I am sure you won’t find anything about “Voronoi” graphs in any respectable treatise on the theory of sampled-data systems.
Sampled-data theory does allow for non-uniform sampling intervals; but it is the LONGEST interval between samples that must be shorter than half the period of the highest frequency present in the band-limited signal, not the SHORTEST sample interval; so non-uniform sampling requires more samples than the minimum, not fewer.
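A minimal Mathematica sketch of that longest-gap rule (hypothetical sample times and band limit):

(* the longest gap between samples must be shorter than half the period 1/fmax *)
times = {0., 0.3, 0.7, 1.9, 2.3};    (* hypothetical sample times, in years *)
fmax = 0.5;                          (* assumed band limit, cycles per year *)
Max[Differences[times]] < 1/(2 fmax) (* False: the 1.2-year gap violates the bound *)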

Shawn Halayka
August 7, 2011 1:10 am

George,
If the material you’re reading doesn’t cover the ubiquity and usefulness of Voronoi diagrams in Nature, then you should probably find extra material. Perhaps start with something easier to comprehend, like biology.

JeffT
August 10, 2011 12:39 pm

As a few commenters have noted, one expects a distribution of trends. How many of the negative (or positive) trends are significant? I fit the data for each month at each site to a linear trend and calculated the standard deviation of each slope. Of the 60,972 month–site combinations with sufficient data, 31% of the slopes are negative. The customary rule for statistical significance is that a quantity must differ from zero by more than twice its standard deviation (stdev). The negative slopes differ from zero by more than 2*stdev in only 5.4% of the cases; 69% of the slopes are positive, but only 27% exceed 2*stdev. Therefore, in 67.6% of the cases the slope is not statistically different from zero.
I also fit all the data at each site to a single trend, with a different offset for each month. There are 5,095 sites with sufficient data. With more data in each fit, the stdev is smaller and more slopes are statistically significant: 35% of the slopes are negative, 12% by more than 2*stdev; 65% are positive, 57% by more than 2*stdev. 23% of these year-round slopes are not statistically different from zero. Using all 12 months of data at each site at once improves the statistics.
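A minimal Mathematica sketch of this kind of fit – one common slope plus twelve month offsets – on a hypothetical single-site record (illustrative only, not the actual code used above):

(* monthly record: seasonal cycle, 1 C/century trend, noise *)
site = Flatten[Table[{y + (m - 1)/12., m,
      10. + 8. Cos[2 Pi (m - 1)/12] + 0.01 (y - 1950) + RandomReal[{-1, 1}]},
     {y, 1950, 2010}, {m, 1, 12}], 1];
lm = LinearModelFit[site, {t, mon}, {t, mon}, NominalVariables -> mon];
lm["ParameterTable"] (* the t row gives the year-round slope and its stdev *)

The slope counts as significant when its estimate exceeds twice its standard error – the same 2*stdev rule used above.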
Yes, there are statistically significant local variations, and some are negative. However, most sites have a statistically significant positive year-round trend, and only 12% have a statistically significant negative year-round trend.