Analysing the complete HadCRUT3 data yields some surprising results

From The Reference Frame, 30 July 2011 via the GWPF

HadCRUT3: 30% Of Stations Recorded A Cooling Trend In Their Whole History

The warming recorded by the HadCRUT3 data is not global. Despite the fact that the average station records 77 years of temperature history, 30% of the stations still end up with a cooling trend.

In a previous blog entry, I encouraged you to notice that the (nearly) raw data from the 5,000+ HadCRUT3 stations have been released.


Temperature trends (in °C/century, in terms of colors) over the whole history as recorded by roughly 5,000 stations included in HadCRUT3. To be discussed below.

The 5,113 files cover the whole world – mostly continents and some islands. I have fully converted the data into a format that is usable and understandable in Mathematica. There are some irregularities: missing longitudes, latitudes, or elevations for a small fraction of the stations. A very small number of stations have some extra entries, and I have classified these anomalies as well.

As Shawn has also noticed, the worst defect is associated with the 863rd (out of 5,113) station, in Jeddah, Saudi Arabia, which hasn’t submitted any data at all. For many stations, some months (and sometimes whole years) are missing, so you get -99 instead. This shouldn’t be confused with numbers like -78.9: believe me, stations in Antarctica have recorded average monthly temperatures as low as -78.9 °C. It’s not just a minimum experienced for an hour: it’s the monthly average.

Clearly, 110 °C of warming would be helpful over there.

I wanted to know what the actual temperature trends recorded at all the stations are – i.e. what the statistical distribution of these slopes looks like. Shawn had the good idea of avoiding the computation of temperature anomalies (i.e. the subtraction of the seasonally varying “normal” temperature): one may simply calculate the trends for each of the 12 calendar months separately.
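To make the procedure concrete, here is a minimal Python sketch of that per-month approach (the actual analysis was done in Mathematica; the file layout assumed below, with one row per year holding 12 monthly mean temperatures and missing values flagged as -99, is a simplification of the real station files):

```python
import numpy as np

MISSING = -99.0

def monthly_trends(years, temps):
    """temps: array of shape (n_years, 12) with monthly mean temperatures,
    missing months flagged as -99. Returns 12 slopes in °C/century."""
    slopes = np.full(12, np.nan)
    for m in range(12):
        t = temps[:, m]
        ok = t > MISSING + 0.5            # carefully omit the -99 entries
        if ok.sum() >= 2:                 # need at least two points for a line
            slope_per_year = np.polyfit(years[ok], t[ok], 1)[0]
            slopes[m] = slope_per_year * 100.0   # °C/year -> °C/century
    return slopes

def station_trend(years, temps):
    """Overall trend of a station: the average of its 12 monthly slopes."""
    return np.nanmean(monthly_trends(years, temps))
```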

To a very satisfactory accuracy, the temperature trend computed from the anomalies that include all the months is just the average of those 12 monthly trends. In all these calculations, you must carefully omit the missing data – indicated by the figure -99. But first, let me assure you that the stations are mostly “old enough”:

As you can see, a large majority of the 5,000 weather stations are 40–110 years old (if you consider endYear minus startYear). The average age is 77 years – partly because a nonzero number of stations have more than 250 years of data. So it’s not the case that the “bizarre” trends arise mainly from a very small number of short-lived, young stations.

Following Shawn’s idea, I computed the 12 histograms for the overall historical warming trends corresponding to the 12 months. They look like this:

The 12 monthly histograms of the station warming trends. Click to zoom in.

You may be irritated that the first histogram looks much broader than e.g. the fourth one, and you may start to wonder why that is so. In the end, you will realize that it’s just an illusion – the visual difference arises because the scale on the y-axis is different, and it’s different because a single “central bin” in the middle can reach a much higher maximum than two central bins sharing the peak. 😉
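A quick toy check of this binning effect (not part of the original analysis): histogram one and the same normal sample twice with the same bin width, once with a bin centered on the mean and once with a bin edge sitting at the mean. The peak bin is noticeably taller in the first case even though the data are identical:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=50_000)

# Same data, same bin width (2.0); only the placement of the bin edges differs.
edges_one_central_bin  = np.arange(-9.0, 10.0, 2.0)   # the mean sits in the middle of a bin
edges_two_central_bins = np.arange(-10.0, 11.0, 2.0)  # the mean sits on a bin edge

peak_one = np.histogram(x, bins=edges_one_central_bin)[0].max()
peak_two = np.histogram(x, bins=edges_two_central_bins)[0].max()

print(peak_one, peak_two)   # the single central bin reaches a much higher maximum
```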

This insight is easily verified if you actually sketch a basic table for these 12 histograms:


The columns indicate the month, starting from January; the number of stations that yielded legitimate trends for that month; the average trend over those stations for the given month, in °C/century; and the standard deviation – the width of the histogram (but see the correction at the end of the post: this last column is really the RMS).
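Here is a hedged sketch of how such a table can be produced once the per-station, per-month slopes have been collected into a flat table (the DataFrame layout with columns “station”, “month” and “trend” is an assumption, not the format of the original Mathematica notebook):

```python
import pandas as pd

def monthly_summary(trends: pd.DataFrame) -> pd.DataFrame:
    """trends: one row per (station, month) with the fitted slope in °C/century;
    rows where the fit was impossible (too much missing data) are simply absent.
    Returns the station count, mean trend and standard deviation per month."""
    return (trends.groupby("month")["trend"]
                  .agg(stations="count", mean_trend="mean", std_dev="std")
                  .round(2))
```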

You may actually see that September (closely followed by October) saw the slowest warming trend at these 5,000 stations – about 0.5 °C per century – while February (closely followed by March) had the fastest trend, 1.1 °C per century or so. The monthly trends scatter somewhat randomly around a ballpark value of 0.7 °C per century, but the trend seems to be a smoother, sine-like function of the month rather than white noise.

At any rate, it’s untrue that the 0.7 °C of warming in the last century is a “universal” number. In fact, you get a different figure for each month, and the maximum is more than twice the minimum. The warming trends depend hugely on both the place and the month.

The standard deviations of the temperature trend (evaluated for a fixed month of the year but over the statistical ensemble of all the legitimate weather stations) go from 2.14 °C per century in September to 2.64 °C in February – the same winners and losers! The difference is much smaller than the huge “apparent” difference of the widths of the histograms that I have explained away. You may say that the temperatures in February tend to oscillate much more than those in September because there’s a lot of potential ice – or missing ice – in the dominant Northern Hemisphere. The ice-albedo feedback and other ice-related effects amplify the noise – as well as the (largely spurious) “trends”.

Finally, you may combine all the monthly trends in a huge melting pot. You will obtain this beautiful Gauss-Lorentz hybrid bell curve:


It’s a histogram containing 58,579 monthly/local trends – trends faster than a certain large bound were omitted, but you can see that they were a small fraction anyway. The curve may be approximated by a normal distribution with an average trend of 0.76 °C per century – note that many stations are only 40 years old or so, which is why they may see a slightly faster warming. However, this number is far from universal over the globe. In fact, the Gaussian has a standard deviation of 2.36 °C per century.

The “error of the measurement” of the warming trend is 3 times larger than the result!

If you ask a simple question – how many of the 58,579 trends, each determined by a month and a place (a weather station), are negative, i.e. cooling trends – you will find that 17,774 of them, i.e. 30.3 percent, are. Even if you compute the average trend over all months for each station, you get very similar results; after all, the trends for a given station don’t depend on the month too much. It remains true that roughly 30% of the weather stations recorded a cooling trend in the monthly anomalies over their whole record.
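A sketch of that counting step, again using the assumed flat table of per-station, per-month slopes:

```python
import pandas as pd

def cooling_fractions(trends: pd.DataFrame) -> tuple[float, float]:
    """Fraction of negative monthly/local trends, and fraction of stations
    whose average trend over the 12 months is negative."""
    frac_monthly = (trends["trend"] < 0).mean()
    station_means = trends.groupby("station")["trend"].mean()
    frac_stations = (station_means < 0).mean()
    return frac_monthly, frac_stations

# The first number comes out as 17,774 / 58,579 = 30.3% in the article,
# and the second is very similar.
```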

Finally, I will repeat the same Voronoi graph we saw at the beginning (where I have used sharper colors because I redefined the color function from “x” to “tanh(x/2)”):



Each area is assigned to its nearest weather station – that’s what the term “Voronoi graph” means. The color is chosen according to a temperature color scheme in which the quantity determining the color is the overall warming (+, red) or cooling (−, blue) trend recorded at the given station.
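A rough Python sketch of how such a map can be built (the original figure was produced in Mathematica; coloring a latitude/longitude grid by its nearest station is equivalent to coloring Voronoi cells, and the tanh(x/2) squashing mentioned above is applied before mapping trends to colors):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import cKDTree

def voronoi_trend_map(lons, lats, trends, resolution=0.5):
    """Color every grid cell by the trend of its nearest station.
    A crude flat lat/lon nearest-neighbour search, fine for an illustrative map."""
    glon, glat = np.meshgrid(np.arange(-180, 180, resolution),
                             np.arange(-90, 90, resolution))
    tree = cKDTree(np.column_stack([lons, lats]))
    _, idx = tree.query(np.column_stack([glon.ravel(), glat.ravel()]))
    field = np.tanh(np.asarray(trends)[idx] / 2.0).reshape(glon.shape)  # tanh(x/2) color function
    plt.pcolormesh(glon, glat, field, cmap="coolwarm", vmin=-1, vmax=1, shading="auto")
    plt.colorbar(label="tanh(trend / 2), trend in °C/century")
    plt.xlabel("longitude")
    plt.ylabel("latitude")
    plt.show()
```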

It’s not hard to see that the number of places with a mostly blue color is substantial. The cooling stations are partly clustered although there’s still a lot of noise – especially at weather stations that are very young or short-lived and closed.

As far as I remember, this is the first time I have been able to quantitatively calculate the actual local variability of the global warming rate. Just as I expected, it is huge – and comparable to some of my rougher estimates. Even though the global average yields an overall positive temperature trend – a warming – it is far from true that this warming trend appears everywhere.

In this sense, the warming recorded by the HadCRUT3 data is not global. Despite the fact that the average station records 77 years of temperature history, 30% of the stations still end up with a cooling trend. The warming at a given place is 0.75 ± 2.35 °C per century.

If the rate of warming in the coming 77 years or so were analogous to the previous 77 years, a given place would still have a 30% probability of cooling down – judging by the linear regression – over those future 77 years! However, it’s also conceivable that the noise is so substantial and the sensitivity so low that once the weather stations add 100 years to their record, 70% of them will actually show a cooling trend.

Even if you imagine that the warming rate in the future will be 2 times faster than it was in the last 77 years (on average), it would still be true that over the next 40 years or so, i.e. by 2050, almost one third of the places on the globe will experience a cooling relative to 2010 or 2011! So forget about the Age of Stupid doomsday scenario around 2055: it’s more likely than not that more than 25% of places will actually be cooler in 2055 than in 2010.
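A back-of-the-envelope check of that percentage (my own rough estimate, assuming that future local trends are normally distributed with the historical station-to-station spread of about 2.35 °C/century around a doubled mean of about 1.5 °C/century):

```python
from scipy.stats import norm

# P(local trend < 0) under a normal approximation with doubled mean warming
# and the same station-to-station spread as in the historical record.
p_cooling = norm.cdf(0.0, loc=2 * 0.75, scale=2.35)
print(round(p_cooling, 2))   # ~0.26, i.e. roughly a quarter of the places
```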

Isn’t it remarkable? There is nothing “global” about the warming we have seen in the recent century or so.

The warming vs. cooling depends on the place (as well as the month, as I mentioned), and the warming places only have a 2-to-1 majority while the cooling places are a sizable minority. Of course, if you calculate the change of the global mean temperature, you get a positive sign – you had to get one of the signs because an exactly zero result is infinitely unlikely. But the actual change of the global mean temperature over the last 77 years (on average) is so tiny that the place-dependent noise still safely beats the “global warming trend”, yielding an ambiguous sign of the temperature trend that depends on the place.

Imagine, just for the sake of argument, that any change of the temperature (calculated as a trend from linear regression) is bad for every place on the globe. It’s not true, but just imagine it. Then it would be a good idea to reduce the temperature change between now and, e.g., the year 2087.

Now suppose all places on the planet pay billions for special projects to help cool the globe. However, 30% of the places will find out in 2087 that they have actually made their problem worse, because they will have experienced a cooling and they will have helped to make that cooling even worse! 😉

Because of this subtlety, it would be obvious nonsense to try to cool the globe down even if the global warming mattered, because it’s extremely far from certain that cooling is what you would need to regulate the temperature at a given place. The regional “noise” is far larger than the trend of the global average, so every single place on Earth can neglect the changes of the global mean temperature when it wants to know the future change of its local temperature.

The temperature changes either fail to be global or they fail to be warming. There is no global warming – this term is just another name for a pile of feces.

And that’s the memo.

UPDATE:

EarlW writes in comments:

Luboš Motl has posted an update with new analysis over shorter timescales that is interesting. Also, he posts a correction showing that he calculated the RMS instead of Stand Dev for the error.

Wrong terminology in all figures for the standard deviation

Bill Zajc has discovered an error that affects all values of the standard deviation indicated in both articles. What I called “standard deviation” was actually the “root mean square”, RMS. If you want to calculate the actual value of SD, it is given by

SD² = RMS² − ⟨TREND⟩²

In the worst cases, those with the highest ⟨TREND⟩/RMS, this corresponds to a nearly 10% error: for example, 2.35 drops to 2.2 °C / century or so. My sloppy calculation of the “standard deviation” was of course assuming that the distributions had a vanishing mean value, so it was a calculation of RMS.

The error of my “standard deviation” for the “very speedy warming” months is sometimes even somewhat larger than 10%. I don’t have the energy to redo all these calculations – it’s very time-consuming and CPU-time-consuming. Thanks to Bill.
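Applying that correction to the overall numbers quoted above gives a quick check (the exact corrected values would require redoing the full calculation):

```python
import math

rms = 2.35          # the value originally reported as the "standard deviation"
mean_trend = 0.76   # °C/century

sd = math.sqrt(rms**2 - mean_trend**2)
print(round(sd, 2))  # ~2.22 °C/century, i.e. "2.35 drops to 2.2 ... or so"
```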

http://motls.blogspot.com/2011/08/hadcrut3-31-of-stations-saw-cooling.html



111 Comments
huishi
August 4, 2011 11:56 am

“Prof Richard Lindzen”
Ah, he is one of those deniers from MIT! I bet Exxon funds all his girls, … er, parties, no, … I mean research!
(just kidding, please don’t have MIT sue me)

Louis Hooffstetter
August 4, 2011 11:59 am

Back in 2009, in a statement on its website, the CRU said: “We do not hold the original raw data but only the value-added (quality controlled and homogenised) data.”
Is this the value-added data we’re talking about or something else?

Kelvin Vaughan
August 4, 2011 12:00 pm

70% of the world’s population live in urban areas and 30% in rural areas.
In 1950 it was the reverse. http://en.wikipedia.org/wiki/Urbanization
Towns are getting hotter; the hot air rises higher into the atmosphere than it used to, cools down more because of the extra height it rises, and falls to earth in the rural areas, which are now colder than they were.
(Convection)
That’s also why part of the USA is having a heat wave while the other part is having a cold wave.

August 4, 2011 12:04 pm

Henry
I think to try and get a true global average (or at least a good estimate of that average) one should try and balance all stations
latitude wise
as I have done here
http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming

John B
August 4, 2011 12:12 pm

Smokey says:
August 4, 2011 at 11:33 am
John B,
Why don’t you ask the good professor yourself?
——————–
I asked about evidence, not a c.v. Maybe there is no evidence, just an unsubstantiated claim. Oh, but I forgot, in your world “skeptics” are allowed to do that.

EarlW
August 4, 2011 12:14 pm

Luboš Motl has posted an update with new analysis over shorter timescales that is interesting. Also, he posts a correction showing that he calculated the RMS instead of Stand Dev for the error.

Wrong terminology in all figures for the standard deviation
Bill Zajc has discovered an error that affects all values of the standard deviation indicated in both articles. What I called “standard deviation” was actually the “root mean square”, RMS. If you want to calculate the actual value of SD, it is given by
SD² = RMS² − ⟨TREND⟩²
In the worst cases, those with the highest ⟨TREND⟩/RMS, this corresponds to a nearly 10% error: for example, 2.35 drops to 2.2 °C / century or so. My sloppy calculation of the “standard deviation” was of course assuming that the distributions had a vanishing mean value, so it was a calculation of RMS.
The error of my “standard deviation” for the “very speedy warming” months is sometimes even somewhat larger than 10%. I don’t have the energy to redo all these calculations – it’s very time-consuming and CPU-time-consuming. Thanks to Bill.

http://motls.blogspot.com/2011/08/hadcrut3-31-of-stations-saw-cooling.html

Oscar Bajner
August 4, 2011 12:20 pm

Thank you Anthony, for reposting this from The Reference Frame.
I enjoy the wicked wit in lubo’s posts, but the colour scheme / background thing makes me psychotic.
This was way easier to read on WUWT.
REPLY: Yeah, same here. I only found out about it via the GWPF repost, TRF is very very hard on the eyes and sometimes locks up browsers with all the embedded scripting. Lubos probably turns off more people than he gains by color scheme, more so than by content. – Anthony

Richard S Courtney
August 4, 2011 12:36 pm

John B:
Contrary to your suggestion (at August 4, 2011 at 11:25 am ), Richard Lindzen does NOT make an “extraordinary claim” when he says;
“There is ample evidence that the Earth’s temperature as measured at the equator has remained within +/- 1°C for more than the past billion years. Those temperatures have not changed over the past century.”
A negative feedback prevents tropical ocean surface temperatures rising above 305K (i.e. present maximum ocean surface temperature). This has been confirmed by several studies and was first discovered in 1991:
Ref. Ramanathan & Collins, Nature, v351, 27-32 (1991)
Simply, sea surface temperature at the equator is bumping against an upper limit.
Glaciation does not reach the equator during ice ages, so it would be surprising if mean equatorial temperature were ever to vary by much.
Richard

Richard S Courtney
August 4, 2011 12:39 pm

John B:
My post crossed your later one.
In the light of my post, and the offensive nature of your post at August 4, 2011 at 12:12 pm, please justify your silly assertion that Lindzen made an “extraordinary claim” and apologise for your uncouth behaviour.
Richard

Michael J. Dunn
August 4, 2011 12:50 pm

There are established techniques for determining whether one probability distribution is statistically different from another (to some criterion). It would be interesting to posit a second distribution with the same standard deviation, but zero mean (null hypothesis for warming), and see if the two are statistically distinguishable. It is sometimes possible in a measurement series that a non-zero mean will result from finite sampling of a zero-mean phenomenon.
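A minimal sketch of such a comparison (an illustration with assumed inputs, namely a flat array of per-station overall trends, rather than anything from the original analysis):

```python
import numpy as np
from scipy import stats

def mean_trend_vs_zero(trends):
    """One-sample t-test of the station trends against a zero-mean null."""
    trends = np.asarray(trends, dtype=float)
    t_stat, p_value = stats.ttest_1samp(trends, popmean=0.0)
    return t_stat, p_value

# Caveat: with ~5,000 stations the standard error of the mean is tiny
# (about 2.35 / sqrt(5000) ≈ 0.03 °C/century), so even a small nonzero mean is
# easily distinguishable from zero -- provided the stations were statistically
# independent, which spatially correlated stations are not.
```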

August 4, 2011 12:56 pm

“Analysing the complete hadCRUT yields some surprising results”
I suppose you could argue that it’s surprising for people who haven’t followed the work that SteveMc did, that zeke did, that tonyb did. We have known for some time that the distribution of trends is roughly normal. That would be what you would guess before you even saw the data. Non-normality would be surprising. Here’s a brain buster. If I tell you the average trend of 5000 stations
is .5C per century, how many of you know without even looking that the distribution will have trends that are less than .5C? (everybody raise your hand) And since you know (before looking) that polar amplification can be 4X what you see at the equator, that should tell you that the high tail should be at least 2C (again, without even looking at the data), and if the high tail is at 2C then the low tail is bound to be negative (again, without even looking at the data one just expects this).
And if you had randomly selected subsamples of 500 stations (like zeke recently did, and like I did a while back) and found that the answer doesn’t change, that would also give you a clue that the trend data was roughly normally distributed. And then of course if it’s roughly normalish you kinda know there will be tails that are really positive and tails that are negative. So not surprising. If you think about it, not surprising at all. Predictable in fact.
The real work is figuring out if there is anything that explains why some places cool while other places warm. Been looking for that since 2007, when I first did a version of the analysis presented here. Or conversely why some warm more than others (aside from polar amplification).
REPLY: Sheesh. OK then we’ll just shut the blog off then Mosh, we don’t want to have new people learn about anything new, old, normal then, or discuss it. You are starting to sound like Gavin ;-). /sarc On the plus side, yes figuring out why some have warmed and others have not is the nugget. – Anthony

Editor
August 4, 2011 12:56 pm

This is exactly what Verity Jones and myself found last September in an article carried here
http://wattsupwiththat.com/2010/09/04/in-search-of-cooling-trends/
I would say around 25% of the station database are static over the extended period, with the remainder showing a small to fairly large warming trend; many of the latter are in urbanised areas and many more are no longer recording their original micro climate, i.e. they have moved at least once.
Personally, I think the term Global temperature is completely meaningless, as is the notion of Global warming.
tonyb

August 4, 2011 1:20 pm

Steven Mosher says:
“The real work is figuring out if there is anything that explains why some places cool while other places warm. Been looking for that since 2007 when I first did a version of the analysis presented here.”
Prof Lindzen gives a very good explanation. The climate is never static:
The earth is never exactly in equilibrium. The motions of the massive oceans where heat is moved between deep layers and the surface provides variability on time scales from years to centuries.

August 4, 2011 1:54 pm

Steven Mosher-I get that you see a normal distribution of trends as a perfectly reasonable thing to expect statistically. But does it make sense physically, if current understanding of climate by “mainstream science” is to be believed? What kind of distribution of trends is found in climate models? Is it also normal? This may not be a “surprising” finding but it may be an important point. “Move along, nothing to see here” is not a reasonable reaction. A reasonable reaction would be to ask whether this is compatible with the “warmer” point of view or not. I take it you think it is. I want to see proof that it is or isn’t before jumping to such a conclusion.

John Whitman
August 4, 2011 2:14 pm

Luboš,
Thanks. It was highly educational for me.
Anthony and all, you are great.
Mosh – don’t be grumpy. Shall I sing “Its a Wonderful World” for you? Can you whistle in accompaniment? : )
John

Paul Irwin
August 4, 2011 2:18 pm

and the rest of the 70% not cooling are at airports, urban areas, or adjacent to and downwind of urban areas, eh? or in al gore’s 3000 sf living room?

Kev-in-UK
August 4, 2011 2:26 pm

Of course a global mean temp is meaningless – because spatial coverage is unlikely to be anything like enough to be representative… this is, of course, a little fact the team like to dismiss when using such datasets for their work!
Unfortunately, however, we must accept that that is the only real data we have (for a significant time period) and we just have to make do with it… but that doesn’t detract from the basic argument of whether any observed temp changes are natural or anthropogenic. Clearly, urban temp increases must be anthropogenic for obvious reasons and I believe all urban stations should essentially be ignored!
In respect of the released data, I would be interested in only the genuine rural stations being analysed on their own. Then, perhaps, those stations could be individually appraised for quality – and then we may actually have some valid base data. I don’t accept that the UHI adjustment is acceptable data manipulation, as it is essentially ‘guesswork’, and I don’t subscribe to the ‘gridding’ methodology in obtaining a ‘global’ temp. As I see it – and I accept I may be alone here – the only decent data is rural, fully quality checked and assured, to be used as a standalone ‘point’ illustration of a small piece of the ‘climate’. Mixing them all together with all the fancy statistics does not make sense to me… IMHO, the summary result is no better than having, say, an average human height for the world – when we all know that certain Asian countries have shorter folk, etc., which in their own ‘local’ area is perfectly normal!
Obviously, the manufacturing of a global temp anomaly was required to ‘prove’ the problem to the world – but how can it ever be trusted given the inadequacies of the base data and statistical analysis?

August 4, 2011 2:42 pm

“It is a travesty that we cannot point to any warming lately”

gnomish
August 4, 2011 2:50 pm

the average human has one ovary and one testicle.
does it blend? no!

Green Sand
August 4, 2011 2:55 pm

Steven Mosher says:
August 4, 2011 at 12:56 pm
“Analysing the complete hadCRUT yields some surprising results”

————————————————————————————————
Steve, have you therefore had access to the HadCRUT3 data prior to this release?

James Allison
August 4, 2011 3:08 pm

Steven Mosher says:
August 4, 2011 at 12:56 pm
Your sarcastic tone puts me off reading your comments. You would be well received at RC.

manicbeancounter
August 4, 2011 3:13 pm

Thanks Luboš for a well-thought out article, and nicely summarised by
“The “error of the measurement” of the warming trend is 3 times larger than the result!”
One of the implications of this wide variability, and the concentration of temperature measurements in a small proportion of the land mass (with very little from the oceans covering 70% of the globe) is that one must be very careful in the interpretation of the data. Even if the surface stations were totally representative and uniformly accurate (no UHI) and the raw data properly adjusted (Remember Darwin, Australia on this blog?), there are still normative judgements to be made to achieve a figure.
I have done some (much cruder) analysis comparing HADCRUT3 to GISSTEMP for the period 1880 to 2010, which helps illustrate these judgemental decisions.
1. The temperature series agree on the large fluctuations, with the exception of the post 1945 cooling – it happens 2 or 3 years later and more slowly in GISSTEMP.
2. One would expect greater agreement with recent data in more recent years. But since 1997 the difference in temperature anomalies has widened by nearly 0.3 celsius – GISSTEMP showing rapid warming and HADCRUT showing none.
3. If you take the absolute change in anomaly from month to month and average from 1880 to 2010, GISSTEMP is nearly double that of HADCRUT3 – 0.15 degrees v 0.08. The divergence in volatility reduced from 1880 to the middle of last century, when GISSTEMP was around 40% more volatile than HADCRUT3. But since then the relative volatility has increased. The figures for the last five years are respectively about 0.12 and 0.05 degrees. That is, GISSTEMP is around 120% more volatile than HADCRUT3.
This all indicates that there must be greater clarity in the figures. We need the temperature indices to be compiled by qualified independent statisticians, not by those who major in another subject. This is particularly true of the major measure of global warming, where there is more than a modicum of partisan elements.
I have illustrated with a couple of graphs at
http://manicbeancounter.wordpress.com/2011/08/04/a-note-on-hadcrut3-v-gisstemp/

August 4, 2011 4:32 pm

Wow, when I was in the coffee shops of Amsterdam never did I imagine that the crystalized colorful reality was in essence some five thousand thermometers littering such a “small” place like earth!
Would 5 000 thermometers even be enough to give a real daily image of Siberia, I wonder?

August 4, 2011 4:41 pm

Complementarily, I should share a little study I did about five years ago. Without going into the long backstory, suffice it to say I did something similar to what Lubos did, except in the “date domain” rather than the spatial domain. In short, I wanted to know if every date of the year (e.g. the 4th of August) showed roughly the same rate of increase in temperature as every other date. I focused on a specific location (my home in Memphis, TN) in order to avoid all the nonsense and uncertainty about global averaging. So I downloaded temperature data for the Memphis airport for all the days from 1948 to 2006. I wasn’t really interested in the urban heat island effect, though that may affect the average trend, I was just curious about the date-to-date comparisons. Then, looking at each day of the year (including February 29th), I did the 366 linear regressions to find the slopes that represent the average rate of increase of the average daily temperature for each day of the year, and plotted up the results.
Steven Mosher will not be surprised to hear that the slopes follow a roughly Gaussian distribution, but even he might be surprised to learn that about a quarter of all the dates show a decline in temperature over the 59-year study period. You might be curious about which date shows the steepest decrease. Well, for Memphis, it turns out that the most rapidly cooling day of the year is Christmas Day, which has shown an average trend of about -0.17 °F per year for 1948-2006. I thought this was remarkable. I did some spot checks for other nearby locations, like Jackson, MS, and Little Rock, AR, and found essentially the same thing. So I started calling this the “Santa Claus effect”. I’ve always wondered if the decline had something to do with the obvious falloff in commercial activity on Christmas, but I haven’t pursued the issue.
Here is the plot I made five years ago:
http://bbbeard.org/MEM_XmasTemps.jpg
BBB

Malcolm Miller
August 4, 2011 5:07 pm

As I have pointed out before, the only way to measure the instantaneous ‘temperature’ of the Earth is to set up a bolometer – an all-wavelength radiation detector – at a sufficient distance in space to measure the effective temperature of the planet by observing half of it at a time and then studying the diurnal, monthly, yearly, and various other cycles. I think some people would be surprised at the T(eff) as revealed in all its variability by using the Stefan-Boltzmann law applied to the radiation flux. And no, I don’t think that Earth radiates as a ‘black body’.