From The Reference Frame, 30 July 2011 via the GWPF
HadCRUT3: 30% Of Stations Recorded A Cooling Trend In Their Whole History
The warming recorded by the HadCRUT3 data is not global. Even though the average station records 77 years of temperature history, 30% of the stations still manage to end up with a cooling trend.
In a previous blog entry, I encouraged you to notice that HadCRUT3 has released the (nearly) raw data from their 5,000+ stations.
Temperature trends (in °C/century, in terms of colors) over the whole history as recorded by roughly 5,000 stations included in HadCRUT3. To be discussed below.
The 5,113 files cover the whole world – mostly continents and some islands. I have fully converted the data into a format that is usable and understandable in Mathematica. There are some irregularities: longitudes, latitudes, or heights are missing for a small fraction of the stations. A very small number of stations have extra entries, and I have classified these anomalies as well.
As Shawn has also noticed, the worst defect is associated with the 863rd (out of 5,113) station, in Jeddah, Saudi Arabia. This one hasn’t submitted any data. For many stations, some months (and sometimes whole years) are missing, so you get -99 instead. This shouldn’t be confused with numbers like -78.9: believe me, stations in Antarctica have recorded average monthly temperatures as low as -78.9 °C. It’s not just a minimum experienced for an hour: it’s the monthly average.
Clearly, 110 °C of warming would be helpful over there.
I wanted to know the actual temperature trends recorded at all stations – i.e. the statistical distribution of these slopes. Shawn had the good idea to avoid the computation of temperature anomalies (i.e. the subtraction of the seasonally varying “normal temperature”): one may calculate the trends for each of the 12 months separately.
To a very satisfactory accuracy, the temperature trend for the anomalies that include all the months is just the average of those 12 trends. In all these calculations, you must carefully omit all the missing data – indicated by the value -99. But first, let me assure you that the stations are mostly “old enough”:
As you can see, a large majority of the 5,000 weather stations are 40-110 years old (if you consider endYear minus startYear). The average age is 77 years – partly because you can find a nonzero number of stations with more than 250 years of data. So it’s not true that you get many “bizarre” trends just because they arise from a very small number of short-lived, young stations.
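The per-month trend calculation described above can be sketched as follows. This is an illustrative Python version, not the original Mathematica code; the array layout (one row per year, one column per month) and the function name are assumptions, not the actual HadCRUT3 file format:

```python
import numpy as np

def monthly_trends(years, temps, missing=-99.0):
    """Least-squares trend (in °C/century) for each of the 12 months.

    `temps` is an (n_years, 12) array of monthly mean temperatures;
    entries equal to `missing` (-99 in the HadCRUT3 files) are skipped.
    """
    trends = []
    for m in range(12):
        col = temps[:, m]
        ok = col != missing          # carefully omit the missing data
        if ok.sum() < 2:
            trends.append(np.nan)    # not enough points for a slope
            continue
        slope = np.polyfit(years[ok], col[ok], 1)[0]  # °C per year
        trends.append(100.0 * slope)                  # °C per century
    return np.array(trends)

# The station's overall trend is then, to good accuracy, the average
# of the 12 monthly trends:
#   overall = np.nanmean(monthly_trends(years, temps))
```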
Following Shawn’s idea, I computed the 12 histograms for the overall historical warming trends corresponding to the 12 months. They look like this:
Click to zoom in.
You may be puzzled that the first histogram looks much broader than, e.g., the fourth one and start to wonder why. In the end, you will realize that it’s just an illusion – the visual difference arises because the scale on the y-axis is different, and it’s different because a histogram with just one “central bin” in the middle can reach a much higher maximum than one with two central bins. 😉
This insight is easily verified if you actually sketch a basic table for these 12 histograms:
The columns indicate the month, starting from January; the number of stations that yielded legitimate trends for the month; the average trend for the stations and the given month, in °C/century; and the standard deviation – the width of the histogram.
You may actually see that September (closely followed by October) saw the slowest warming trend at these 5,000 stations – about 0.5 °C per century – while February (closely followed by March) had the fastest trend, 1.1 °C per century or so. The monthly trends scatter somewhat randomly around 0.7 °C per century, but the trend as a function of the month looks more like a continuous, sine-like curve than white noise.
At any rate, it’s untrue that the 0.7 °C of warming in the last century is a “universal” number. In fact, for each month you get a different figure, and the maximum one is more than 2 times larger than the minimum one. The warming trends depend strongly on both the place and the month.
The standard deviations of the temperature trend (evaluated for a fixed month of the year but over the statistical ensemble of all the legitimate weather stations) go from 2.14 °C per century in September to 2.64 °C in February – the same winners and losers! The difference is much smaller than the huge “apparent” difference of the widths of the histogram that I have explained away. You may say that the temperatures in February tend to oscillate much more than those in September because there’s a lot of potential ice – or missing ice – on the dominant Northern Hemisphere. The ice-albedo feedback and other ice-related effects amplify the noise – as well as the (largely spurious) “trends”.
Finally, you may combine all the monthly trends in a huge melting pot. You will obtain this beautiful Gauss-Lorentz hybrid bell curve:
It’s a histogram containing 58,579 monthly/local trends – some trends that were faster than a certain large bound were omitted but you see that it was a small fraction, anyway. The curve may be imagined to be a normal distribution with the average trend of 0.76 °C per century – note that many stations are just 40 years old or so which is why they may see a slightly faster warming. However, this number is far from being universal over the globe. In fact, the Gaussian has a standard deviation of 2.36 °C per century.
The “error of the measurement” of the warming trend is 3 times larger than the result!
If you ask a simple question – how many of the 58,579 trends determined by a month and a place (a weather station) are negative, i.e. cooling trends – you will see that it is 17,774, i.e. 30.3 percent of them. Even if you compute the average trend over all months for each station, you will get very similar results. After all, the trends for a given station don’t depend on the month too much. It will still be true that roughly 30% of the weather stations recorded a cooling trend in all the monthly anomalies on their record.
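As a sanity check on these numbers – a hypothetical sketch, not the original code: a pure Gaussian with the quoted mean 0.76 and width 2.36 °C/century would actually put about 37% of the trends below zero, somewhat more than the observed 30.3%, which is consistent with the distribution being the more peaked “Gauss-Lorentz hybrid” mentioned above rather than an exact Gaussian:

```python
import numpy as np
from math import erf, sqrt

mean, sd, n = 0.76, 2.36, 58_579  # figures quoted in the post

# Cooling share predicted by a pure Gaussian with these parameters:
p_gauss = 0.5 * (1 + erf((0.0 - mean) / (sd * sqrt(2.0))))  # roughly 0.37

# Empirical share from a sample of trends; with the real HadCRUT3 trends
# this count is where 17,774 / 58,579 = 30.3% comes from.  Here the sample
# is simulated, so it reproduces the Gaussian figure, not the real one.
rng = np.random.default_rng(0)
sample = rng.normal(mean, sd, n)
frac_cooling = np.mean(sample < 0.0)
```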
Finally, I will repeat the same Voronoi graph we saw at the beginning (where I have used sharper colors because I redefined the color function from “x” to “tanh(x/2)”):
Ctrl/click to zoom in (new tab).
The areas are chosen according to their nearest weather station – that’s what the term “Voronoi graph” means. And the color is chosen according to a temperature color scheme where the quantity determining the color is the overall warming (+, red) or cooling (-, blue) trend ever recorded at the given temperature station.
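A toy version of this nearest-station coloring can be written in a few lines. The station coordinates and trends below are made up for illustration, and plain planar distance in latitude/longitude is used for brevity – the real map should use great-circle distances on the sphere:

```python
import numpy as np

# Hypothetical stations: (latitude, longitude) and their overall trends.
stations = np.array([[50.0, 14.4], [35.0, -90.0], [-34.0, 151.0]])
trends = np.array([1.2, -0.4, 0.7])  # °C/century, made up

def nearest_station_trend(lat, lon):
    """Trend of the station nearest to (lat, lon) -- the value that
    determines the color of this point's Voronoi cell."""
    d2 = (stations[:, 0] - lat) ** 2 + (stations[:, 1] - lon) ** 2
    return trends[np.argmin(d2)]
```

Evaluating `nearest_station_trend` on a fine grid and coloring positive values red and negative values blue reproduces the kind of map shown above.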
It’s not hard to see that the number of places with a mostly blue color is substantial. The cooling stations are partly clustered although there’s still a lot of noise – especially at weather stations that are very young or short-lived and closed.
As far as I remember, this is the first time when I could quantitatively calculate the actual local variability of the global warming rate. Just like I expected, it is huge – and comparable to some of my rougher estimates. Even though the global average yields an overall positive temperature trend – a warming – it is far from true that this warming trend appears everywhere.
In this sense, the warming recorded by the HadCRUT3 data is not global. Even though the average station records 77 years of temperature history, 30% of the stations still manage to end up with a cooling trend. The warming at a given place is 0.75 ± 2.35 °C per century.
If the rate of the warming in the coming 77 years or so were analogous to the previous 77 years, a given place XY would still have a 30% probability that it will cool down – judging by the linear regression – in those future 77 years! However, it’s also conceivable that the noise is so substantial and the sensitivity is so low that once the weather stations add 100 years to their record, 70% of them will actually show a cooling trend.
Even if you imagine that the warming rate in the future will be 2 times faster than it was in the last 77 years (on average), it would still be true that in the next 40 years or so, i.e. by 2050, almost one third of the places on the globe will experience a cooling relative to 2010 or 2011! So forget about the Age of Stupid doomsday scenario around 2055: it’s more likely than not that more than 25% of places will actually be cooler in 2055 than in 2010.
Isn’t it remarkable? There is nothing “global” about the warming we have seen in the recent century or so.
The warming vs cooling depends on the place (as well as the month, as I mentioned), and the warming places only have a 2-to-1 majority while the cooling places are a sizable minority. Of course, if you calculate the change of the global mean temperature, you get a positive sign – you had to get one of the signs because an exactly zero result is infinitely unlikely. But the actual change of the global mean temperature over the last 77 years (on average) is so tiny that the place-dependent noise still safely beats the “global warming trend”, yielding an ambiguous sign of the temperature trend that depends on the place.
Imagine, just for the sake of the argument, that any change of the temperature (calculated as a trend from linear regression) is bad for every place on the globe. It’s not true but just imagine it. So it’s a good idea to reduce the temperature change between now and e.g. the year 2087.
Now, all places on the planet will pay billions for special projects to help cool the globe. However, 30% of the places will find out in 2087 that they have actually made the problem worse, because they will experience a cooling and will have helped to make that cooling even worse! 😉
Because of this subtlety, it would be obvious nonsense to try to cool the globe down even if the global warming mattered, because it’s extremely far from certain that cooling is what you would need to regulate the temperature at a given place. The regional “noise” is far larger than the trend of the global average, so every single place on Earth can neglect the changes of the global mean temperature if it wants to know the future change of its local temperature.
The temperature changes either fail to be global or they fail to be warming. There is no global warming – this term is just another name for a pile of feces.
And that’s the memo.
UPDATE:
EarlW writes in comments:
Luboš Motl has posted an update with new analysis over shorter timescales that is interesting. Also, he posts a correction showing that he calculated the RMS instead of the standard deviation for the error.
Wrong terminology in all figures for the standard deviation
Bill Zajc has discovered an error that affects all values of the standard deviation indicated in both articles. What I called “standard deviation” was actually the “root mean square”, RMS. If you want to calculate the actual value of SD, it is given by
SD² = RMS² − ⟨TREND⟩²
In the worst cases, those with the highest ⟨TREND⟩/RMS, this corresponds to a nearly 10% error: for example, 2.35 drops to 2.2 °C / century or so. My sloppy calculation of the “standard deviation” was of course assuming that the distributions had a vanishing mean value, so it was a calculation of RMS.
The error of my “standard deviation” for the “very speedy warming” months is sometimes even somewhat larger than 10%. I don’t have the energy to redo all these calculations – it’s very time-consuming and CPU-time-consuming. Thanks to Bill.
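The correction is easy to check numerically. A quick sketch with synthetic trends – the mean and width are the figures quoted above, but the data themselves are simulated, not the actual HadCRUT3 trends:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.76, 2.36, 100_000)  # synthetic trends, °C/century

rms = np.sqrt(np.mean(x ** 2))            # what was originally reported as "SD"
sd = np.sqrt(rms ** 2 - np.mean(x) ** 2)  # corrected: SD^2 = RMS^2 - <TREND>^2

# `sd` coincides with np.std(x), and is slightly smaller than `rms` --
# the shrinkage grows with the ratio <TREND>/RMS, as noted above.
```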
http://motls.blogspot.com/2011/08/hadcrut3-31-of-stations-saw-cooling.html
And now to segregate the UHI thermometers.
Validates the claim of Pielke, Sr., that there’s no global climate, just regional climates, and they change as needed. A point I frequently make. But if the alarmists didn’t cast such a wide net they couldn’t say Antarctica is warming. It dilutes the message to say that one small, exposed, northernmost peninsula is responsible for the bulk of the warming credited to the entire continent.
Lubos:
Thank you for your clear and elegant summary. Excellent!
For me the take-home messages are:
The average warming of the globe was 0.7 °C per century
but
warming at any given place was 0.75 °C plus/minus 2.35 °C per century.
and
about a third of the globe cooled.
So, about this ‘global warming’, can anybody say if and when it is going to start?
Richard
Clearly this is a repost from Lubos Motl’s blog, but I didn’t see any reference at the beginning of the article. May I suggest a notation at the head of the post?
The Mathematica reference was the dead giveaway! 😉
[Fixed, thanks. ~dbs, mod.]
I assume by the “error of the measurement” quote you mean the standard error of the mean or the 95% confidence interval. You ALSO have an unmentioned measurement error for each type of device used to record temperature data (also called the limit of observability) which describes the instrument error. This is thought to be approximately ±0.5 °C for mercury-in-glass thermometers, but is certainly non-zero. You also do not consider sampling error, nor the probability that stations have a systematic bias (which seems quite likely since stations were not randomly selected), etc. [surfacestation.org]
The fact is that a ±2.75 degree SD is being exceedingly charitable; the data, in reality, are nearly total trash.
Anyone want to bet that if we look at the actual physical locations of the stations outside the US that are reporting warming trends, we will find a high correlation of unacceptable siting similar to what was found in the US?
Most probably at airports, or surrounded by UHI.
Sounds like a great way to see the world, if I could figure out how to get a grant for it.
Would it be possible to take the calculations one step further and determine an estimate for what the area is of the cooling trend? In other words, what proportion of your Voronoi graph is blue?
I really should say thank you to Anthony Watts for posting this analysis which can not be restated enough. The “you” in my previous comment is meant for the HadCRUT data, not the author of the article.
Thanks again for a useful analysis !
Lubos’ result showing a mean trend of 0.76(+/-)2.36 C shows that climate is very heterogeneous and consists of very many widely scattered sub-states. The mean state does not represent the system at all.
Chris Essex had a paper a few years back pointing out that global mean temperature is a statistic rather than a physical parameter, and that it doesn’t have any real physical meaning at all. Lubos’ result demonstrates that point. It makes no scientific sense to talk about a global mean temperature because nowhere on Earth experiences that state. The 95% confidence interval is 4.6 C on either side of the mean.
Talking about the mean state of a system of widely spread sub-states is approximately like asserting the physical comfort of a guy standing with one foot in a bucket of boiling water and the other foot frozen in ice at -80 C. His mean foot is at a comfortable +20 C, and what’s the problem? 🙂
Anthony-
What an interesting post. I am reminded of the Dead Sea Scrolls, which were kept under lock and key for years, with only a select few allowed to study them. When they became available to all, a lot of new insights emerged. Now that the HadCRUT3 data are available, there should be all kinds of slicing and dicing of the data.
In your analysis, you apparently ignored any seasonal effect by including both Northern and Southern hemisphere data for a given month. I wonder what the histograms would look like if you started in the spring of the year in each hemisphere as month 1 (say, April and October respectively for the Northern and Southern hemispheres). Of course it wouldn’t affect the individual station trends, but it would be interesting to see if it changed the monthly degrees per century.
Looking at the map, I assume that white is no change. It looks like there is a lot of “no change”. Have you calculated the percentage of stations that show no change, and the percentage that show a positive change?
@Lubos: Nice! Let’s see what happens if you isolate airport sensor stations.
@Mosher: Hey, Mosh-man, you got a mouse in your pocket?
@Mark Young: Right! Mosh may know, but I’ve never seen this map before.
@Scott: The partial pressure of CO2 is too small to drop out solid CO2 at -110F.
863th ??
Amazing to see how CT holds back on changing NH ice data, especially when it is low. I think it’s done on purpose for maximum effect. It happens repeatedly. Of course it’s nowhere near that now, so it’s now 6 days old.
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/seaice.recent.arctic.png
steven mosher says:
August 4, 2011 at 8:44 am
“errr.. we’ve known this for quite sometime.”
This is the first time i see a histogram of all the station’s trends, and it’s most interesting.
wow! properly done statistics!!!
a valid measure of a real trend of any kind would begin with data taken at a particular hour for a particular station, plotted by itself. no min/max garbage.
THEN you can sort the plots precisely as was done for this article and see whatever there is to be seen.
averaging of temperatures is fallacious – will it blend? no!
I thought one of his most interesting posts was the one that showed that the starting point, 1850, for the temp history grafted to the hockey stick was clearly cherry picked.
http://motls.blogspot.com/2011/08/global-mean-temperature-since-1706.html
Thus, the temperature fluctuations occur as heat is transferred from the equator to the poles. Human activity appears to have little effect, possibly none. Lindzen also writes:
moptop,
You mean the one where he says: “You shouldn’t take this graph seriously.”?
A good start with the data! Lots of interesting conclusions can come out of having the actual data to play with.
Scott Scarborough says:
August 4, 2011 at 9:40 am
“Near the beginning of this post you say that Antarctica had a -78.9 deg. C reading for a monthly average. I don’t believe that. That is equivalent to -110 deg. F.”
Then you won’t believe that the lowest temps recorded over the past 32 years are:
      Year  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
°F      .1  -42  -71  -96 -102 -108 -115 -126 -118 -107  -96  -67  -38
°C     -17  -41  -57  -71  -75  -80  -85  -92  -86  -79  -71  -55  -38
Years Charted: 32. Source: International Station Meteorological Climate Summary, Version 4.0
http://bing.search.sympatico.ca/?q=Temp%20at%20south%20pole&mkt=en-ca&setLang=en-CA
Also, remember the strategy of (I believe) Al Gore to use Fahrenheit because the numbers were bigger for warming. I note that everyone, even here at WUWT, uses Celsius for Antarctica. The record of -92 °C (in the past 32 yrs) is a princely -126 °F!
I, too, think the article could be enhanced by adding the category of zero trend and even the category of modest trend (say 0.2 °C/century). Then apply the policy of cooling the Earth down by half the increase in a century’s global warming. Latin America, except for the Amazon, and Africa, except for the Sahara, would be faced with an overall cooling trend.
Finally, it should be noted by Mosher and others of like convictions that once you release the data to the world, you get a goodly amount of thoughtful and varied contributions from a lot of “fresh”, un-prepped people. No wonder “insiders” are becoming irritable in their consensus love-ins and are adopting cornered-rat reactions to new developments in climate science that show it to be considerably more complex and at the same time less alarming.
What is the distribution if the sites are separated by population density? Will high-density sites have a more positive trend than low-density sites?
Also will warm sites have a different distribution than sites that are on average colder?
Smokey says:
August 4, 2011 at 10:42 am
There is ample evidence that the Earth’s temperature as measured at the equator has remained within +/- 1°C for more than the past billion years. Those temperatures have not changed over the past century.
~ Prof Richard Lindzen
———————————
That’s an extraordinary claim. Do you know if the good professor has extraordinary evidence to back it up?
John B,
Why don’t you ask the good professor yourself?
On the map, what range of trends does each color correspond to? Also, I noticed a lot of warming or cooling regions with no weather stations within them, and not even any stations neighboring them.
How do the results come out if we weight each station by how large an area it represents? For example, what happens if we reduce the weight of all those stations clustered in the U.S. as they all collectively only represent a relatively small region, how do the numbers come out? (I think it would be appropriate to scale weights of stations with only a square-root dependence on area so as to take into account statistical error in large regions with few stations.)
I guess the oats aren’t working Mosh. Try eating 2 pounds of Wheat germ and whole Bran, followed by drinking 5 litres of water. Shake your belly vigorously for 20 minutes, then “pour”. Warning! Do not attempt without supervision! May cause hysterical blindness.