Guest Post by Willis Eschenbach
Recently, Nature Magazine published a paywalled paper called “Human contribution to more-intense precipitation extremes” by Seung-Ki Min, Xuebin Zhang, Francis W. Zwiers and Gabriele C. Hegerl (hereinafter MZZH11). The Supplementary Information is available here. The study makes a very strong claim to have shown that CO2 and other greenhouse gases are responsible for increasing extreme rainfall events, viz:
Here we show that human-induced increases in greenhouse gases have contributed to the observed intensification of heavy precipitation events found over approximately two-thirds of data-covered parts of Northern Hemisphere land areas.
Figure 1. Extreme 1-day rainfall. New Orleans, Katrina. Photo Source
Their analysis uses two rainfall indices, called the RX1day and RX5day indices, which give the maximum one-day and five-day precipitation for a given station for a given month. These individual station datasets (available here, free registration required) have been combined into a gridded dataset called HADEX (Hadley Climate Extremes Dataset). It is this gridded dataset that was used in the MZZH11 study.
So what’s wrong with the study? Just about everything. Let me peel the layers off it for you, one by one.
Other people have commented on a variety of problems with the study, including Roger Pielke Jr., Andy Revkin, and Judith Curry. But to begin with, I didn’t read them; I did what I always do. I went for the facts. I thrive on facts. I went to get the original data. For me, this is not the HADEX data, as that data has already been gridded. I went to the actual underlying data used to create the HADEX dataset, as cited above. Since they don’t provide a single datablock file with all of the areas (grrrr … pet peeve), I started by looking at the USA data.
And as is my habit, the first thing I do is just to look at the individual records. There are 2,661 stations in the USA database, of which some 731 contain some RX1day maximum one-day rainfall data. However, as is usual with weather records of all kinds, many of these have missing data. In addition, only 9% of the stations contain a significant trend at the 95% confidence level. Since with a 95% confidence interval (CI) we would expect 5% of the stations to exceed that in any random dataset, we’re only slightly above what would be expected in a random dataset. In addition, the number of stations available varies over time.
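To make the arithmetic concrete, here is a minimal sketch (on synthetic, trendless stand-in data, not the actual station records) of the test described above: fit a linear trend to each station's annual maxima and count how many come out "significant" at the 95% level. With no real trend present, about 5% pass by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical stand-in for the 731 US RX1day station records:
# pure noise with no trend, 49 years each.
n_stations, n_years = 731, 49
years = np.arange(n_years)

significant = 0
for _ in range(n_stations):
    rx1day = rng.gamma(shape=2.0, scale=25.0, size=n_years)  # mm/day
    result = stats.linregress(years, rx1day)
    if result.pvalue < 0.05:
        significant += 1

# With trendless data we expect roughly 5% "significant" trends
print(f"{100 * significant / n_stations:.1f}% of stations 'significant'")
```

This is why a raw count of 9% significant stations is only modestly above the false-positive background.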
Now, let me repeat part of that, because it is important.
91% of the rainfall stations in the US do not show a significant trend in precipitation extremes, either up or down.
So overwhelmingly in the US there has been
No significant change in the extreme rainfall.
And as if that wasn’t enough …
Of the remaining 9% that have significant trends, 5% of the trends are probably from pure random variation.
So this means that
Only about 5% of the stations in the US show any significant change in rainfall extremes.
So when you see claims about changes in US precipitation extremes, bear in mind that they are talking about a situation where only ~ 5% of the US rainfall stations show a significant trend in extreme rainfall. The rest of the nation is not doing anything.
Now, having seen that, let’s compare that to the results shown in the study:
Figure 2. The main figure of the MZZH11 study, along with the original caption. This claims to show that the odds of extreme events have increased in the US.
Hmmmm …. so how did they get that result, when the trends of the individual station extreme precipitation show that some 95% of the stations aren’t doing anything out of the ordinary? Let me go over the stages step by step as they are laid out in the study. Then I’ll return to discuss the implications of each step.
1. The HADEX folks start with the individual records. Then, using a complex formula based on the distance and the angle from the center of the enclosing gridcell, they take a weighted station average of each month’s extreme 1-day rain values from all stations inside the gridcell. This converts the raw station data into the HADEX gridded station data.
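The flavor of such a distance-weighted station average can be sketched as follows. Note that the exponential weight and the 200 km decay scale here are illustrative assumptions, not the actual HADEX angular-distance-weighting formula, which also accounts for the angular clustering of stations around the gridcell center.

```python
import numpy as np

def gridcell_average(values, dists_km, decay_km=200.0):
    """Weight each station's RX1day value by exp(-d / decay_km)."""
    w = np.exp(-np.asarray(dists_km) / decay_km)
    return float(np.sum(w * np.asarray(values)) / np.sum(w))

# Three hypothetical stations: the 80 mm value near the cell
# centre dominates the gridcell average
avg = gridcell_average([80.0, 40.0, 55.0], [10.0, 300.0, 150.0])
print(f"gridcell RX1day average: {avg:.1f} mm")
```

The key point for what follows: the weighting scheme only makes sense if nearby stations actually tell you something about each other, which is the assumption examined below.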
2. Then in this study they convert each HADEX gridcell time series to a “Probability-Based Index” (PI) as follows:
Observed and simulated annual extremes are converted to PI by fitting a separate generalized extreme value (GEV) distribution to each 49-year time series of annual extremes and replacing values with their corresponding percentiles on the fitted distribution. Model PI values are interpolated onto the HadEX grid to facilitate comparison with observations (see Methods Summary and Supplementary Information for details).
In other words, they separately fit a three-parameter generalized extreme value distribution to each gridcell time series, to get a probability distribution. The fitting is done iteratively, by repeatedly adjusting each parameter to find the best fit. Then they replace each extreme rainfall value (in millimetres per day) with the corresponding value on the fitted probability distribution, which is between zero and one.
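The PI transformation can be sketched as follows, using scipy's GEV implementation on a hypothetical 49-year gridcell series (the numbers here are synthetic, not HADEX values):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Hypothetical 49-year series of annual 1-day maximum rainfall (mm)
# for one gridcell -- a stand-in for a HADEX time series
annual_max = rng.gumbel(loc=60.0, scale=15.0, size=49)

# Fit the three-parameter GEV (shape, location, scale) by
# maximum likelihood
shape, loc, scale = genextreme.fit(annual_max)

# Replace each extreme value with its percentile on the fitted
# distribution: this is the paper's "probability-based index" (PI)
pi = genextreme.cdf(annual_max, shape, loc=loc, scale=scale)

assert pi.min() >= 0.0 and pi.max() <= 1.0  # PI is on a zero-to-one scale
```

Note that each gridcell gets its own fitted shape, location, and scale, which matters for the comparability concerns raised below.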
They explain this curious transformation as follows:
Owing to the high spatial variability of precipitation and the sparseness of the observing network in many regions, estimates of area means of extreme precipitation may be uncertain; for example, for regions where the distribution of individual stations does not adequately sample the spatial variability of extreme values across the region. In order to reduce the effects of this source of uncertainty on area means, and to improve representativeness and inter-comparability, we standardized values at each grid-point before estimating large area averages by mapping extreme precipitation amounts onto a zero-to-one scale. The resulting ‘probability-based index’ (PI) equalizes the weighting given to grid-points in different locations and climatic regions in large area averages and facilitates comparison between observations and model simulations.
Hmmm … moving right along …
3. Next, they average the individual gridcells into “Northern Hemisphere”, “Northern Tropics”, etc.
4. Then the results from the models are obtained. Of course, models don’t have point observations, they already have gridcell averages. However, the model gridcells are not the same as the HADEX gridcells. So the model values have to be area-averaged onto the HADEX gridcells, and then the models averaged together.
5. Finally, they use a technique optimistically called “optimal fingerprinting”. As near as I can tell this method is unique to climate science. Here’s their description:
In this method, observed patterns are regressed onto multi-model simulated responses to external forcing (fingerprint patterns). The resulting best estimates and uncertainty ranges of the regression coefficients (or scaling factors) are analysed to determine whether the fingerprints are present in the observations. For detection, the estimated scaling factors should be positive and uncertainty ranges should exclude zero. If the uncertainty ranges also include unity, the model patterns are considered to be consistent with observations.
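At its core, the quoted procedure is an ordinary regression of the observations onto the model-mean pattern, with the slope as the "scaling factor". A minimal sketch, with hypothetical numbers standing in for the real series:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical illustration: the observed series is regressed onto
# the multi-model response (the "fingerprint"); the slope is the
# scaling factor beta
n = 49
fingerprint = np.linspace(0.45, 0.55, n)               # model-mean PI
observed = 0.8 * fingerprint + rng.normal(0, 0.02, n)  # synthetic obs

res = stats.linregress(fingerprint, observed)
ci = 1.96 * res.stderr
lo_b, hi_b = res.slope - ci, res.slope + ci

# "Detection": the scaling factor's range excludes zero.
# "Consistency": the range also includes one.
print(f"beta = {res.slope:.2f}, 95% CI [{lo_b:.2f}, {hi_b:.2f}]")
```

The 1.96-sigma interval in this sketch is exactly where the Gaussian assumption enters: it is only a valid confidence range if the underlying distributions are in fact normal.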
In other words, the “optimal fingerprint” method looks at the two distributions H0 and H1 (observational data and model results) and sees how far the distributions overlap. Here’s a graphical view of the process, from Bell, one of the developers of the technique.
Figure 2a. A graphical view of the “optimal fingerprint” technique.
As you can see, if the distributions are anything other than Gaussian (bell shaped), the method gives incorrect results. Or as Bell says (op. cit.), the optimal fingerprint method involves several crucial assumptions, viz:
• It assumes the probability distribution of the model dataset and the actual dataset are Gaussian
• It assumes the probability distributions of the model dataset and the actual dataset have approximately the same width
While it is possible that the extreme rainfall datasets fit these criteria, until we are shown that they do fit them we don’t know if the analysis is valid. However, it seems extremely doubtful that the hemispheric averages of the probability based indexes will be normal. The MZZH11 folks haven’t thought through all of the consequences of their actions. They have fitted an extreme value distribution to standardize the gridcell time series.
This wouldn’t matter a bit, if they hadn’t then tried to use optimal fingerprinting. The problem is that the average of a PI of a number of extreme value distributions will be an extreme value distribution, not a Gaussian distribution. As you can see in Figure 2a above, for the “optimal fingerprint” method to work, the distributions have to be Gaussian. It’s not as though the method will work with other distributions but just give poorer results. Unless the data is Gaussian, the “optimal fingerprint” method is worse than useless … it is actively misleading.
It also seems doubtful that the two datasets have the same width. While I do not have access to their model dataset, you can see from Figure 1 that the distribution of the observations is wider, both regarding increases and decreases, than the distribution of the model results.
This seems extremely likely to disqualify the use of optimal fingerprinting in this particular case even by their own criteria. In either case, they need to show that the “optimal fingerprint” model is actually appropriate for this study. Or in the words of Bell, the normal distribution “should be verified for the particular choice of variables”. If they have done so there is no indication of that in the study.
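The verification Bell calls for is not hard to run. A sketch using the Shapiro-Wilk normality test on hypothetical stand-ins for the averaged PI series (the data here are synthetic; the real check would use their actual hemispheric averages):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Two hypothetical 49-year series: one genuinely Gaussian, one with
# the skewed, extreme-value shape that averaged PIs might well have
gaussian_like = rng.normal(0.5, 0.05, 49)
skewed = rng.gumbel(0.5, 0.05, 49)

for name, x in [("gaussian-like", gaussian_like), ("skewed", skewed)]:
    stat, p = stats.shapiro(x)
    verdict = "consistent with normal" if p > 0.05 else "reject normality"
    print(f"{name:13s}: Shapiro-Wilk p = {p:.3f} -> {verdict}")
```

If MZZH11 ran anything of this kind, it is not reported in the paper or the Supplementary Information.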
I think that whole concept of using a selected group of GCMs for “optimal fingerprinting” is very shaky. While I have seen theoretical justifications for the procedure, I have not seen any indication that it has been tested against real data (not used on real data, but tested against a selected set of real data where the answer is known). The models are tuned to match the past. Because of that, if you remove any of the forcings, it’s almost a certainty that the model will not perform as well … duh, it’s a tuned model. And without knowing how or why the models are chosen, how can they say their results are solid?
OK, I said above that I would first describe the steps of their analysis. Those are the steps. Now let’s look at the implications of each step individually.
STEP ONE: We start with what underlies the very first step, which is the data. I didn’t have to look far to find that the data used to make the HADEX gridded dataset contains some really ugly errors. One station shows 48 years of August rains with a one-day maximum of 25 to 50 mm (one to two inches), and then has one August (1983) with one day when it is claimed to have rained 1016 mm (40 inches) … color me crazy, but I think that once again, as we have seen time after time, the very basic steps have been skipped. Quality doesn’t seem to be getting controlled. So … we have an unknown amount of uncertainty in the data simply due to bad individual data points. I haven’t done an analysis of how much, but a quick look revealed a dozen stations with that egregious an error in the 731 US datasets … no telling about the rest of the world.
The next data issue is “inhomogeneities” (sudden changes in volume or variability) in the data. In a Finnish study, 70% of the rainfall stations had inhomogeneities. While there are various mathematical methods used by the HADEX folks to “correct” for this, it introduces additional uncertainty into the data. I think it would be preferable to split the data at the point of the inhomogeneous change, and analyze each part as a separate station. Either way, we have an uncertainty of at least the difference in results of the two methods. In addition, the Norwegian study found that on average, the inhomogeneities tended to increase the apparent rainfall over time, introducing a spurious trend into the data.
In addition, extreme rainfall data is much harder to quality control than mean temperature data. For example, it doesn’t ever happen that the January temperature at a given station averages 40 degrees every January but one, when it averages 140 degrees. But extreme daily rainfall could easily change from 40 mm one January to an unusual rain of 140 mm. This makes for very difficult judgements as to whether a large daily reading is erroneous.
In addition, an extreme value is one single value, so if that value is incorrectly large it is not averaged out by valid data. It carries through, and is wrong for the day, the month, the year, and the decade.
Rainfall extreme data also suffers in the recording itself. If I have a weather station and I go away for the weekend, my maximum thermometer will record the maximum temperature of the two days I missed. But the rainfall gauge can only give me the average of the two days I missed … or I could record the two days as one with no rain on the other day. Either way … uncertainties.
Finally, up to somewhere around the seventies, the old rain gauges were not self emptying. This means that if the gauge were not manually emptied, it could not record an extreme rain. All of these problems with the collection of the extreme rainfall data means it is inherently less accurate than either mean or extreme temperature data.
So those are the uncertainties in the data itself. Next we come to the first actual mathematical step, the averaging of the station data to make the HADEX gridcells. HADEX, curiously, uses the averaging method rejected by the MZZH11 folks: HADEX averages the actual rainfall extreme values, and does not create a probability-based index (PI) as in the MZZH11 study. I can make a cogent argument for either one, PI or raw data, for the average. But using a PI-based average of a raw-data average seems like an odd choice, which would result in unknown uncertainties. But I’m getting ahead of myself. Let me return to the gridding of the HADEX data.
Another problem increasing the uncertainty of the gridding is the extreme spatial and temporal variability of rainfall data. They are not well correlated, and as the underlying study for HADEX says (emphasis mine):
[56] The angular distance weighting (ADW) method of calculating grid point values from station data requires knowledge of the spatial correlation structure of the station data, i.e., a function that relates the magnitude of correlation to the distance between the stations. To obtain this we correlate time series for each station pairing within defined latitude bands and then average the correlations falling within each 100 km bin. To optimize computation only pairs of stations within 2000 km of each other are considered. We assume that at zero distance the correlation function is equal to one. This may not necessarily be the best assumption for the precipitation indices because of their noisy nature but it does provide a good compromise to give better gridded coverage.
Like most AGW claims, this seems reasonable on the surface. It means that stations closer to the gridbox center get weighted more than distant stations. It is based on the early observation by Hansen and Lebedeff in 1987 that year-to-year temperature changes were well correlated between nearby stations, and that correlation fell off with distance. In other words, if this year is hotter than last year in my town, it’s likely hotter than last year in a town 100 km. away. Here is their figure showing that relationship:
Figure 3. Correlation versus Inter-station Distance. Original caption says “Correlation coefficients between annual mean temperature changes for pairs of randomly selected stations having at least 50 common years in their records.”
Note that at close distances there is good correlation between annual temperature changes, and that at the latitude of the US (mostly the bottom graph in Figure 3) the correlation is greater than 50% out to around 1200 kilometres.
Being a generally suspicious type fellow, I wondered about their claim that changes in rainfall extremes could be gridded by assuming they follow the same correlation-versus-distance relationship found for temperature changes. So I calculated the actual relationship between correlation and inter-station distance for the annual change in maximum one-day rainfall. Figure 4 shows that result. It is very different from temperature data, which has good correlation between nearby stations that drops off slowly with increasing distance. Extreme rainfall does not follow that pattern in the slightest.
Figure 4. Correlation of annual change in 1-day maximum rainfall versus the distance between the stations. Scatterplot shows all station pairs between all 340 mainland US stations which have at least 40 years of data per station. Red line is a 501 point Gaussian average of the data.
As you can see, there is only a slight relationship at small distances between extreme rainfall event correlation and distance between stations. There is an increase in correlation with decreasing distance as we saw with temperature, but it drops to zero very quickly. In addition, there are a significant number of negative correlations at all distances. In the temperature data shown in Figure 3, the decorrelation distance (the distance where the average correlation drops to 0.50) is on the order of 1200 km. The corresponding decorrelation distance for one-day extreme precipitation is only 40 km …
Thinking that the actual extreme values might correlate better than the annual change in the extreme values, I plotted that as well … it is almost indistinguishable from Figure 4. Either way, there is only a very short-range (less than 40 km) relation between distance and correlation for the RX1day data.
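For those who want to try this at home, the kind of pairwise calculation behind Figure 4 can be sketched as follows (synthetic station locations and rainfall, purely to show the mechanics of the distance-binned correlations):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Hypothetical stand-in for the Figure 4 calculation: correlate the
# annual changes in RX1day for every station pair, then bin the
# correlations by inter-station distance
n_stations, n_years = 30, 40
coords = rng.uniform(0, 1000, size=(n_stations, 2))        # km positions
series = rng.gamma(2.0, 25.0, size=(n_stations, n_years))  # RX1day, mm
changes = np.diff(series, axis=1)                          # annual change

dists, corrs = [], []
for i, j in combinations(range(n_stations), 2):
    dists.append(np.linalg.norm(coords[i] - coords[j]))
    corrs.append(np.corrcoef(changes[i], changes[j])[0, 1])

# Bin by 100 km, as in the HADEX procedure, and average each bin
bins = np.digitize(dists, np.arange(0, 1500, 100))
for b in np.unique(bins):
    mask = bins == b
    print(f"bin {b:2d}: mean r = {np.mean(np.array(corrs)[mask]):+.2f}")
```

With real RX1day records in place of the synthetic series, this is the calculation that yields the roughly 40 km decorrelation distance quoted above.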
In summary, the method of weighting averages by angular distance used for gridding temperature records is supported by the Hansen/Lebedeff temperature data in Figure 3. On the other hand, the observations of extreme rainfall events in Figure 4 mean that we cannot use the same method for gridding extreme rainfall data. It makes no sense, and reduces accuracy, to average data weighted by distance when the correlation doesn’t vary with anything but the shortest distances, and the standard deviation of the correlation is so large at all distances.
STEP 2: Next, they fit a generalized extreme value (GEV) probability distribution to each individual gridcell. I object very strongly to this procedure. The GEV distribution has three different parameters. Depending on how you set the three GEV dials, it will give you distributions ranging from a normal to an exponential to a Weibull distribution. Setting the dials differently for each gridcell introduces an astronomical amount of uncertainty into the results. If one gridcell is treated as a normal distribution, and the next gridcell is treated as an exponential distribution, how on earth are we supposed to compare them? I would throw out the paper based on this one problem alone.
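To see how much rides on those three dials, here is a small sketch of the 99th-percentile daily rainfall implied by three different settings of the GEV shape parameter, holding the location (60 mm) and scale (15 mm) fixed. The specific numbers are illustrative assumptions:

```python
from scipy.stats import genextreme

# scipy's sign convention: c = 0 is Gumbel, c < 0 is the
# heavy-tailed Frechet type, c > 0 is the bounded Weibull type
families = [(0.0, "Gumbel"), (-0.4, "Frechet-type"), (0.4, "Weibull-type")]

q99 = {name: genextreme.ppf(0.99, c, loc=60, scale=15)
       for c, name in families}

for name, v in q99.items():
    print(f"{name:13s}: implied 99th-percentile rainfall = {v:6.1f} mm")
```

Identical location and scale, yet the implied 1-in-100 rainfall differs by a factor of nearly three between the families, which is exactly why letting every gridcell pick its own shape parameter makes the cells hard to compare.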
If I decided to use their method, I would use a Zipf distribution rather than a GEV. The Zipf distribution is found in a wide range of this type of natural phenomena. One advantage of the Zipf distribution is that it only has one parameter, sigma. Well, two, but one is the size of the dataset N. Keeps you from overfitting. In addition, the idea of fitting a probability distribution to the angular-distance weighted average of raw extreme event data is … well … nuts. If you’re going to use a PI, you need to use it on the individual station records, not on some arbitrary average somewhere down the line.
STEP 3: Hemispheric and zonal averages. In addition to the easily calculable statistical error propagation in such averaging, each individual gridpoint also carries its own individual error. I don’t see any indication that they have dealt with this source of uncertainty.
STEP 4: Each model needs to have its results converted from the model grid to the HADEX grid. This, of course, gives a different amount of uncertainty to each of the HADEX gridboxes for each of the models. In addition, this uncertainty is different from the uncertainty of the corresponding observational gridbox …
There are some other model issues. The most important one is that they have not given any ex-ante criteria for selecting the models used. There are 24 models in the CMIP database that they could have used. Why did they pick those particular models? Why not divide the 24 models into 3 groups of 8 and see what difference it makes? How much uncertainty is introduced here? We don’t know … but it may be substantial.
STEP 5: Here we have the question of the uncertainties in the optimal fingerprinting. These uncertainties are said to have been established by Monte Carlo procedures … which makes me nervous. The generation of proper data for a Monte Carlo analysis is a very subtle and sophisticated art. As a result, the unsupported claim of a Monte Carlo analysis doesn’t mean much to me without a careful analysis of their “random” proxy data.
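A toy version of the Monte Carlo idea looks like this: regress trendless surrogate noise onto the fingerprint many times to build a null distribution for the scaling factor. All numbers here are hypothetical, and note that the validity of the exercise rests entirely on whether the surrogate noise actually resembles the real data, which is exactly the question at issue:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

n, n_trials = 49, 2000
fingerprint = np.linspace(0.45, 0.55, n)   # hypothetical model pattern

# Null distribution of the scaling factor under pure noise
null_betas = np.empty(n_trials)
for k in range(n_trials):
    noise = rng.normal(0, 0.05, n)         # surrogate "observations"
    null_betas[k] = stats.linregress(fingerprint, noise).slope

# Two-sided 95% range of the null scaling factor: an observed beta
# outside this range would count as "detection" in this framework
lo_q, hi_q = np.percentile(null_betas, [2.5, 97.5])
print(f"null 95% range for beta: [{lo_q:.2f}, {hi_q:.2f}]")
```

Swap the Gaussian surrogate for noise with the wrong distribution or the wrong autocorrelation and the null range changes, so an unsupported "we did Monte Carlo" tells us little.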
More importantly, the data does not appear to be suitable for “optimal fingerprinting” by their own criteria.
End result of the five steps?
While they have calculated the uncertainty of their final result and shown it in their graphs, they have not included most of the uncertainties I listed above. As a result, they have greatly underestimated the real uncertainty, and their results are highly questionable on that issue alone.
OVERALL CONCLUSIONS
1. They have neglected the uncertainties from:
• the bad individual records in the original data
• the homogenization of the original data
• the averaging into gridcells
• the incorrect assumption of increasing correlation with decreasing distance
• the use of a different fitted three-parameter probability function for each gridcell
• the use of a PI average on top of a weighted raw data average
• the use of non-Gaussian data for an “optimal fingerprint” analysis
• the conversion of the model results to the HADEX grid
• the selection of the models
As a result, we do not know if their findings are significant or not … but given the number of sources of uncertainty and the fact that their results were marginal to begin with, I would say no way. In any case, until those questions are addressed, the paper should not have been published, and the results cannot be relied upon.
2. There are a number of major issues with the paper:
• Someone needs to do some serious quality control on the data.
• The use of the HADEX RX1day dataset should be suspended until the data is fixed.
• The HADEX RX1day dataset also should not be used until gridcell averages can be properly recalculated without distance-weighting.
• The use of a subset of models which are selected without any ex-ante criteria damages the credibility of the analysis
• If a probability-based index is going to be used, it should be used on the raw data rather than on averaged data. Using it on grid-cell averages of raw data introduces spurious uncertainties.
• If a probability-based index is going to be used, it needs to be applied uniformly across all gridcells rather than using different distributions on a gridcell-by-gridcell basis.
• No analysis is given to justify the use of “optimal fingerprinting” with non-Gaussian data.
3. Out of the 731 US stations with rainfall data, including Alaska, Hawaii and Puerto Rico, 91% showed no significant change in the extreme rainfall events, either up or down.
4. Of the 340 mainland US stations with 40 years or more of records, 92% showed no significant change in extreme rainfall in either direction.
As a result, I maintain that their results are contrary to the station records, that they have used inappropriate methods, and that they have greatly underestimated the total uncertainties of their results. Thus the conclusions of their paper are not supported by their arguments and methods, and are contradicted by the lack of any visible trend in the overwhelming majority of the station datasets. To date, they have not established their case.
My best regards to all, please use your indoor voices in discussions …
w.
[UPDATE] I’ve put the widely-cited paper by Allen and Tett about “optimal fingerprinting” online here.
Thanks Willis
I downloaded the paper and also was extremely skeptical that there had even been a statistically significant increase in extreme precipitation. But I am glad you have really gone into it – thanks for that !
I’d be intrigued to know what Steve McIntyre or Ross McKitrick thinks about this analysis…
Mr. Eschenbach,
As always a great piece of work. A couple of questions though. You say “In addition, only 9% of the stations contain a significant trend at the 95% confidence level. Since with a 95% confidence interval (CI)…”.
How do you know that the trend is significant? And at the 5% level? Is the trend significant only by observation or have you assumed a Gaussian distribution? Or have you used your own preferred Zipf distribution? And, if the latter, have you also used it to obtain your 95% confidence limit?
Nature used to be the premier scientific journal with rigorous peer review. To have a paper published in Nature was the passport to fame, tenure and grants. Sadly, the journal seems to have become a mere propaganda vehicle.
The misuse of statistical processes you highlight in this paper shows that either the reviewers are so fixated on the conclusions which fit their own preconceived notions or they simply do not understand the methodology. If they didn’t, did they tell the editor? Did the editor take specialist statistical advice? One suspects strongly the answer to both those questions is “no”.
I am sure that peer review retains its integrity in some areas of scientific endeavour, but “climate science” is clearly not one of them.
If I understand their methodology correctly, the team has looked for an alternate CAGW signal because global temperatures are not doing what they should and the scope for adjustments has reached its end. Having found this alternate round peg, they have tried to smash it into the square hole of reality by starting out with the determination and then trying to find, fit and adjust methods and data to support their case. Observation versus modelling and data extrapolation?
The CAGW theorists believe they need to show the ordinary people an easy-to-understand, easy-to-digest, simple visual cue; it’s easy to see rain, and rainfall is easy to understand. We see the attempt to find an anthropogenic signal, any signal will do, taking ever more desperately bizarre forms, the data and causal links ever more tenuous.
Roy says:
February 21, 2011 at 12:58 am
I think that the general point that a change in climate will produce changes in the (frequency of particular types of) weather is logically sound!
Agreed, but the problem here (and with the Oxford study about flooding) is that they both take the conclusion that their studies show the increased risk of precipitation/flooding is a result of CO2 forcings, whereas all they actually show (if we accept them as correct, which is a big if) is an increased risk due to climate change. They do not prove a correlation between CO2 and CC.
This strikes me as a major problem with a lot of alarmist papers/studies, in that they always assume that CO2 is the only possible cause. Thus any changes in weather/flood/drought patterns are proof of AGW.
Afraid to say the horse has bolted on this one; the paper was aired on the BBC’s 6 o’clock news on Friday evening, 18/2. I think it was Shukman frothing at the mouth over it.
So much for the IPCC stance of not being able to say that any extreme weather event should/could be attributed to AGW!!
I seem to remember it being put down to the jet stream again??
Richard Telford says:
February 21, 2011 at 1:55 am
Gosh, Richard. I wish I was clever like you. I tried to get cleverer when I first started getting concerned about global warming, and I went to real Climate because they claimed to be the fount of all climate knowledge. Their condescension rapidly turned me off.
What I found useful here (on WUWT) was the way this community helped people who didn’t understand. They didn’t talk down, and where the author of a post was too technical for me, I usually found illumination from the comments of bloggers.
I take it that you are not persuaded by Willis’ analysis of this paper, but making vague assertions and flaunting a bit of code leaves me thinking that you are simply trying to show off. Climate thickies like me would value some explanation – for all I know you may have a valid point – but if you can’t learn to communicate in normal language then pray, keep your thoughts to yourself.
Australia’s new Commissioner for Climate, Tim Flannery, was on National TV a few minutes ago quoting both this paper and the one about the UK year 2000 floods. He said they supported man-made climate change as linked to the Queensland floods of recent months.
Opposition politician Senator Barnaby Joyce (who would be Republican in US terminology) asked Prof Flannery about his main doctoral discipline, which was palaeontology. The green-leaning host, Tony Jones, then asked Sen Joyce if he would draw Prof Flannery into discussions in his new role. Joyce replied that he would call him in if palaeontology was to be discussed……
From Willis’ analysis of this paper, it seems that once again, statisticians should be employed more often by climate workers.
Well done Mr.Eschenbach, a pity you were not on the review panel!
Is it me or does there seem to be great haste to publish doom and gloom reports, papers and articles recently. There must be someone, somewhere tracking this. Has this output of doom gone exponential yet? Where will the tipping point be?
Please do not use a moving average when trying to fit temperature data. As a statistician and meteorologist, I can say there is far too much variation in the data (monthly temperatures) to support the use of a moving average. In fact, the variation in the data does not permit adequate forecasting abilities (although it does permit trending applications). This is why climate predictions of temperature deviations (changes) are somewhat meaningless. It would be better to select a time period, when variations demonstrate randomness, to then use an average or predict a trend. Also, using a single number or even a small set of numbers to predict a trend is meaningless if the variation (deviation month to month, season to season, etc.) is non-random.
The heaviest rains tend to occur in unstable air masses that tend to be local, e.g. thunderstorms. Steady light rains tend to be from wide area stable air masses. It should be clear that extreme rain events will not correlate well with distance.
Also errors in reporting will always tend high. You may report a 4mm rain as 40mm but very unlikely as negative 40mm.
As always, your contribution is greatly appreciated.
There is one hook inside: now it is not the “warmer climate causing extremes” claim, but directly the pure anthropogenic forcing itself! This flip in argumentation is not surprising, since the Northern Hemisphere is cooling down big time, being below its 1990 level at the moment:
http://climexp.knmi.nl/data/icrutem3_hadsst2_0-360E_30-90N_n_su_1990:2011a.png
So first it was “CO2 warms the planet and thus extremes will increase”, but now expect the reasoning “CO2 directly causes extremes even if the temperature does not rise, because temperature was never that important, but the forcing, the forcing”.
I’m confused. NOAA just had a study that said global warming was going to cause more desertification.
http://wattsupwiththat.com/2011/02/19/noaas-compendium-of-climate-catastrophe/
And now this study in Nature says global warming is causing more rain. Which is it?
Nature Mag’s article is a good example of the old saying:
Liars go “figure” [MZZH11]
Figures don’t lie [Willis’s].
IMHO
I need help finding PAST weather-related extreme events worldwide. Wikipedia is not a good resource. I am looking for detailed reports of droughts, heat waves, storms, cyclones, hurricanes, extreme cold, etc. Here is a list of what I have found so far for the period I am researching, roughly 1930 to 1936. China, Russia, the US and Canada are covered, but little else. Thanks in advance.
1930 May 13th Farmer killed by hail in Lubbock, Texas, USA; this is the only known US fatality due to hail.
1930 June 13th 22 people killed by hailstones in Siatista, Greece.
1930 Sept 3rd Hurricane kills 2,000, injures 4,000 (Dominican Republic).
1930s Sweden: the warmest decade was the 1930s, after which a strong cooling trend occurred until the 1970s. (International Journal of Climatology, http://onlinelibrary.wiley.com/doi/10.1002/joc.946/abstract)
1930s Russian heat wave: the decade averaged only 0.2 degrees below the 2000 to 2010 heat wave.
1930 set 3 all-time HIGHEST state temperatures, Delaware 110F Jul. 21, Kentucky 114 Jul. 28, Tennessee 113 Aug. 9, and one all-time LOWEST state record, Oklahoma -27 Jan. 18. About 400% more than a statistical average.
1931 set two highest State temp ever, FL, 109 Jun. 29, and HI, 109 Jun. 29
1931 Europe LOWEST temp ever in all of Europe −58.1 °C (−72.6°F)
1931 The 20th century’s worst water-related disaster was the Central China flooding of 1931, inundating 70,000 square miles and killing 3.5 to 4 million people.
1931 July Western Russia heat wave, 6 degrees F monthly anomaly above normal, 2nd warmest in the 130-year record. The decade of 1930 to 1940 was within 0.2 degrees of the 2000 to 2010 western Russia July.
1931 Sept 10th The worst hurricane in Belize Central America history kills 1,500 people.
1932 TORNADO OUTBREAK SEVERE 1932, March 21 Alabama 268 DEAD
1932 November 9th Santa Cruz Del Sur Cuba category 5 hurricane 2,500 dead.
1932 Madagascar cyclone crosses Reunion Island 35,000 homeless 45 dead.
1932 June 19th Hailstones kill 200 in Hunan Province, China
1932 / 33 Soviet famine. 7 to 14 million. Mostly human caused, but drought and low crop yields in 1931 and 32 contributed.
1933 Sept Cat 3 Florida landfall.
1933 4 LOWEST state temps ever were recorded: Oregon -54 Feb. 10, Texas -23 Feb. 8, Vermont -50 Dec. 30, Wyoming -66 Feb. 9
1933 February 6 Highest recorded sea wave (not tsunami), 34 metres (112 feet), in Pacific hurricane
1933 Highest temp ever in SWEDEN 38.0 °C (100.4 °F) tied in 2009
1933 Lowest temp ever recorded in ASIA −68 °C (−90 °F) tied in 02 and 06
1933 NORTH KOREA LOWEST temp ever North Korea −43.6 °C ( -46.48°F)
1933 August 11th Highest World Temperature ever reaches 136 degrees F (58 degrees C) at San Luis Potosí, Mexico (world record).
1933 Nov 11th “Great Black Blizzard”, first great dust storm in the Plains of the USA.
1934 May 11th Over two days, the most severe dust storm to date in the USA sweeps an estimated 350 million tons of topsoil from the Great Plains across to the eastern seaboard.
1934 Fastest recorded with an anemometer outside of a tropical cyclone: 372 km/h (231 mph) sustained 1-minute average; Mount Washington, New Hampshire,
1934 Two states recorded their highest-ever temperature, Idaho and Iowa, both 118 degrees; and two states recorded their lowest-ever temperature, Michigan -51 and New Hampshire -47.
1934 LOWEST temp ever Singapore 19.4 °C (66.9 °F)
1934 Typhoon strikes Honshu Island, Japan, kills 4,000
1935 Ifrane Morocco, LOWEST temperature continent of Africa ever recorded, minus 11
1935 Florida, A CAT ONE HURRICANE AT LANDFALL.
1935 Nepisiguit Falls, New Brunswick 39.4 °C 12th highest temp ever in Canada.
1935 Collegeville, Nova Scotia 38.3 °C 15th highest temp ever in Canada.
1935 Iroquois Falls, Ontario −58.3 °C 5th lowest temp ever in Canada.
1935 Western Russia, 9th coldest July in 130 years.
1935 Yangtze River flood, China, 145,000 dead.
1935/36 August: two typhoons hit Fukien province, China; hundreds dead.
1935 Labor Day hurricane one of the most intense hurricanes to make landfall in U.S. in recorded history. More than 400 people were killed. 185 MPH sustained winds
1935 Haiti, 21 October: hurricane in Sud and Sud-Est départements. 2,000 people perished.
1936 HIGHEST state temperature ever recorded in Nebraska 118 Jul. 24, New Jersey 110 Jul. 10, North Dakota 121 Jul. 6, Oklahoma 120 Jun. 27, Pennsylvania 111 Jul. 10, South Dakota 120 Jul. 5, Virginia 112 Jul. 10, Wisconsin 114 Jul. 13, Arkansas 120 Aug. 10, Indiana 116 Jul. 14, Kansas 121 Jul. 24, Louisiana 114 Aug. 10, Maryland 109 Jul. 10
1936 TORNADO outbreak April 5-6 Mississippi and Georgia 436 dead
1930 to 1936: Twenty all-time state HIGHEST records set in this 6-year period, plus 7 more tied only in the same period, and 9 record LOWEST in the same period. Contrast that with 1990 to 2000: 5 highs set, all 5 in 1994, and 5 lows in the same ten-year period.
Six of Canada’s highest ever records were set in the same period.
1936 Bay of Bengal Myanmar May 1st cyclone 72,000 homes lost 360 dead
1936 Drought-related famine in China, five million dead. (NOAA’s Top Global Weather, Water and Climate Events of the 20th Century)
1936 July 11th St. Albans, Manitoba 2nd highest temp ever in Canada 44.4 C
1936 Northeast Flood – Spring 1936
Rain concurrent with snowmelt set the stage for this flood. It affected the entire state of New Hampshire.[17] … In all, damage totaled US$113 million (1936 dollars), and 24 people were killed.
1937 record state lowest temp California -45 Jan. 20
1937 state LOWEST record Nevada -50 Jan. 8
1937 January Ohio/Great Miami River Flood
Two days later, the Ohio River crested in Cincinnati at a record 24.381 m (79.99 ft). Flooding in the city lasted 19 days… Damages totaled US$20 million (1937 dollars).[23]
1937 Highest recorded temp in Canada 45 °C (113 °F) Midale
Willis:
9% of the raw data were trended. As you say, we expect 5% to be trended at 95% level, but we expect 2.5% to be + trended and 2.5% – trended. What percentage of the series showed +ve and -ve trends?
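The 2.5%/2.5% split is easy to check with a hypothetical Monte Carlo (pure white noise in Python, nothing taken from the station records): fit an OLS trend to many random series and count how many clear the two-sided 95% bar in each direction.

```python
import math
import random

random.seed(1)

def trend_tstat(y):
    """t-statistic of the OLS slope of y against its time index."""
    n = len(y)
    tbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((i - tbar) ** 2 for i in range(n))
    sxy = sum((i - tbar) * (yi - ybar) for i, yi in enumerate(y))
    slope = sxy / sxx
    resid = [yi - ybar - slope * (i - tbar) for i, yi in enumerate(y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope / se

trials, n = 2000, 50
pos = neg = 0
for _ in range(trials):
    y = [random.gauss(0, 1) for _ in range(n)]
    ts = trend_tstat(y)
    if ts > 2.01:       # ~95% two-sided critical value for 48 d.f.
        pos += 1
    elif ts < -2.01:
        neg += 1

print(pos / trials, neg / trials)  # each near 0.025, total near 0.05
```

With trendless noise you get roughly 2.5% significantly positive and 2.5% significantly negative, which is why the sign breakdown of Willis’s 9% matters: an excess concentrated on one side would be more interesting than the raw total.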
Also, don’t quite follow how they do the grid averages. How big are the gridcells? Are they really using data from 2000km away in the calculation of cell values in gridcells that are much smaller (when spatial correlation dies off so fast)?
Why have gridcells at all – why not just interpolate all available series and calculate standard errors based on some sort of cross-correlation? (And limit contributing series to those within the range of +ve spatial correlation)
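For what it’s worth, the “limit contributing series to the range of positive spatial correlation” idea might look something like this sketch. This is NOT the HADEX gridding method; the inverse-distance weighting, the 500 km cutoff, and every station and value here are made up purely for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def idw_estimate(target, stations, max_km=500.0, power=2.0):
    """Inverse-distance-weighted value at `target` from (lat, lon, value) stations,
    using only stations inside the assumed positive-correlation radius `max_km`."""
    num = den = 0.0
    for lat, lon, val in stations:
        d = haversine_km(target[0], target[1], lat, lon)
        if d > max_km:
            continue  # too far: extreme-rainfall correlation has died off
        w = 1.0 / max(d, 1.0) ** power
        num += w * val
        den += w
    return num / den if den else None  # None: no station close enough

# Hypothetical RX1day values (mm); the third station is excluded by the cutoff
stations = [(30.0, -90.0, 120.0),
            (30.5, -90.5, 80.0),
            (45.0, -70.0, 200.0)]
print(idw_estimate((30.2, -90.2), stations))
```

The payoff of the `None` return is that a cell with no nearby stations is honestly reported as missing, instead of being filled from data 2,000 km away where the correlation of extreme rainfall is essentially zero.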
Further to the above on Australia’s new Commissioner for Climate: the statement was made, amid the Queensland flood context, that two recent papers had shown a link between flooding and man-made climate change. We can presume these papers to be
Pardeep Pall, Tolu Aina, Dáithí A. Stone, Peter A. Stott Toru Nozawa Arno G. J. Hilberts, Dag Lohmann & Myles R. Allen, 2011: Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000. Nature vol 470, pp 382–385 DOI:doi:10.1038/nature09762
Seung-Ki Min, Xuebin Zhang, Francis W. Zwiers & Gabriele C. Hegerl, 2011: Human contribution to more-intense precipitation extremes. Nature, vol 470, pp 378–381
There were heavy Spring rains in Queensland in late Dec 2010 to Jan 2011. Several people including the above are spreading the story that high sea surface temperatures in the few months before the rains prepared the way.
Here is a graph of SST, Spring, the Coral Sea, where the hot rains were supposed to come from.
http://www.geoffstuff.com/BOMsst_cor_0911_11969.png
Here is a graph of rainfall over northern Australia land. I cannot find a map to match rain over the sea. If you can see a correlation between high SST in Spring and heavy northern rainfall, you have a better stats pack than I do.
Remember that Spring down under is the few months before Christmas.
Even from NOAA global maps, the S-O-N-D-J period in recent years has had Coral Sea SSTs that can be described as
2005 hot
2006 cold
2007 warmish
2008 hot
2009 average
2010 3 lukewarm months (S N J) and 2 average to coolish (O, D).
Like Willis above, one gets a clearer story by delving into the actual data. These are not a 1:1 match, but they are indicative of no significant correlation. So where is the hand of man in all this? Drawing graphs, I suspect.
Peer review that says CO2 not causing extreme events…
“Over the period of 1965–2008, the global TC activity, as measured by storm days, shows a large amplitude fluctuation regulated by the ENSO and PDO, but has no trend, suggesting that the rising temperature so far has not yet an impact on the global total number of storm days.” Wang, B., Y. Yang, Q.‐H. Ding, H. Murakami, and F. Huang, 2010. Climate control of the global tropical storm days (1965–2008). Geophysical Research Letters.
“(1) There is no significant overall long-term trend common to all indices in cyclone activity in the North Atlantic and European region since the Dalton minimum.
Bärring and Fortuniak, 2009 International Journal of Climatology,
“Over the past 24 yr, the landfalling tropical cyclones clearly show variability on inter-annual and inter-decadal time scales, but there is no significant trend in the landfall frequency.” From Zhang et al., 2009.
Chan and Xu write “An important finding in this part of the study is that none of the time series shows a significant linear temporal trend, which suggests that global warming has not led to more landfalls in any of the regions in Asia.” from Chan and Xu, 2009 Proceedings of the Royal Society A, 465, 3011-3021.
Philippines, 1902–2005: “Annual TLP from 1902 to 2005 using the two definitions shows dominant periodicity of about 32 years before 1940 and of about 10–22 years after 1945; however, no trend is found.” Chan and Xu, 2009. International Journal of Climatology, 29, 1285-1293.
“The 1900–01 to 2006–07 trends in the annual percentage of high- and low-extreme snowfall years for the entire United States are not statistically significant.”
Sorrel, P., B. Tessier, F. Demory, N. Delsinne, D. Mouaze. 2009.
France, …no evidence is found of any increase in the frequency or intensity of storms, and in fact, the large storms of southern France seemed more frequent more than 100 years ago. Sabatier, P., L. Dezileau, M. Condomines, L. Briqueu, C. Colin, F. Bouchette, M. Le Duff, and P. Blanchemanche. 2008. Reconstruction of paleostorm events in a coastal lagoon (Hérault, South of France). Marine Geology,
Analyses show that although economic losses from weather related hazards have increased, anthropogenic climate change so far did not have a significant impact on losses from natural disasters. The observed loss increase is caused primarily by increasing exposure and value of capital at risk. Laurens M. Bouwer Bulletin of the American Meteorological Society 2010
The first thing that hit me when I read about this paper was the dates … 1951–1999. Did the world stop measuring rainfall in 1999? Why not 2009? Why not start in 1900? Does anyone else feel like this entire study was likely another alarmist cherry-picking exercise?
Willis:
Nice job. The Nature editorial board appears to have some more explaining to do.
Do you know if there have been verifications of the original Hansen and Lebedeff (1987) results? I realize that precipitation is likely to be different from temperature but the difference in your analysis from H&L is so dramatic that it seems to me to be worth verifying – if for no other reason than the passing of 25 years. Also, I would assume that the analysis of the satellite data would verify any findings with respect to temperature.
One word only is enough to know it’s shonky… Hadley.
We have some fool here in Aus saying that the WA drought is AGW, but the east coast can’t be proven to be AGW… I suspect he’s looking for a Hadley or other handout.
Hello Willis,
Another masterfully exposed, infinitely complex statistical house of cards. Very well done, and thanks so much for your time spent on this. A number of questions immediately came to mind:
Forget man-made influences for a moment, lets not even go that far.
Is there any proof that these rain indexes actually correlate to all past flood events, especially when gridded?
I can think of a couple of problems there…..
Are there causal factors in past flood events other than an immediate precipitation event?
How do they handle a correlation when the precipitation event takes place a very large distance away from the flood event?
sarc on/ Do they take into account wind-caused waves slamming onto earthen levees until they collapse, thus flooding the lower terrain (the Katrina event in Figure 1)? /sarc off :).
Best,
Jose
I hope people have finally figured out that peer review is not what they think it is….
..and that having something published, does not mean it’s right