From the “If your experiment needs statistics, you ought to have done a better experiment” (Ernest Rutherford) department.
From the American Meteorological Society via press release:
AMS TO RELEASE REPORT EXPLAINING EXTREME EVENTS OF 2016
New research links notable weather and climate events to human influence
The American Meteorological Society is releasing the strongest evidence yet that humanity’s long-term release of greenhouse gases has altered recent extreme weather and climate events. In the new collection of peer-reviewed papers analyzing the link between extremes in 2016 and climate change, scientists identified three events that would not have been possible without human-caused changes in the climate.
The linkages are made in a newly released report—Explaining Extreme Events from a Climate Perspective, a supplement to the Bulletin of the American Meteorological Society. This collection of studies from 116 scientists analyzes 21 different events in 2016, from U.S. snowstorms and South African drought to oceanic hot spots and Arctic warmth. Most of the events researchers examined can be attributed, at least in part, to human-caused climate change.
Some of the analyses go beyond atmospheric and oceanic extremes to link societal or ecological impacts, including coral bleaching and crop failures, to human-caused climate change.
The American Meteorological Society will release Explaining Extreme Events of 2016 from a Climate Perspective at the AGU Fall Meeting on Wednesday, December 13, 2017, at 11:30 a.m. CST.
The panel, which includes editors and authors of the papers, will discuss their findings about natural and human influences on the extreme events, as well as the developing maturity of attribution science.
Reporters are invited to attend the press conference in person at the Morial Convention Center in New Orleans or else via live web streaming offered by the American Geophysical Union.
Events Assessed for 2016 include:
- The notorious warm “Blob” in the Pacific Ocean
- Flash Droughts in South Africa
- Wildfires in North America and Australia
- Cold Snap in Eastern China
- Drought in Northeast Brazil
UPDATED: You can download the entire report: http://www.ametsoc.net/eee/2016/2016_bams_eee_low_res.pdf
The press release and links to individual chapters are here:
From the introduction of the 2016 edition:
As in past years, this sixth edition of Explaining Extreme Events from a Climate Perspective includes studies of extreme events from around the world that did not find a role for climate change in influencing the magnitude or frequency of an event. It is important to note that papers are selected for consideration in this report by reviewing author proposals that do not indicate whether a role for climate change will or will not be found. Thus, there is no selection bias on the part of the editorial team toward one particular conclusion, and this publication prides itself as a venue that accepts papers without consideration for whether a role for climate change is found. This year there may be a slight bias toward events that do not find a signal relative to previous years because the editors have begun to limit the number of heat papers in the report which is the event type where a signal is most commonly found.
Given that the majority of heat papers now use a widely established and accepted methodology, the scientific value of continuing to include a large number of heat studies began to seem limited. Extreme weather event types included in this year’s edition include ocean heat waves, forest fires, snow storms, and frost, as well as heavy precipitation, drought, and extreme heat and cold events over land. A number of papers also look at the impacts of extremes (Fig. 1.1). The Summary of Results Table (Table 1.1) gives readers a general overview of the results. Twenty-one of the 27 papers in this current edition identified climate change as a significant driver of an event, while six did not. Of the 131 papers now examined in this report over the last six years, approximately 65% have identified a role for climate change, while about 35% have not found an appreciable effect.
Last year, the editors called on scientists submitting research proposals to investigate potential links between an extreme event and its subsequent impact, and we were excited to see five research teams take on this challenge in this year’s report. Lewis and Mallela concluded that the risk of the extreme Great Barrier Reef bleaching event was increased through anomalously high sea surface temperature and the accumulation of thermal stress caused by human-caused climate change. Jacox et al. and Brainard et al. both examined how high ocean temperatures, caused in part by human-caused climate change, impacted living marine resources, including coral bleaching, reduced fish stocks, and a decrease in seabird counts, in the California Current and the equatorial Pacific, respectively. On land, Sippel et al. found that human-caused climate change is causing warmer winters on the Iberian Peninsula and, when coupled with a wet spring, drove higher ecosystem productivity in the region in 2016. However, these papers represent early approaches, and more work is needed to develop impact attribution methodologies.
As is always the case, we would caution that the results of any single study should not be interpreted as the final word on the matter for that event, nor be generalized to a broader class of extremes. For example, authors of these papers selected specific modeling approaches and made other choices about factors that are important in how the models replicate extreme events, such as terrestrial heat or sea surface temperatures. If other study designs were applied to these events, it is possible a different result would be reached. The importance of the methodological approach in attribution research is further discussed in the summary of this report (Stott et al.).
A big question raised by this collection of research is whether these findings undermine the axiom that “no event is caused by climate change alone and that natural variability always plays some role.” The short answer is no. While several of the studied events were found not to be possible without climate change, natural variability still laid the foundation for the events to occur, and the authors acknowledge this in their papers. Extreme events are always the result of a cumulative set of factors. The building blocks that form the foundation of any event continue to include natural variability, with factors such as El Niño potentially adding to the strength of the event. These temperature-related extremes would likely still have been warm events even without human-caused climate change, but according to these analyses, the events could not have surpassed the extreme warm thresholds that they did without climate change. This was especially the case for the record-setting globally averaged temperature. At the global scale, the natural variations of Earth’s temperature are increasingly seen to pale in comparison to the growing intensity of human-induced warming. Overall, human-caused climate change allowed them to pass a threshold that they could not otherwise have exceeded.
From the introduction in the 2015 edition:
This last year has been exciting for attribution science, as the U.S. National Academy of Sciences released its report on the topic (NAS 2016). To date, it is the most comprehensive look at the state of event attribution science, including how the framing of attribution questions impacts the results. For example, in a complex event such as drought, a study of precipitation versus a study of temperature may yield different results regarding the role of climate change. The report also addresses how attribution results are presented, interpreted, and communicated. It provides the most robust description to date of the various methodologies used in event attribution and addresses the issues around both the confidence of the results and the current capabilities of near-real time attribution. No single methodology exists for the entire field of event attribution, and each event type must be examined individually. Confidence in results of an attribution analysis depends on what has been referred to as the “three pillars” of event attribution: the quality of the observational record, the ability of models to simulate the event, and our understanding of the physical processes that drive the event and how they are being impacted by climate change.
I’m not at all impressed with the “three pillars”, because what typically happens is that if a model doesn’t simulate an event on the first pass, the researchers keep tweaking it until it does. Eventually, they all become “Clever Hans” in being able to respond to the weather events nature provides.
Larry Kummer of Fabius Maximus comments via email to me:
An exercise in data mining.
How many kinds of extreme weather are there? How many of these irregularly defined geographic areas are there? Combine the two into a database. A survey of one year will always find outliers at the 5% level — by chance, because there are so many possibilities. Given that, it is easy for ingenious scientists to link some of them to anthropogenic effects.
More useful would be to see if the overall class of extreme events showed trends over time, or at least whether some classes of extreme events (temperature, precipitation) did so. Otherwise they have not shown that anything unusual happened in 2016. Just weather.
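Kummer’s suggested check (look for a trend in the yearly counts of a whole class of events, rather than mining single years for outliers) is easy to sketch. The Python below, standard library only, runs a permutation test on the slope of a hypothetical series of yearly event counts; the numbers and the 20-year window are made up purely for illustration.

```python
import random

def slope(xs, ys):
    # Ordinary least-squares slope of ys against xs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

def permutation_trend_p(years, counts, n_perm=10_000, seed=0):
    # Two-sided permutation test: shuffle counts across years and ask how
    # often a random pairing yields a slope at least as steep as observed.
    rng = random.Random(seed)
    observed = abs(slope(years, counts))
    shuffled = list(counts)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if abs(slope(years, shuffled)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical yearly counts of some class of extreme events (made-up data):
years = list(range(1997, 2017))
counts = [12, 9, 14, 11, 13, 10, 15, 12, 11, 14,
          13, 12, 10, 15, 13, 11, 14, 12, 13, 12]
print(f"permutation p-value for a trend: {permutation_trend_p(years, counts):.3f}")
```

With flat, noisy counts like these the test correctly reports no trend; a genuine class-wide increase would drive the p-value down regardless of which individual years happened to be extreme.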
Literally, all that is going on here is “p-hacking”, and it is well known to have bias problems.
Data dredging (also data fishing, data snooping, and p-hacking) is the use of data mining to uncover patterns in data that can be presented as statistically significant, without first devising a specific hypothesis as to the underlying causality.
After they find a statistically significant set of data, they then use the “three pillars” to assign causality, and that causality is always climate change. The problem is that there is a built-in bias involved; for example, this 2015 paper in PLOS ONE explains why (bold mine):
The Extent and Consequences of P-Hacking in Science
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
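The “collect or select data … until nonsignificant results become significant” mechanism described above is easy to demonstrate. In this standard-library Python sketch, two groups are drawn from the same distribution (so the true effect is exactly zero), but a z-test is re-run after every batch of new observations and the experiment stops as soon as p < 0.05; the batch sizes and trial count are arbitrary.

```python
import math
import random

def z_test_p(xs, ys):
    # Two-sample z-test assuming known unit variance (the data below are
    # standard normal draws, so this is exact under the null hypothesis).
    n, m = len(xs), len(ys)
    z = (sum(xs) / n - sum(ys) / m) / math.sqrt(1 / n + 1 / m)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def peek_until_significant(rng, batch=10, max_n=100):
    # Both groups come from the SAME distribution: any "significance" is noise.
    xs, ys = [], []
    while len(xs) < max_n:
        xs += [rng.gauss(0, 1) for _ in range(batch)]
        ys += [rng.gauss(0, 1) for _ in range(batch)]
        if z_test_p(xs, ys) < 0.05:
            return True   # stop collecting and declare a "finding"
    return False

rng = random.Random(42)
trials = 1000
rate = sum(peek_until_significant(rng) for _ in range(trials)) / trials
print(f"false-positive rate with peeking: {rate:.3f} (nominal level: 0.05)")
```

Testing after every batch and stopping at the first “hit” inflates the nominal 5% false-positive rate severalfold; the more often one peeks, the worse it gets.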
In Wikipedia, there is this description and a curious graph:
The process of data dredging involves automatically testing huge numbers of hypotheses about a single data set by exhaustively searching — perhaps for combinations of variables that might show a correlation, and perhaps for groups of cases or observations that show differences in their mean or in their breakdown by some other variable. Conventional tests of statistical significance are based on the probability that a particular result would arise if chance alone were at work, and necessarily accept some risk of mistaken conclusions of a certain type (mistaken rejections of the null hypothesis). This level of risk is called the significance. When large numbers of tests are performed, some produce false results of this type, hence 5% of randomly chosen hypotheses turn out to be significant at the 5% level, 1% turn out to be significant at the 1% significance level, and so on, by chance alone. When enough hypotheses are tested, it is virtually certain that some will be statistically significant but misleading, since almost every data set with any degree of randomness is likely to contain (for example) some spurious correlations. If they are not cautious, researchers using data mining techniques can be easily misled by these results.
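The arithmetic in that passage can be checked directly. This standard-library Python sketch draws p-values for a batch of true-null hypotheses (a p-value is uniform on [0, 1] when the null holds) and counts how many clear the 5% bar by chance alone; the 200-test figure is a hypothetical stand-in for the number of event-type/region combinations.

```python
import random

random.seed(1)

n_tests = 200  # hypothetical number of event-type/region combinations tested
# Under a true null hypothesis, a p-value is uniformly distributed on [0, 1].
p_values = [random.random() for _ in range(n_tests)]
hits = sum(p < 0.05 for p in p_values)

print(f"{hits} of {n_tests} null tests 'significant' at the 5% level")
print(f"expected by chance: {0.05 * n_tests:.0f}")
print(f"chance of at least one false positive: {1 - 0.95 ** n_tests:.4f}")
```

With 200 independent tests, about ten chance “discoveries” are expected, and at least one is all but guaranteed.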
There’s also the famous graph showing that climate change correlates with the number of pirates.
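The pirates graph has a serious core: comb through enough unrelated series and strong correlations appear on their own. This standard-library Python sketch generates 50 purely random series and reports the strongest pairwise correlation it can dredge up; all sizes are arbitrary.

```python
import random

def pearson_r(xs, ys):
    # Pearson correlation coefficient of two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
n_series, n_obs = 50, 20  # 50 random "variables", 20 "years" of data each
data = [[rng.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_series)]

n_pairs = n_series * (n_series - 1) // 2  # 1225 pairs to dredge through
best_r, best_i, best_j = max(
    (abs(pearson_r(data[i], data[j])), i, j)
    for i in range(n_series) for j in range(i + 1, n_series)
)
print(f"strongest 'correlation' among {n_pairs} pairs of noise: r = {best_r:.2f}")
```

None of these series measures anything, yet a pair with a strikingly large r always turns up; report only that pair and it looks like a discovery.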
Larry Kummer adds via email:
In general use, p-hacking can be used to mine datasets for patterns to support a pre-existing hypothesis — such as “AGW is increasing the incidence and magnitude of extreme weather.” Operationally, it can result from scientists each checking their own database (a specific kind of weather in a specific geographical region) for increased extremes — which can then be attributed to AGW or CC. Since negative findings are not reported, this inevitably results in “findings.”
This is a classic setup for replication failure, as so many other fields — in both “hard” and “soft” sciences — have discovered. I’ve written about this, starting with this from April 2016: The replication crisis in science has just begun. It will be big.
Scientists have been searching for years for the elusive link between “climate change” and “severe weather.” For example, this editorial in Nature put the onus on them back in 2012:
From Nature: Extreme weather
Better models are needed before exceptional events can be reliably linked to global warming.
As climate change proceeds — which the record summer melt of Arctic sea-ice suggests it is doing at a worrying pace — nations, communities and individual citizens may begin to seek compensation for losses and damage arising from global warming. Climate scientists should be prepared for their skills one day to be probed in court. Whether there is a legal basis for such claims, such as that brought against the energy company ExxonMobil by the remote Alaskan community of Kivalina, which is facing coastal erosion and flooding as the sea ice retreats, is far from certain, however. So lawyers, insurers and climate negotiators are watching with interest the emerging ability, arising from improvements in climate models, to calculate how anthropogenic global warming will change, or has changed, the probability and magnitude of extreme weather and other climate-related events. But to make this emerging science of ‘climate attribution’ fit to inform legal and societal decisions will require enormous research effort.
Attribution is the attempt to deconstruct the causes of observable weather and to understand the physics of why extremes such as floods and heatwaves occur. This is important basic research. Extreme weather and changing weather patterns — the obvious manifestations of global climate change — do not simply reflect easily identifiable changes in Earth’s energy balance such as a rise in atmospheric temperature. They usually have complex causes, involving anomalies in atmospheric circulation, levels of soil moisture and the like. Solid understanding of these factors is crucial if researchers are to improve the performance of, and confidence in, the climate models on which event attribution and longer-term climate projections depend.
Read the full editorial here.
Dr. Roger Pielke Jr. observed then:
The 116 scientists have finally come to a point where they have figured out how to justify their claims with “better models” and data mining, but all the correlation in the world does not equate to causation.
Meanwhile, examining one of the most feared severe weather metrics, tornadoes, doesn’t seem to show a correlation.
But in the case of this recent BAMS special report, the researchers truly believe the correlation must be there, and belief is a powerful motivator. So they set out on a path of self-reinforcing data discovery to prove it and, like kids hunting for venomous spiders, they certainly found what they were looking for.
I weep for science.
NOTE: Shortly after publication, several updates and edits were added to improve the article, including additional excerpts from the 2016 report, and a new comment from Larry Kummer.