Guest Post by Willis Eschenbach
Anthony recently discussed a paper called “The Role of Atmospheric Nuclear Explosions on the Stagnation of Global Warming in the Mid 20th Century” (PDF, author’s version). It advances the claim that nuclear tests changed the temperature in the period 1945-1980, producing a sort of mini-“nuclear winter”. Here’s their main graph:
ORIGINAL CAPTION: Fig. 1 Anomaly in global-mean surface temperature [GST] between 1880 and 2008. Black line: original data and their trend (the broken line). Red triangles: eruptions whose VEI (volcanic explosivity index) is equal or greater than 5 (source). Green vertical bars: annual yield of atmospheric nuclear explosions (UNSCEAR, 2000). Blue line: corrected GST (0.3K was added to GST data of 1945 and later) based on Thompson et al. (2008) and its trend (the broken line). Red line: re-corrected GST anomaly based on effects of atmospheric nuclear explosion (∆t was set at 3 years) and Thompson et al. (2008), and its trend (the broken line). Green line: imaginary linear global warming trend. Gray line: sunspot number (source)
Something about this graph caught my eye, the kind of thing that always makes me curious.
What I found odd was the logarithmic scale on the right, for the green bars showing the “Annual yield of atmospheric nuclear explosions (MT/y).” I don’t like logarithmic scales unless there’s a good reason for them. In this case, obviously, a good reason would be if the temperature cooling effect of the bombs was proportional, not to the total yield of the explosions, but to the log of the total yield. However, this would mean that smaller explosions would cause more cooling per megatonne than large explosions, which seemed unlikely.
And in fact, their Figure 6 shows that the amount of fine dust injected into the atmosphere goes up, not logarithmically with total yield, but linearly with total bomb yield. In addition, their Figure 5 shows that the total temperature drop varies linearly with the dust concentration. Which means that the temperature drop varies linearly with the total bomb yield. So the logarithmic display is deceptive.
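The chain of reasoning above can be sketched in a few lines. The constants below are invented for illustration only, not taken from the paper; the point is just that two linear relations compose into a linear relation, so cooling per megatonne would be the same for small and large tests:

```python
K_DUST_PER_MT = 1.0      # hypothetical: fine dust injected per megatonne of yield
K_COOL_PER_DUST = 0.5    # hypothetical: temperature drop per unit of dust

def cooling(yield_mt):
    dust = K_DUST_PER_MT * yield_mt    # their Figure 6: dust rises linearly with yield
    return K_COOL_PER_DUST * dust      # their Figure 5: drop rises linearly with dust

# Under these two linear relations, cooling per megatonne is identical for
# small and large tests, so there is no physical case for a log scale here:
per_mt_small = cooling(1.0) / 1.0
per_mt_large = cooling(100.0) / 100.0
```

If the relationship really were logarithmic, `per_mt_small` would come out larger than `per_mt_large`, which is exactly the implausible outcome noted above.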
I got to thinking about the question of how I might falsify their claim. I realized that a) the lifetime of dust in the troposphere isn’t very long, months rather than years or decades; b) there’s only a slow exchange of air between the Northern and Southern Hemispheres; and c) the overwhelming majority of the tests were conducted in the Northern Hemisphere. The Brits blew a few off in Australia, and that was about it. China, Russia, and the US did most of the atmospheric testing, and it was virtually all north of the equator.
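Point (a) can be made concrete with a simple exponential-decay sketch. The half-life below is an assumption chosen generously on the long side; tropospheric dust is mostly shorter-lived than this:

```python
def fraction_remaining(days, half_life_days):
    """Fraction of an initial dust injection still airborne after `days`,
    assuming simple exponential decay (half-life is an illustrative assumption)."""
    return 0.5 ** (days / half_life_days)

# Even with a generous 60-day half-life, well under 2% of an injection
# survives a full year, so the dust cannot accumulate from year to year:
after_one_year = fraction_remaining(365, 60)
```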
This means that if their theory is true, the atom bomb tests should have cooled the Northern Hemisphere more than the Southern Hemisphere.
And that, we can say something about. Figure 2 shows the HadCRUT3 Northern and Southern Hemisphere data, along with a non-logarithmic view of the annual yield of the nuclear and thermonuclear bomb tests:
Figure 2. Temperature anomalies for the Northern (blue) and Southern (red) Hemispheres. Orange circles show annual total yield of all atmospheric (above-ground) nuclear and thermonuclear bombs. Yield data from Figure 1. Vertical gray lines show the start and end of atmospheric bomb testing, 1945-1980. Fine dust in the lower troposphere has a half-life of days/weeks, and in the upper troposphere, a few months. Stratospheric dust lasts longer, but not much of the dust made it that high.
Immediately, we can see problems. In no particular order, these are:
• More than half of the total bomb yield comes from just two years, 1962 and 1963.
• The first big temperature drop takes place in the period 1945-1950, during which time there was little testing.
• During the time when most of the fine dust was injected into the atmosphere, from 1951-1963 (94% of total bomb yield), the temperature was not falling.
So it’s not looking good for the hypothesis.
However, we still haven’t examined what I set out to examine. This was the difference between the Northern and Southern Hemisphere temperatures. Figure 3 shows that difference. We would expect the line to drop if the Northern Hemisphere actually were being cooled by dust injected into the atmosphere.
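The difference series itself is trivial to compute. A toy sketch (the anomaly numbers below are invented placeholders, not actual HadCRUT3 values) shows the operation behind Figure 3:

```python
# Toy NH and SH annual anomalies (invented values, standing in for HadCRUT3):
years = [1960, 1962, 1964, 1966, 1968, 1970]
nh = [0.10, 0.12, 0.05, 0.02, 0.00, 0.02]   # hypothetical NH anomalies (deg C)
sh = [0.00, 0.01, 0.02, 0.05, 0.08, 0.10]   # hypothetical SH anomalies (deg C)

# The quantity plotted in Figure 3: NH minus SH, year by year.
diff = [round(n - s, 2) for n, s in zip(nh, sh)]
```

Note that in this toy example the difference falls not because `nh` is dropping but because `sh` is rising, which is the distinction examined below.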
At first glance it looks like they might have something. There is a big drop in the period 1964-72. But there are a couple of problems with that.
First, if we look at Figure 2, we see that the reason for the drop is that the Southern Hemisphere is warming. The Northern Hemisphere is not cooling during that period. False alarm.
Second, they identify the period of “stagnation of global warming” as being 1945-1975. But the relative change between the two hemispheres didn’t happen until 1965.
Overall, I’d say that their explanation of the “stagnation” simply doesn’t hold water. The timing is not right, the size is not right, and the pattern of cooling is not right.
Note that the same arguments apply for the usual culprit advanced for the “stagnation”, which is aerosols, particularly sulfates. As with the bombs, the main sulfate and other aerosol sources were predominantly in the Northern Hemisphere at that time, and they last no longer in the atmosphere than does fine dust from bombs. So the lack of NH cooling argues against the sulfate/aerosol hypothesis as well.
Best to all,
PS – Before you ask, yes, I know that the Partial Test Ban Treaty (PTBT) went into effect in 1963. But the Chinese and the French didn’t pay any attention to that, did they? After all, we’re talking China and France, and besides we were asking them not to do something we’d done over a hundred times.