On Holland and Bruyère (2013) “Recent Intense Hurricane Response to Global Climate Change”

Alternate Title: Climate Science Community Continues to Lose Sight of Reality

SkepticalScience is promoting the Holland and Bruyère (2013) paper Recent Intense Hurricane Response to Global Climate Change as proof positive that hypothetical human-induced global warming has caused more intense hurricanes. See Dana Nuccitelli’s post New Research Shows Humans Causing More Intense Hurricanes. My Figure 1 is Figure 1 from Holland and Bruyère (2013).

Figure 1

The abstract of Holland and Bruyère (2013) begins:

An Anthropogenic Climate Change Index (ACCI) is developed and used to investigate the potential global warming contribution to current tropical cyclone activity. The ACCI is defined as the difference between the means of ensembles of climate simulations with and without anthropogenic gases and aerosols. This index indicates that the bulk of the current anthropogenic warming has occurred in the past four decades, which enables improved confidence in assessing hurricane changes as it removes many of the data issues from previous eras.

That’s right; referring to Figure 1, Holland and Bruyère (2013) created an index by subtracting the multi-model mean of climate models forced by natural factors (variations in solar activity and volcanic aerosols) from the mean of the simulations that are also forced with anthropogenic factors like manmade greenhouse gases—as if the two types of model simulations and their difference represent reality. They then used that model-based index, with little to no basis in the real world, for comparisons to hurricane activity at various hurricane strengths.
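As the abstract describes it, the ACCI is nothing more than the difference between two ensemble means. A minimal sketch of that construction, using synthetic numbers rather than actual CMIP output (the trends and noise levels here are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1975, 2011)

# Synthetic stand-ins for the two ensembles (runs x years): the
# "all forcings" runs include a warming trend, the "natural only"
# runs do not.  These numbers are invented for illustration.
all_forcings = 0.02 * (years - 1975) + rng.normal(0.0, 0.1, (10, years.size))
natural_only = rng.normal(0.0, 0.1, (10, years.size))

# The ACCI as defined in the abstract: the difference between the
# means of the two ensembles, one value per year.
acci = all_forcings.mean(axis=0) - natural_only.mean(axis=0)
print(acci.shape)  # (36,)
```

Note what the sketch makes obvious: the index inherits whatever is built into both sets of simulations, which is exactly the point in dispute.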

Hurricane activity is influenced by tropical sea surface temperatures. Yet climate models cannot simulate the sea surface temperatures observed over the past 31 years, a period that overlaps the 1975-to-2010 window studied by Holland and Bruyère (2013). Refer to the post here for a model-data comparison of satellite-era sea surface temperature anomalies. We have also discussed for four years how ocean heat content data and satellite-era sea surface temperature data indicate the oceans warmed naturally. Refer to the illustrated essay “The Manmade Global Warming Challenge” [42MB]. The models are obviously flawed.

Hurricane activity is also influenced by the El Niño-Southern Oscillation (ENSO). There are fewer Atlantic hurricanes during El Niño years due to the increase in wind shear there. On the other hand, there’s an increase in the intensity of eastern tropical Pacific cyclones during El Niño years. See Table 1, which is from the NOAA Weather Impacts of ENSO webpage.

Table 1

Does Holland and Bruyère (2013) consider ENSO? No. The words El Niño and La Niña do not appear in the paper, and ENSO appears only once, when they’re discussing the reason for the use of 5-year smoothing.

All variance numbers use the 5-years smoothed annual time series to remove ENSO type variability.
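The 5-year smoothing they mention is presumably a centered running mean; one way to sketch it (the window choice and the NaN edge handling here are my assumptions, not details from the paper):

```python
import numpy as np

def smooth_5yr(series):
    """Centered 5-year running mean; endpoints where the full
    window does not fit are returned as NaN."""
    series = np.asarray(series, dtype=float)
    out = np.full(series.size, np.nan)
    # "valid" convolution yields one mean per full 5-year window.
    out[2:-2] = np.convolve(series, np.ones(5) / 5, mode="valid")
    return out

annual = [0.1, 0.3, 0.2, 0.6, 0.3, 0.5, 0.4]
smoothed = smooth_5yr(annual)
print(smoothed)  # interior entries are 5-year means; ends are NaN
```

A 5-year window does suppress much of the 2-to-7-year ENSO variability, but it does nothing about multidecadal signals such as the AMO, which is the point pursued below.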

Can climate models simulate ENSO? The answer is also no. Refer to the post Guilyardi et al (2009) “Understanding El Niño in Ocean-Atmosphere General Circulation Models: progress and challenges”.

Guilyardi et al (2009) includes:

Because ENSO is the dominant mode of climate variability at interannual time scales, the lack of consistency in the model predictions of the response of ENSO to global warming currently limits our confidence in using these predictions to address adaptive societal concerns, such as regional impacts or extremes (Joseph and Nigam 2006; Power et al. 2006).

The multidecadal variability of sea surface temperatures in the North Atlantic is called the Atlantic Multidecadal Oscillation, or AMO. Numerous papers discuss the influence of the AMO on hurricane activity. In fact, the NOAA Frequently Asked Questions About the Atlantic Multidecadal Oscillation (AMO) includes the question Does the AMO influence the intensity or the frequency of hurricanes (which)? Their answer reads:

The frequency of weak-category storms – tropical storms and weak hurricanes – is not much affected by the AMO. However, the number of weak storms that mature into major hurricanes is noticeably increased. Thus, the intensity is affected, but, clearly, the frequency of major hurricanes is also affected. In that sense, it is difficult to discriminate between frequency and intensity and the distinction becomes somewhat meaningless.

The AMO began its multidecadal rise in temperature in the mid-1970s. See Figure 2. By focusing their analysis on the period of 1975 to 2010, Holland and Bruyère (2013) appear to be, in part, attempting to blame manmade greenhouse gases for an increase in activity that’s already been attributed to the natural variability of the AMO.

Figure 2

Off-topic note: Referring to Figure 1 from Holland and Bruyère (2013), notice how the surface temperature data end around 1999 in panel b, while the models continue for a number of years beyond that, probably to 2005, the end year of the historic CMIP5 simulations. Some climate scientists apparently have not considered the assumption a reader is forced to make when the end dates in a model-data comparison do not line up: that the models would have fared very poorly had Holland and Bruyère (2013) extended the data to 2005, or to 2010, the end year of their study. Note also that the data begin after the start year of the models. In other words, most readers wonder what the authors are hiding and assume the worst.


Holland and Bruyère (2013) appears to be a flawed attempt to counter the findings of the recent (2012) IPCC Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX). See the Summary for Policymakers here. The IPCC writes:

There is low confidence in any observed long-term (i.e., 40 years or more) increases in tropical cyclone activity (i.e., intensity, frequency, duration), after accounting for past changes in observing capabilities.

Holland and Bruyère (2013) is yet another peer-reviewed study that relies on climate models as if the models represent reality, when climate models clearly do not. Eventually, the climate science community will have to come to terms with this—possibly not in my lifetime at the rate they’re going. And the portrayers of gloom and doom at SkepticalScience like Dana Nuccitelli somehow find papers like Holland and Bruyère (2013) to be credible. Nothing surprising about that.



As far as US landfalling hurricanes are concerned, there is absolutely nothing unusual in long term trends, and they are not rising (even taking 2004/5 into account in 10-year trends).

Models = Rubbish.
No wonder Nuttycelli was so impressed!

Nuccitelli is merely trying to find supporting documentation to a preconceived conclusion. In other words, he is using Anti-Science.


Scooter is the cause of the bad weather … all that hot air has to create something of a storm !
Never before in the history of ‘climate science’ has one boy done so much to discredit so many of his brethren.


That’s like listing the differences between Asimov’s and Heinlein’s idea of what 2050 will look like and declaring it the result of lack of science funding.

Hmmm … where was I reading about corruption in science?

Stephen Richards

Paul Homewood says:
April 30, 2013 at 4:32 am
As far as US landfalling hurricanes are concerned, there is absolutely nothing unusual in long term trends, and they are not rising (even taking 2004/5 into account in 10-year trends).
Joe Bastardi is a good read on this subject, as is Ryan Maue. They are predicting a period of east coast hurricanes / extra-tropical storms this year and in several subsequent years.

Without “anthropogenic gas forcing”, the climate models cannot reproduce past periods such as the Minoan, Roman and Medieval warm periods and the Sporer, Maunder and Dalton cold periods. So H&B(2013) is complete nonsense.


we took an average of a bunch of imperfect models of a non-linear semi-chaotic system and subtracted another average of another bunch of equally unreliable models and called the result the ACCI (or Ar$$hole Created Change Index) and decided that this represents ‘science’.

Interesting time to publish this, when we are continuing the all-time record stretch since a major (category 3 or larger) hurricane last made landfall on the continental US. Note well that Sandy wasn’t even a hurricane or a tropical storm at the time of landfall, let alone category 1; it was the rare and completely coincidental blend of a comparatively large but weak hurricane and a northeaster, a “perfect storm”.
Interesting also is the way CCSM4 has flattened all of the climate variation from the 1870s onward out of the blue line. All of the observed warming in this stretch is now anthropogenic. 100%. Holy Hockey Stick, Batman! I gotta say, that’s ballsy.
Now if they could just find the human causes of all of the REST of the SUBSTANTIAL climate variation observed in the thousand years or two before 1870… because obviously the true nature of the Earth’s climate system is to be perfectly flat on a century timescale.
One thing that puzzles me about figures of this sort. What, exactly, makes these models fluctuate in the remote past? How, in particular, do they manage to differentiate 1880 from 1885? What causes the ensemble to coherently move up and down, to the point where there are sharp features in the ensemble predictions, features with a nearly annual scale? What makes these features precisely correlate with the observed temperature in a lot of cases years after the model was presumably started with an initial value and then turned loose?
I’ve done a lot of Monte Carlo simulations. I understand random numbers as well or better than most people on the planet. I have a pretty good grasp of statistics. This sets my spidey-senses tingling. If I were reviewing a paper that published this figure in particular, I’d have to ask how an a priori model manages to produce coherent dips at 1883, 1903, 1963 along with ENSEMBLE predictions in between with THE SAME SCALE OF NOISE as the actual temperature series.
This is, frankly, impossible. If one runs 100 model calculations all from the same starting point but with random noise intended to simulate our uncertainty as to initial state and the precise values of drivers along the way, a SUCCESSFUL result would have 1/10th the visible noise — it would be self-smoothing the annual noise out and leaving only a curve with a gentle decadal variation. It is literally inconceivable that it would preserve the same scale of NOISE (short term variation) as the actual temperature curve. It is also inconceivable that it would end up with many, if not most, of its major annual fluctuations strangely coincident with fluctuations in the actual temperature record. It’s sort of like they PRODUCED this curve by taking the actual temperature record and SUBTRACTING a smoothed estimate of the “greenhouse gas correction” (only), as opposed to actually simulating the temperature record from first principles and an arbitrary set of initial conditions.
This, in turn, means that the curve is completely useless for determining the greenhouse gas correction by subtraction — it simply reproduces what the authors put in, it begs the question.
It’s as if I took the existing Dow Jones industrial average curve for the last fifty years, computed a smoothed “correction” to the curve based on the assumption that inflation proceeds exponentially at 2% a year, subtracted the exponential from the data (leaving the noise intact), and published it as a “prediction” of the DJA “without inflation”. One cannot then use the curve to “deduce” or “prove” that the inflation rate was 2%, it is built in by the original unproven assumption. Note well that this curve, too, would “look funny” because (for example) it would somehow “predict” the major market corrections along the way. Indeed, these corrections would have their scale AMPLIFIED as one moves to the right on the curves, because the scale of the characteristic noise will shift in the data, but not in the correction.
This last “impossible/unlikely” feature is also present in the data curves above — on the left the 1883 fluctuation is perfectly coherent and synchronous across both models and data. This synchronicity persists across the decades where there was little “anthropogenic warming”. However, at the end past the “hockey stick”, there are a number of other fluctuations that just happen to coincide with data features, but are a factor of 2 or 3 TOO LARGE. What’s up with that? Anthropogenic gases somehow amplify temperature and suppress natural fluctuation?
The problem is obvious. One might be able to write a market simulator that can generate potential DJA traces from (say) 1963 to the present, using initial data in 1963 to start the process. One might even be able to use as inputs at least some other broad indicators such as the length of women’s skirts or the number of sunspots per annum, and get some inherited contemporary state dependence. However, you would NEVER, EVER be able to predict the major corrections with an ENSEMBLE of runs, and if one does use the price of (say) a NON-DJA stock as an input — or the value of the SPI as an input — one would have to assert that one isn’t actually modeling the DJA at all, only using various covariant proxies to generate it and then subtracting hypothetical corrections to prove those corrections exist.
This is the rub. I do not believe it possible for anyone to a priori precisely hindcast the actual climate because it is essentially a chaotic process. If someone did a very, very good job, they might manage to smoothly interpolate the century-scale coarse grained average, since we cannot at this time actually predict what just ONE of the major inputs into climate on an annual scale is going to be doing just ONE year into the future — ENSO. Noise in ENSO alone would completely erase the coherence in any properly done ensemble computation.
Bah! Humbug!
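[Editor's note: rgbatduke's "1/10th the visible noise" figure is the standard 1/√N behavior of an ensemble mean. A quick Monte Carlo sketch, under the assumption of independent additive annual noise per run (all numbers synthetic):]

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs, n_years = 100, 150

# 100 runs sharing a smooth underlying signal, each with its own
# independent annual noise (sigma = 0.2).
signal = np.linspace(0.0, 0.8, n_years)
runs = signal + rng.normal(0.0, 0.2, (n_runs, n_years))

# Residual noise of one run vs. residual noise of the 100-run mean.
single_run_noise = (runs[0] - signal).std()
ensemble_noise = (runs.mean(axis=0) - signal).std()
ratio = single_run_noise / ensemble_noise

print(ratio)  # close to sqrt(100) = 10
```

If an ensemble mean instead shows year-to-year wiggles as large as a single observed realization, the independence assumption has failed somewhere, which is precisely the comment's complaint.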


Wait up, they aren’t just linking big hurricanes to warming, they specifically link them to Anthropogenic Warming!
So there you have it, even hurricanes are AGW aware.
(97 % of all hurricanes agree to be larger if the warming is anthropogenic)

Jim Clarke

In the fantasy world of computer gami…I mean modelling, one creates one’s own reality. Who would have thought that modern climate science would be so derailed by those brought up on Dungeons and Dragons?


Each time they push the equals button they get the same answer. And whenever they make a graph it looks like a hockey stick. Humm. Symptoms. Sounds like modern climate science is infected with preconceiveditus combined with mybentoma. Those exhibiting late stages show a psychotic fear of the slightest increase in ambient temperatures, and in the most advanced stage, hysteria and a walking coma after seeing tree rings or coal. The cause of this disease is still elusive at this time, but some studies show lack of research grants may be a prominent factor for infection.
/sarc off

Pamela Gray

I am guessing a Ouija board could be added to the model mixes. Anybody thought of that? Now that would be funny, to compare a Ouija board run with model output. Very funny!


As far as US landfalling hurricanes are concerned, there is absolutely nothing unusual in long term trends, and they are not rising (even taking 2004/5 into account in 10-year trends).

Same thing in Australia … http://www.bom.gov.au/cyclone/climatology/trends.shtml
Trends in tropical cyclone activity in the Australian region (south of equator; 105–160°E) show that the total number of cyclones appears to have decreased to the mid 1980s, and remained nearly stable since. The number of stronger cyclones (minimum central pressure less than 970 hPa) shows no clear trend over the past 40 years.
The BoM chart shows total and intense cyclones are down since the 1980s, which I suppose is “nearly” stable.
So apart from the US and Australia, I assume the world is being blown apart by hurricanes and cyclones.


I’d like to see how this is affected by the ACE, oh wait, they didn’t use empirical data, they only used model data. At some point, they’ve got to go through a thought exercise and realize every feedback has a limitation. Otherwise they’ve found the holy grail for endless energy. Almost nothing in but unlimited potential out!


Say. Wouldn’t there have to be significant warming in the first place before there could be significant effects from it? These people are nuts. Blaming every breath of wind on man made fossil fuels for a living may pay, but it sure makes a fellow look stupid if you think about it.

richard verney

rgbatduke says:
April 30, 2013 at 5:31 am
“…I’d have to ask how an a priori model manages to produce coherent dips at 1883, 1903, 1963 along with ENSEMBLE predictions in between with THE SAME SCALE OF NOISE as the actual temperature series.,,,”
Presumably, the 1883 dip is Krakatoa.
I wonder what forcing they applied for Krakatoa, and I ponder what global temperatures would have been in the years between 1883 and 1887 but for Krakatoa.
If Krakatoa had not erupted in 1883, would the temperatures in 1883 to 1887 have been as warm as those in the 1930s/40s, or even as warm as those seen towards the end of the 20th century?

Frank K.

Looking at the CCSM4 ensemble global temperature comparison with “observed temperatures”, the agreement is pretty bad. And why do the “observed temperatures” stop at around 2000??? Oh yeah…I guess they were inconvenient…

rgbatduke says:
April 30, 2013 at 5:31 am
This last “impossible/unlikely” feature is also present in the data curves above —
Agreed. The hindcast is not acting the way a true predictive model would act. What we are seeing is curve fitting with no predictive value. The correlation is a result of chance and selection bias. The model builder has chosen the model that accidentally delivers the answer they expect, while rejecting those models that may be delivering the true answer.
This was all studied to death 50-60 years ago when computers first became widely available. At the time they were given a name: “Linear Programming Models”. And they were fantastic at solving certain classes of problems and complete rubbish when used on other classes of problems.
Before you can use the output of an LPM or any model, you must first validate that the model is modelling reality, and not simply giving you the answer that you expect. Because what models do better than anything else is predict the answer the model builder is looking for. Like a computerized soothsayer, models always seek first to tell you what you want to hear, because otherwise you will replace them with a different model.
What confuses model builders today as it did years ago is that models are inherently goal seeking. They are trying to give you an answer. But the question they are answering is not necessarily the question you are asking. If you want a “yes man”, your model will deliver a “yes man”, because computers are inherently amoral. They have no compulsion whatsoever about telling lies and dressing it up as truth.

Arno Arrak

In Figure 1 the observed temperature curve is phony. There is an 18 year standstill in the eighties and nineties which they have obliterated by showing a steady rise. I have objected to this periodically and GISSTEMP, HadCRUT and NCDC finally changed their eighties and nineties data last fall to show the true temperature as shown by satellites. This satellite temperature curve is figure 15 in my book “What Warming?” that I published in 2010. Until last fall the official temperature sources ignored it. They have changed their data but the curve in Figure 1 above does not show it.

Steve Keohane

rgbatduke:April 30, 2013 at 5:31 am
I think you covered it well. Thanks to you and Bob T. for keeping it real.

The first question a model builder should ask when looking at the results of a model is this: is the model telling me what I expected to see? If the answer is yes, then you must immediately suspect the answer is wrong.
If the model pretty much tells you, the builder, what you expected to see, then what is most likely is that the model has not solved the question you asked. Rather the model has solved a much simpler problem; what answer will you accept?
however, if the model predicts something much different than what the model builder believes, then the result might be interesting. it could still be completely wrong, but in a fashion that can perhaps be more easily detected.
Thus, when scientists release results, it is often much more useful to see their failures than their successes. When asked about his failure to create a working electric light, Edison answered, “I haven’t failed, I have discovered 1700 different ways in which not to make a light bulb”.
Similarly, if model builders would show us the contradictory results in addition to the confirmatory results, our knowledge would progress much faster. Otherwise we are doomed to re-discover the same dead ends and red herrings over and over again.

Jeff Alberts

Stephen Richards April 30, 2013 at 5:22 am
Stephen, it would really be great if you could differentiate your text from the quoted text. Even a line of hyphens after the quoted text would help. Ideally, though, learning how to use blockquote would be a plus to your commenting.

Bob Tisdale says:
April 30, 2013 at 7:41 am
There has never been a global temperature “standstill in the eighties and nineties” in “GISSTEMP, HadCRUT and NCDC” data. And there isn’t one now.
There is a widespread discontinuity around 1990 that coincides with the collapse of the Soviet Union and the abandonment of much of the temperature reporting from Siberia. This may be the source of the confusion.

Frank K.

Bob Tisdale says:
April 30, 2013 at 7:41 am
Bob – you are correct that “global temperature” index is never static. However, the whole concept of a single “global temperature” is contrived and not very thermodynamically useful. Moreover, I would love to see the same anomaly plots converted to absolute temperatures – has anyone done this?? Since “global warming” is essentially dominated by radiation heat transfer, absolute temperature is the important parameter – not a temperature anomaly. I seem to recall a plot showing model comparisons in absolute temperatures, and they were all over the place.

It does seem a remarkable coincidence that the period in question, the warming between 1980 and 2000 that climate scientists “cannot explain” except by CO2, is also the time at which there was a dramatic decline in the number of temperature reporting stations around the world.
Doesn’t Occam’s razor tell us that the most likely explanation has nothing to do with CO2? Rather, what we are looking at is more likely an accounting error. Most likely these temperatures do not reflect “record profits”; rather, what we are seeing is a change in the accounting methods that has not been noted on the financial statements. In business, leaving this off the financial statements is called fraud. In climate science it is called “creative accounting”.

Frank K. says:
April 30, 2013 at 8:03 am
I seem to recall a plot showing model comparisons in absolute temperatures, and they were all over the place.
In terms of absolute temperature, the models are indistinguishable from noise. The total observed warming over the past century is minuscule in comparison to the daily, annual and regional fluctuations that occur naturally.
In effect, climate science is the science of reading tea leaves and trying to attach meaning to events that, for all intents and purposes, are random at the scale of human lifetimes.

Chuck Nolan

In the words of the immortal Willis
“It’s models all the way down.”

Models smodels. Hmmm.. Smodle. Sodmel. Soldme. Sold ’em on global warming. Works for me.

David Jay

They also ended Figure 1 at about 2000 so they didn’t have to show actual temperatures running flat since.
How can anyone even pretend to be doing science when they “hide the pause”? How does this get past peer review? *Facepalm*

Nik Marshall-Blank

Changing the data set used for papers invalidates those papers. Anybody got a list of them?

Billy Liar

rgbatduke says:
April 30, 2013 at 5:31 am
If I were reviewing a paper that published this figure in particular, I’d have to ask how an a priori model manages to produce coherent dips at 1883, 1903, 1963 along with ENSEMBLE predictions in between with THE SAME SCALE OF NOISE as the actual temperature series.
1883 = Krakatoa
1902 = Santa Maria
1963 = Agung
1982 = El Chichon
1991 = Pinatubo
I presume these are the wiggle-matching points which are the test of all GCMs’ abilities. They must show appropriate dips at these points to appear credible (even though they are not).
As we know, only major volcanic eruptions and the evil gas CO2 are capable of altering earth’s temperature.

I’m flabbergasted by their inability to understand proportions. Major hurricanes are increasing as a proportion of all hurricanes because the total number of hurricanes is dropping. I’ve got a graph up of the last 30 years which illustrates this quite well.
Will the sophistry ever end from these people?
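[Editor's note: the proportion point above is simple arithmetic, and worth making explicit. A toy check (the counts below are invented for illustration, not real hurricane statistics):]

```python
# Hypothetical storm counts for an earlier vs. a later period.
early_total, early_major = 20, 4    # 4/20 = 20% major
later_total, later_major = 12, 3    # 3/12 = 25% major

early_share = early_major / early_total
later_share = later_major / later_total

# Fewer major hurricanes in absolute terms, yet a larger share:
# a rising proportion says nothing by itself about rising activity.
print(later_major < early_major, later_share > early_share)  # True True
```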


How can they talk about the frequency / intensity of hurricanes without looking at ACE (Accumulated Cyclone Energy)? Global ACE

Nik Marshall-Blank:
At April 30, 2013 at 9:11 am you ask

Changing the data set used for papers invalidates those papers. Anybody got a list of them?

I reply.
No, I don’t think there is such a list. But I know that at least one paper was prevented from publication by the frequency and magnitudes of alterations to global temperature data sets. See
PS: I suspect you may want to read the draft of the paper; it is Appendix B of the item at the link.

Arno Arrak

rgbatduke April 30, 2013 at 5:31 am:
“… If I were reviewing a paper that published this figure in particular, I’d have to ask how an a priori model manages to produce coherent dips at 1883, 1903, 1963 along with ENSEMBLE predictions in between with THE SAME SCALE OF NOISE as the actual temperature series.”
Could it be the work of a major massage parlor? All models put in dips for an imaginary volcanic cooling based on the date of the eruption. It can be demonstrated that volcanic cooling, so-called, is nothing more than misidentification of naturally occurring La Nina temperature dips. They don’t get it because they still don’t know that the entire global temperature curve is a concatenation of El Nino peaks with La Nina valleys in between. They match up accurately in temperature curves from all parts of the world, as Müller’s data show. He, too, did not understand this and considered them noise. To modelers it is noise to be wiped out with a running mean.
If a volcanic eruption coincides in time with an El Nino peak, the La Nina valley that follows it is immediately recruited for its “volcanic cooling” dip. But then these guys get in trouble if, by chance, the eruption coincides with a La Nina valley. What follows a La Nina valley is an El Nino peak, and mysteriously, that volcano will refuse to do any cooling and will actually cause a temperature rise. That is what happened to El Chichon, and they are still going through contortions to find its non-existent cooling. How come Pinatubo had such a distinct cooling but El Chichon did not? Very simple: the Pinatubo eruption just happened to coincide with an El Nino peak, and the La Nina that followed it was recruited to serve as its volcanic cooling dip. I explained it all in simple language in my book “What Warming?” two years ago, but these big experts do not read anything that their friends did not write.
Your point about the scale of noise is of course important. Since what they think of as noise is in large part ENSO oscillations, which determine the scale of that “noise”, it is easy to match its scale with random noise input to the models. Oh, one more thing. There are exceptions to the regularity of these El Nino peaks, one example of which is the super El Nino of 1998. Its exceptional size is ignored or suppressed in most ground-based temperature curves.

Lars P.

As one poster put it a couple of days ago on a different blog post here, it is not about “climate skepticism” versus “climate alarmism”. It is pure and simple about science. Good science as it should be practiced, versus bad science as we get delivered.
The problem is deeper and needs thorough analysis. I’ve seen some good starts; if I remember correctly there was a reproducibility project. Any paper that cannot be reproduced is nothing until it is reproduced.
I would further say that any paper which does not come accompanied by the raw data and a clear description of how the data were “prepared” is again nothing. Zero, nada.
Then again, any public funded paper should be stored on a public server with access for the public.
I think these discussions about climate science and climate skepticism are only the start of a bigger conversation towards ensuring proper standards for science. I’ve seen enough of bad science done to suit the narrative for grant money.
And of course then come the grants. Who gets the grants for a study? That should be a very interesting conversation. Maybe this should be done more democratically for public money?
Nevertheless some changes need to come.
The clowns are linking not only more intense hurricanes to anthropogenic climate change, but even volcanism! Incredible but true.
And where does the anthropogenic gas forcing take the models over this century so far (2000-2013)? How convenient that the measured temperature record ceases so early. Why is the data truncated this way, and who accepted it? “Hiding the decline” again.
Interesting that there is still a lot of money to waste on such “studies”.


Still no trace of ENSO, PDO, AMO in natural climate forcing reconstructions. Where did all the research money go? How can it be that amateurs have a better understanding of climate than paid position holders?


Rule one of climate science: when the models and reality differ, it is reality that is in error.
Bottom line: all their efforts are put into producing the ‘right’ result, not the correct one, so how they get it means nothing to them.

Bill Illis

We would have very, very few Cat 4,5 hurricanes if it wasn’t for the GHGs we have added.
That doesn’t explain why there were so many of them before the net anthro forcing became positive around 1970.
And it doesn’t explain why there are so few Cat 4,5 hurricanes today/lately.
The only explanation is that the authors faked the numbers (which is what we find every time someone goes into depth on one of these “we’ve found global warming” studies; you can set your watch by it).

Nik Marshall-Blank

Yet another Waper (What if pAPER). Where are the facts?


Plain Richard

AMO is the signal you get when you leave out a linear trend (such as a warming trend). Bob, as usual, is wrong. http://www.nature.com/ncomms/journal/v2/n2/fig_tab/ncomms1186_F1.html