Why is 20 years statistically significant when 10 years is not?

Guest post by James Padgett

Many of you are aware that the concept of continental drift, proposed by Alfred Wegener, was widely ridiculed by his contemporaries. This reaction came despite the very clear visual evidence that the continents fit together like the pieces of a giant puzzle.

I think this is where we are in climate science today. There is an obvious answer that many experts cannot see, even though a young child presented with the evidence would understand it.

Our current crop of experts cannot see simple solutions. Their science is esoteric and alchemical. It is so complex, so easy to misunderstand, that, like the ancient Greek mystery religions, there is a public dogma and then there are the internal mysteries only the initiated are given access to.

And then there are the heretics who challenge their declared truths.

That isn’t to say that many climatologists aren’t smart. On the contrary, they can be very smart, but that doesn’t preclude them from being very wrong on both collective and individual levels.

One of the most brilliant men of the last century, John von Neumann, believed that by the 1960s our knowledge of atmospheric fluid dynamics would be so great, and our computer simulations so precise, that we would be able to control the weather by making small changes to the system.

It is true that the climate models used today do a very good job with fluid dynamics, but despite that understanding we can neither predict nor control the weather or the climate to the degree he imagined.

An incredible genius, he made a mistake. He didn’t understand the fundamental chaos that made his vision impossible.

In regard to the climate, I hope my simple vision is closer to reality than the excuse-filled spaghetti hypothesis that currently brandishes the self-given title of “settled science.”

My proposal, that climate is primarily driven by solar and oceanic influences, is probably believed by more than a few skeptics, but hopefully I can make a compelling case for it that both small children and climate scientists can understand. To that end I’ll take a quick look at the temperature record from 1900 until the present. I will explain the case for the oceanic/solar model and articulate the excuses given by the anthropogenic camp for the decades that inconveniently do not line up with the hypothesis of carbon dioxide being the primary driver of climate change.

1900-1944:

This period is largely warming. What could possibly be the cause of that?

The sun seems to be the obvious answer. It is so obvious, in fact, that even most mainstream climatologists admit its influence in these years. Some also say there is an anthropogenic effect in there, somewhere, and they could be right, but it certainly isn’t obvious.

And while the Atlantic is in its cool phase over the earlier part of this period, the largest ocean, the Pacific, is warm, especially in the last couple of decades, but when it turns into its cool phase…

1945-1976:

We get 30 years of cooling in the surface station record.

According to proponents of the anthropogenic model, the unprecedented increase in carbon dioxide following World War II was not only masked, but overpowered by sulfate emissions. That is an interesting excuse, but this cooling period exactly matches the cool phase of the Pacific Decadal Oscillation (PDO).

So much so that when it goes into its warm phase in…

1977-1998:

We get 20 more years of warming, which is kicked up a notch towards the end as the Atlantic goes into its warm phase.

That leaves us with the final period from…

1999-Present:

After the super El Nino of 1998, temperatures have largely flat-lined and perhaps even dropped slightly. Both the Atlantic and Pacific are in their warm phases and the sun remains at the “high” levels following the recovery from the Little Ice Age, but the Pacific seems to be wobbling cooler and cooler as it shifts back into its cool phase.

True, this is the “warmest decade on record,” but it is also the only decade on record with both oceans in their warm phases during a time of relatively high solar activity. The only comparable period would be the 1930s and early 1940s, around the time of the Dust Bowl, and the sun was not as active then, assuming the records are an accurate reflection of global temperatures back then.

So how do climate scientists explain this lack of warming for over a decade? Some blame the sulfates again, a classic excuse, while others say that the heat has teleported deep into the oceans. I say teleported because there is no record of that missing heat passing through the well-measured depths it would normally have to cross on its way to those unmeasured depths.

Of course, others say this time period is simply not statistically significant, yet the only period of warming we can’t directly trace to the sun, 1977-1998, a mere twenty-year period, is certainly statistically significant in some minds.

To that I only have one question for them:

Are you smarter than a 5th grader?

Cheers,

James Padgett

200 Comments
Curiousgeorge
November 5, 2011 7:26 am

wayne says:
November 5, 2011 at 12:49 am
Since a twenty-year period is also a 240-month period, are those 240 data points enough to be significant? And in reality we are measuring daily. Are 7300 data points enough to be significant? Never could figure out the logic right there, but would love for a statistician to explain it to me (and others).
Who is setting these rules here? Pure mathematics or the climate scientist community (aka IPCC)?
====================================================================
The level at which the result of a statistical analysis becomes significant is essentially an arbitrary decision on the part of the analyst or his boss. Stats 101. There are conventions that have been adopted over the years (1, 2, or 3 sigma), but they are not carved in stone. The significance level sets the probability of a Type I error (a false positive), while the power of the test governs the chance of avoiding a Type II error (failing to detect a real effect).
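
To put a number on wayne’s question about whether 240 monthly points are “enough”, here is a minimal sketch in Python (synthetic data, assumed noise parameters; none of this comes from the post): because monthly anomalies are autocorrelated, the effective number of independent samples, and hence the significance of a fitted trend, is much smaller than the raw count of data points.

```python
# A minimal sketch (synthetic data, assumed noise parameters) of why 240 monthly
# points are not 240 independent samples: monthly anomalies are autocorrelated,
# so the effective sample size -- and hence the significance of a fitted trend --
# is much smaller than the raw count.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 240                          # 20 years of monthly data
true_trend = 0.01 / 12           # assumed 0.01 C/year, expressed per month
phi = 0.6                        # assumed lag-1 autocorrelation of the noise
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = phi * noise[t - 1] + rng.normal(0, 0.1)
months = np.arange(n)
y = true_trend * months + noise

# Ordinary least-squares trend and its naive p-value.
slope, intercept, r, p_naive, se = stats.linregress(months, y)

# Adjust for autocorrelation via the effective sample size n_eff = n(1-r1)/(1+r1).
resid = y - (intercept + slope * months)
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
se_adj = se * np.sqrt((n - 2) / max(n_eff - 2, 1))
p_adj = 2 * stats.t.sf(abs(slope / se_adj), df=max(n_eff - 2, 1))

print(f"naive p-value: {p_naive:.4f}   autocorrelation-adjusted p-value: {p_adj:.4f}")
```

With a lag-1 autocorrelation near 0.6, the 240 monthly values carry roughly the information of 60 independent ones, which is why a “significant” result based on the raw count can evaporate once the adjustment is made.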

commieBob
November 5, 2011 7:26 am

R. Gates says:
November 5, 2011 at 6:21 am

This is muddled thinking at its worst. The climate does not change by chance.

You can make the argument that nothing happens by chance. For instance it is, in theory, possible to predict what numbers will come up when you throw the dice. All you need is a bunch of data and a good computer.
Given that you, personally, are unable to predict how the dice will land when they are thrown by a human being, you are stuck with talking about chance.
Given that we don’t have:
1 – sufficient data
2 – sufficient computer power
3 – sufficient knowledge of the underlying processes
we are stuck with talking about chance when we talk about the climate. We describe it as a chaotic system because its sensitivity to initial conditions makes accurate predictions almost impossible no matter how much computer power we can bring to bear.
http://en.wikipedia.org/wiki/Butterfly_effect
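
For readers who would rather see the butterfly effect than read about it, here is a minimal sketch using the standard textbook Lorenz parameters (nothing here is climate data): two trajectories that start a ten-billionth apart end up in completely different states.

```python
# A minimal sketch of the butterfly effect linked above (standard textbook Lorenz
# parameters; nothing here is climate data): two trajectories that start a
# ten-billionth apart end up in completely different states.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations (crude, but enough for a demo)."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-10, 0.0, 0.0])   # perturb one coordinate by a tiny amount

for step in range(1, 5001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```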

Cherry Pick
November 5, 2011 7:31 am

DAV says:
November 5, 2011 at 6:56 am
“A fine article though I’d like to point out there is no such thing as “fundamental chaos” in nature.”
I disagree. Nature has many non-linear adaptive systems where small changes in the initial conditions can result in large changes in the whole system. For example, the behavior of water is quite different when the temperature changes from 1C to -5C. Non-linearity means that the governing equation is not linear, e.g. radiation is proportional to the fourth power of temperature. Adaptive means that there are positive and negative feedbacks. This is what climate sensitivity is about. For example, an increase in temperature due to CO2 in the atmosphere is immediately connected to thermal expansion of the air, which opposes the change.
A lottery machine is a real-life example of chaos theory. You can’t predict the numbers because small variations in the initial conditions result in a random outcome. You could argue that it’s not random because the rolling balls behave deterministically, but that is what we call chaos.

ferd berple
November 5, 2011 7:33 am

DRSG says:
November 5, 2011 at 7:01 am
the world is not warming, despite the evidence.
Where I live, things are cooling as compared to a month ago, but warming as compared to an hour ago.
If you go back far enough in time, there is no place on earth that was not at some point in the past warmer or colder than it is right now. Thus, depending on where you start measuring, every point on earth is both warming and cooling right now, at the same time.

November 5, 2011 7:38 am

Since 1900 the PDO signal aligns with the temperature record; solar influences are important but perhaps play a second-tier role. Some might say the PDO is an after-effect of ENSO, but the data are certainly not clear on this front. Some years the PDO leads the ENSO signal, as we see now, and other years we see the reverse. An El Nino can produce the warm water pool above New Guinea that fuels the next La Nina, but what about the current situation, where we see a strong La Nina forming straight after another La Nina? A cool PDO means a warm pool off the coast of Japan, which, with the current prevailing winds, can build the warm pool necessary to fuel the Walker pump.
The current La Nina shows us that ENSO is not driving the PDO; the next one may be different.

Curiousgeorge
November 5, 2011 7:48 am

wayne says:
November 5, 2011 at 12:49 am
Since a twenty-year period is also a 240-month period, are those 240 data points enough to be significant? And in reality we are measuring daily. Are 7300 data points enough to be significant? Never could figure out the logic right there, but would love for a statistician to explain it to me (and others).
Who is setting these rules here? Pure mathematics or the climate scientist community (aka IPCC)?
================================================================
Wayne, rather than waxing poetic about the issue, I’ll simply refer you to this: http://www.statsoft.com/textbook/statsoft-textbook-search/?Name=significance

Nick Shaw
November 5, 2011 7:53 am

@Jerome says sitting around his pool on sunny days he notes the pool gets warmer.
Perhaps, but, I live in Costa Rica and my pool, despite having a dark bottom, does not get appreciably warmer unless there are MANY sunny days AND the kids are using it all the time!
My pool is deep, to accommodate scuba classes and let me tell you, it is very cold most of the time as are most pools in this country that are not heated. When I say many sunny days, I mean 3 or 4 months at a stretch!
I’d be willing to give R.M.B. some credence for his surface tension theory!

John Conner
November 5, 2011 7:57 am

I like the simplicity of the article, but it seems to be lost on those who insist on analytical analysis.
Back to the comparison to plate tectonics: complex theories might “explain” things, but stepping back and looking at the big picture (the continental puzzle) was more correct. This is another example of Occam’s Razor. The AGW crowd (and some of our commenters here) believe that they need to keep building a more and more complex model to explain global warming. But every complexity is subject to the biases of the researcher and the lack of a significant period of observed data.
Such is the arrogance of modern human intelligence. “If it happened, then humans must have contributed to its happening.” Or the belief that humans can do a damn thing to stop it. I firmly believe we cannot. It’s a planet. Can we do anything to change the magnetic field? Can we preserve forests by keeping all fires (even natural ones) from happening? Better to sit back and observe for another century or two before taking drastic action. Too many real human lives are at stake.

Dr. Lurtz
November 5, 2011 8:02 am

Mathematics uses a known “equation” to produce a chaotic output. By knowing the “equation” and the input points, the output can be predicted — even though the system output is chaotic.
I am tired of hearing about how the weather/climate is chaotic. We just don’t know enough of the underlying “equation” yet. The problem is spending billions or trillions of dollars based on only part of the “equation”. Until the egos of scientists who have made a small discovery are brought under control, wild projections can result.
Especially in an unstable “equation”, all input data are critical; therefore 20-year, 10-year, and 400-year data are all needed and all equally important.
The last time AGW effects were “controlled” was with Freon [R-12, R-22, etc.]. Shouldn’t that ozone hole have closed by now, or is it affected by the “just discovered” UV variability from the Sun?
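
The point about a fully known “equation” producing chaotic output can be shown in a few lines. A minimal sketch (values chosen purely for illustration): the logistic map is completely deterministic, yet a starting error of one part in a trillion takes over the output within a few dozen iterations.

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4 is a fully known,
# fully deterministic "equation" that still produces chaotic output: an
# initial-condition error of one part in a trillion takes over within a few
# dozen iterations. Values here are chosen purely for illustration.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_exact, x_perturbed = 0.2, 0.2 + 1e-12
for n in range(1, 61):
    x_exact, x_perturbed = logistic(x_exact), logistic(x_perturbed)
    if n % 10 == 0:
        print(f"step {n:2d}: exact = {x_exact:.6f}   perturbed = {x_perturbed:.6f}   "
              f"difference = {abs(x_exact - x_perturbed):.2e}")
```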

Bill Illis
November 5, 2011 8:23 am

R. Gates says:
November 5, 2011 at 6:21 am
Bill Illis says:
“To be able to determine significance, we must determine how the climate can change randomly, how much it can change by chance.”
—–
This is muddled thinking at its worst. The climate does not change by chance.
——————-
Statistical significance, by definition, requires that something can change by chance; that is what the field is all about. Otherwise there is no significance to measure: if nothing can change by chance, then every change, no matter what it is, is significant because it is deterministic, and any cooling trend is therefore significant.
My point is that no one really knows how to measure statistical significance for the climate. We need to be able to measure the probability of change by chance. We can’t. I would say, however, that it is a non-zero quantity, unlike your claim.
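
One hedged way to make this concrete is to simulate “change by chance” explicitly. The sketch below (assumed AR(1) noise parameters, annual data, nothing taken from any observational record) generates thousands of ten-year series containing no forcing at all and asks how large a decadal trend pure noise can produce; any observed trend has to be judged against that spread.

```python
# A hedged sketch of how one could quantify "change by chance": simulate many
# ten-year series of pure AR(1) noise (no forcing of any kind, parameters assumed)
# and look at the spread of the decadal trends that chance alone produces.
import numpy as np

rng = np.random.default_rng(42)
n_years, phi, sigma = 10, 0.7, 0.1      # annual data, lag-1 autocorrelation, noise s.d.
trends = []

for _ in range(10000):
    noise = np.zeros(n_years)
    for t in range(1, n_years):
        noise[t] = phi * noise[t - 1] + rng.normal(0, sigma)
    slope_per_year = np.polyfit(np.arange(n_years), noise, 1)[0]
    trends.append(slope_per_year * 10)  # express as degrees per decade

trends = np.array(trends)
print(f"95% of purely random 10-year trends lie within +/- "
      f"{np.percentile(np.abs(trends), 95):.2f} C/decade")
```

How wide that spread comes out depends entirely on the assumed noise model, which is exactly the part nobody agrees on.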

November 5, 2011 8:30 am

Volker Doormann says:
November 5, 2011 at 7:14 am
Simple summation of solar tide functions from couples of six slow moving planets suggests that the main global temperature effect on Earth is controlled by the solar system:
http://volker-doormann.org/images/hadc_ghi6_1850.gif

Extend the blue curve back to 1600…

Mark M
November 5, 2011 8:38 am

malagaview says: November 5, 2011 at 3:08 am Why is 20 years statistically significant when 10 years is not?
“More importantly: Is (Tmax + Tmin) / 2 meaningful or statistically valid?”
http://bishophill.squarespace.com/blog/2011/11/4/australian-temperatures.html
Thanks for the reference. The last time I tried to use range/2 of a data set to estimate an average response, a process engineer and a statistician asked me a couple of questions: what was the variation (variance and SD) in the data for the time period in question, and was the data normally distributed over that period? Using what are essentially the +6 sigma and -6 sigma data points can be very problematic when trying to get an accurate estimate of X-bar. Their next question had to do with confidence intervals on my point estimate, which I could not give them without an estimate of sigma.
Thanks once again for the reference to “Jonathan Lowe, an Australian statistician, has performed extensive analysis of weather data recorded at fixed times by Australia’s Bureau of Meteorology (BoM). This analysis is available at his blog, A Gust of Hot Air. The data comes from 21 weather stations manned by professional meteorologists.”
http://gustofhotair.blogspot.com/
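
As a small illustration of the point about (Tmax + Tmin) / 2 (the diurnal shape below is invented, not station data): when the daily temperature curve is asymmetric, the midrange is a biased estimator of the true daily mean, and it carries no information about the variance needed for a confidence interval.

```python
# A small illustration (the diurnal shape below is invented, not station data):
# when the daily temperature curve is asymmetric, (Tmax + Tmin) / 2 is a biased
# estimate of the true daily mean, and it carries no information about variance.
import numpy as np

hours = np.arange(24)
# Asymmetric diurnal cycle: slow overnight cooling plus a sharp afternoon peak.
temps = 15 + 8 * np.exp(-((hours - 15) ** 2) / 18) - 3 * np.cos(2 * np.pi * hours / 24)

true_mean = temps.mean()
midrange = (temps.max() + temps.min()) / 2
print(f"hourly mean = {true_mean:.2f} C   (Tmax+Tmin)/2 = {midrange:.2f} C   "
      f"bias = {midrange - true_mean:+.2f} C")
```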

James Sexton
November 5, 2011 8:38 am

For those trying to defend the idea that 10 or 11 years doesn’t say much, a bit of a history lesson would be in order. As many have pointed out, it wasn’t until the mid-to-late 70s that the cooling notion died out and we moved to this warming notion. The notion of CAGW was in full swing by the time Hansen played with Congress’ thermostat. How many years of data did the warmistas have to get the warming ball rolling? About 10 to 11 years.
Don’t play their time frame game. 10 years is relevant to what is occurring. 30 years will tell you what occurred. Both time frames are arbitrary, and both hold equal value; they simply examine different things. Will ten years tell us much when discussing solar and ocean cycles? Probably not, though you can see La Nina/El Nino occurrences. Recall, atmospheric CO2 has increased by about 30 ppm since 1998. I think that is extremely relevant when discussing the hypothesis that CO2 drives our climate, especially considering 280 ppm as some sort of baseline.

Leonard Weinstein
November 5, 2011 8:50 am

DAV says:
November 5, 2011 at 6:56 am
Dr. Lurtz says:
November 5, 2011 at 8:02 am
Both of you do not understand the concepts in chaos theory. I suggest you read a good book on it. For the climate, we know the Navier-Stokes fluid dynamics equations and the radiation equations very precisely. On top of those there are many other terms controlling climate, such as cloud variation, solar effects, aerosols, slowly varying ocean currents, etc. Even if we knew all relevant equations of all the physics, and measured initial conditions from a starting time and at all locations to dozens of decimal places, the basic theory of chaos shows that multiple nonlinear equations of this kind will diverge from any level of calculation accuracy over time, and for weather and climate this time is very short. There is no solution possible that will hold for long times. This is characterized by the expression “the butterfly effect”. Until you realize the nonsense you have stated, you are clearly uninformed on this issue.

November 5, 2011 9:00 am

If somewhat arbitrary claims are made that a rigid minimum of 17 years of data is an absolute requirement to show statistical significance, and someone else then observes that the last 11 to 13 years of that 17-year span show a different behavior (plateau-like, then declining) from the up-sloping first 4 to 6 years, then all bets are off on what will happen in the next 4 to 6 years, except that several significant natural (non-anthropogenic) physical mechanisms indicate that at least the next decade will have persistent cooling.
Nature is instructing us that a posited alarming warming from fossil-fuel CO2 is not very informative.
John
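
A back-of-the-envelope sketch of why thresholds like 17 or 20 years get quoted at all (the noise level and autocorrelation below are assumed values, not taken from any paper): the standard error of an ordinary least-squares trend shrinks rapidly with record length, so a 10-year trend is consistent with almost anything while a 20- or 30-year trend is far better constrained.

```python
# Approximate standard error of an annual-data OLS trend with AR(1) noise
# (assumed sigma and phi). The s.e. falls roughly as 1 / L^1.5 with record length L.
import numpy as np

def trend_std_error(years, sigma=0.1, phi=0.6):
    """Approximate standard error (C/decade) of an annual-data OLS trend with AR(1) noise."""
    n = years
    n_eff = n * (1 - phi) / (1 + phi)          # effective sample size
    t = np.arange(n)
    s_tt = np.sum((t - t.mean()) ** 2)
    se_per_year = (sigma / np.sqrt(s_tt)) * np.sqrt(n / max(n_eff, 2.0))
    return se_per_year * 10                    # convert to per decade

for length in (10, 17, 20, 30):
    print(f"{length:2d} years: approximate trend standard error = "
          f"{trend_std_error(length):.3f} C/decade")
```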

ferd berple
November 5, 2011 9:14 am

commieBob says:
November 5, 2011 at 7:26 am
You can make the argument that nothing happens by chance. For instance it is, in theory, possible to predict what numbers will come up when you throw the dice. All you need is a bunch of data and a good computer.
In which case the future is written and there is no such thing as free will. All our future choices are already determined by our current state. In which case the future is rather pointless.
Quantum mechanics suggests otherwise: the future is not deterministic; determinism is an illusion of scale. The future is not a place we are traveling to. The future is a probability.
For example: When you say you are going on vacation next week that is not entirely accurate. You are planning to go on vacation, and are taking steps to increase the probability that you will in fact go on vacation, but events are also in motion which could prevent you from going. Until the vacation actually arrives, you don’t know for sure that you are in fact going to go on vacation.
We like to believe that if we knew all the events going on in the world, we could determine in advance whether we were actually going to go on the vacation. Our current understanding of the physical world is that, at a very fundamental level, you cannot calculate this in advance, even with a computer that has not yet been built.
There was a time, many years ago, when scientists knew this to be true. However, over time that knowledge has been lost. People, at a very fundamental level, were not comfortable with the concept that the future is uncertain.
The Precautionary Principle was invented to replace the Uncertainty Principle (Heisenberg was German, after all). Computer programs were developed to predict decades in advance, and thus the future was secured for all. Scientists now are certain that all they need to make it all work is bigger computers.

JudyW
November 5, 2011 9:17 am

Ten to 20 years of time is not enough. The earth’s magnetic fields are in a constant state of change. The strength, movement, integration or disintegration of the fields seem rarely to be part of the climate argument, but they are probably significant drivers of climate. Along with solar influences, the location and strength of anomalies like the South Atlantic Anomaly contribute to ocean oscillations.
http://www.science27.com/Earth/index.htm
#TEN in the article at the above link shows a graph of magnetic strength compared to temperatures over 100,000 years. I would have liked the peak of the last interglacial to be included in the time span, since temperatures then roughly correspond to temperatures now.

Dave Springer
November 5, 2011 9:17 am

James, James, James…
Correlation is not causation.
The classic cautionary tale is to plot shoe size against income. Very high positive correlation: the bigger the shoe, the larger the income.
The CO2 correlation has the distinct advantage of a proven modus operandi. The sunspot correlation does not. Even a fifth grader should know that, so if I were you I’d be more careful about who I was calling not smarter than a fifth grader; people who live in glass houses shouldn’t throw stones.
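
The caution about correlation can be demonstrated directly. A minimal sketch with entirely synthetic data: two independent random walks, with no causal connection at all, frequently show correlations that would look impressive if you did not know how they were generated.

```python
# Entirely synthetic demonstration of the warning above: two independent random
# walks, with no causal connection at all, frequently show correlations that
# would look impressive if you did not know how they were generated.
import numpy as np

rng = np.random.default_rng(7)
n, trials, strong = 500, 1000, 0

for _ in range(trials):
    a = np.cumsum(rng.normal(size=n))   # stand-in for a "shoe size index"
    b = np.cumsum(rng.normal(size=n))   # stand-in for an "income index"
    if abs(np.corrcoef(a, b)[0, 1]) > 0.5:
        strong += 1

print(f"unrelated random-walk pairs with |r| > 0.5: {strong / trials:.0%}")
```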

ferd berple
November 5, 2011 9:23 am

R. Gates says:
November 5, 2011 at 6:21 am
This is muddled thinking at its worst. The climate does not change by chance.
God plays dice with the climate.

R. Gates
November 5, 2011 9:23 am

Bill Illis says:
November 5, 2011 at 8:23 am
R. Gates says:
November 5, 2011 at 6:21 am
Bill Illis says:
“To be able to determine significance, we must determine how the climate can change randomly, how much it can change by chance.”
—–
This is muddled thinking at its worst. The climate does not change by chance.
——————-
Statistical significance, by definition, requires that something can change by chance; that is what the field is all about. Otherwise there is no significance to measure: if nothing can change by chance, then every change, no matter what it is, is significant because it is deterministic, and any cooling trend is therefore significant.
———-
We are talking about separating signal from noise when talking about long-term climate changes versus shorter-term ones, thus the significance of any change must be measured against the appropriate scale of time. It is finding that appropriate scale for the anthropogenic signal that is really the core issue, and that’s what this excellent bit of research was about:
http://www.agu.org/pubs/crossref/pip/2011JD016263.shtml
If I measure the temperature at 3 a.m. and then again at 3 p.m. and note the 30-degree difference, it would be inappropriate to conclude anything about Milankovitch cycles from that difference, even though of course the difference is statistically significant and certainly not by chance. So too, if I take the temperature at the height of a big El Nino, and then again 10 years later at the bottom of a series of La Ninas, and try to draw any conclusion from that about anthropogenic forcing from CO2, that would be an inappropriate application of scale. The research referenced above makes a very strong case as to why.
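
The point about start and end points is easy to illustrate with made-up numbers (nothing below comes from any temperature dataset): add a single El-Nino-like spike to a steadily warming synthetic series, and the ten-year trend computed from the spike year onward looks nothing like the ten-year trend over a spike-free decade.

```python
# A sketch of the scale/endpoint point with made-up numbers (nothing below comes
# from any temperature dataset): a single El-Nino-like spike at the start of a
# 10-year window changes that window's trend dramatically.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2012)
background = 0.015 * (years - years[0])          # steady 0.15 C/decade warming
spike = np.where(years == 1998, 0.3, 0.0)        # a one-year El-Nino-like spike
series = background + spike + rng.normal(0, 0.03, len(years))

def ten_year_trend(start_year):
    mask = (years >= start_year) & (years < start_year + 10)
    return np.polyfit(years[mask], series[mask], 1)[0] * 10   # C per decade

print(f"trend starting at the 1998 spike      : {ten_year_trend(1998):+.2f} C/decade")
print(f"trend over a spike-free decade (1985+): {ten_year_trend(1985):+.2f} C/decade")
```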

ferd berple
November 5, 2011 9:24 am

Dave Springer says:
November 5, 2011 at 9:17 am
The classic cautionary tale is to plot shoe size against income. Very high positive correlation – the bigger the shoe the larger the income.
I’ve got to get myself some bigger shoes!

Jason Calley
November 5, 2011 9:27 am

DAV says: “A fine article though I’d like to point out there is no such thing as “fundamental chaos” in nature. Choas Theory is a mathematical concept. Just because something appears chaotic doesn’t mean it is. One should never confuse mathematical models with reality. Chaos and Randomness are merely expressions of lack of knowledge. There is no theoretical reason that the underlying processes will not be discovered. Otherwise, the basic mission of Science is futile. If von Neumann made an error it was in underestimating the complexity of the problem possibly by several orders of magnitude.”
Let me expand slightly on what Cherry Pick has said. The chaos theory of the 1970s is a fundamentally new way of analyzing much of what we see in nature. I would respectfully disagree that what we see in nature is not chaos. The fundamental properties needed for chaotic behavior are non-linear behavior and reiteration, i.e., each succeeding moment is a function of what the immediately prior moment was. You are correct that chaos and randomness are often conflated, especially in common speech, but the two are actually very different. We know that physics has been especially successful over the years in reducing complex phenomena to the interaction of simple forces. Chaos theory does the same and has found that much of what happens in the world appears to be random but is actually deterministic. True randomness cannot be predicted, not even in theory. Chaos, on the other hand, even though it appears to be random, can be predicted: it is deterministic, a system responding step by minute step to predictable natural rules. So what is the big deal, what makes chaos any different from normal, everyday physics problems? Chaos can be predicted, but there is a catch. Actually, two catches, and they are both very big. Chaotic systems can be predicted only if we have infinitely exact knowledge of all the beginning physical states, and only with a computer or calculating system at least as complicated as, if not more complicated than, the system being analyzed. Von Neumann’s mistake is completely understandable. Up until the 1970s it was believed that all phenomena, such as weather, were inherently simplifiable; there would be, so it was thought, a method of reducing large systems into smaller, more easily analyzed systems. Chaos theory explains that this is not so.
So where does that leave us? Obviously, not all phenomena are chaotic, otherwise physics and science in general would never have gotten off the ground. Even chaotic systems have regularities which can, in fact, be predicted, but we now know that all such predictions have specific limits. Chaotic systems tend to make “quantum jumps” from one state to another, and each specific state, if you look more closely, tends to have finer substates within it. Certainly, if we look at long-time-scale charts of Earth’s climate or temperature, we see large jumps, mostly bounded by the thermal properties and phase-change states of water. More recently, in the 20th century, we see small step changes in temperature (like the one in the late 90s). Instead of being able to predict what climate, or weather, we will have, we can at best say, “It is most likely to fall within a certain range, with these regions being most likely.” And just like a rock bouncing down a mountain slope, with each state shift the results become more and more unpredictable. Using the world’s most powerful computers, we can give a rough prediction of the weather only a week or so out before it becomes too error-laden to be useful. As the article points out, the addition of more data, that is, oceanic and solar states, may allow us to make better (though still not arbitrarily good) predictions of longer-term climate.
Chaos is not randomness, so at least we have the possibility of approaching useful prediction, but we are very limited in how close we will ever get to what von Neumann expected.

November 5, 2011 9:44 am

Leif Svalgaard says:
November 5, 2011 at 8:30 am
Volker Doormann says:
November 5, 2011 at 7:14 am
“Simple summation of solar tide functions from couples of six slow moving planets suggests that the main global temperature effect on Earth is controlled by the solar system:
http://volker-doormann.org/images/hadc_ghi6_1850.gif
Extend the blue curve back to 1600…

No problem, Sir.
http://volker-doormann.org/images/hadc_ghi6_1000.gif
Back to 1000 AD + high frequency temperature reconstruction from A.Moberg et al.
http://volker-doormann.org/images/hadc_ghi6_500.gif
Back to 500 AD + high frequency temperature reconstruction from A.Moberg et al.
Done.
V.

Dave Springer
November 5, 2011 9:49 am

ferd berple says:
November 5, 2011 at 9:14 am

In which case the future is written and there is no such thing as free will. All our future choices are already determined by our current state. In which case the future is rather pointless. Quantum mechanics suggests otherwise: the future is not deterministic; determinism is an illusion of scale. The future is not a place we are traveling to. The future is a probability.
For example: When you say you are going on vacation next week that is not entirely accurate. You are planning to go on vacation, and are taking steps to increase the probability that you will in fact go on vacation, but events are also in motion which could prevent you from going. Until the vacation actually arrives, you don’t know for sure that you are in fact going to go on vacation.
We like to believe that if we knew all the events going on in the world, we could determine in advance whether we were actually going to go on the vacation. Our current understanding of the physical world is that, at a very fundamental level, you cannot calculate this in advance, even with a computer that has not yet been built.

You had no real choice in writing the above, of course, nor in your belief that you possess free will. And I of course had no real choice but to make exactly this response.
Seriously, the jury is still out on determinism vs. nondeterminism. There is no consensus even among quantum physicists. Even Stephen Hawking famously lost a wager to Leonard Susskind over this. Hawking argued that information could be destroyed, or at least hidden forever inside a black hole, because once something fell in, the only way out was via particles quantum tunneling through the event horizon (Hawking radiation). He argued that the tunneling was predictable only statistically, and so information about the particle before it entered the black hole was lost when it tunneled back out. This actually violates a fundamental tenet of quantum mechanics, conservation of information, and Susskind argued that you can’t get around that rule even with a black hole. The mathematical details are beyond my ken, but after ten years, and every theoretical physicist in the world throwing in his two cents, Hawking conceded, and conservation of information, and thus the absolute determinism which follows from it, is still safely ensconced.

RockyRoad
November 5, 2011 9:58 am

R. Gates says:
November 5, 2011 at 5:58 am

Now, for those who want to know what the research says about why a period greater than 17 years is required to see the anthropogenic signal through short-term fluctuations, see:
http://www.agu.org/pubs/crossref/pip/2011JD016263.shtml
The only quibble with this paper I might have is that it really didn’t consider longer-term solar cycle fluctuations such as we might see during a Dalton or Maunder minimum, which would push that period even longer of course. But, as a general rule, greater than 17 years is the minimum to see the anthropogenic signal, and so anyone pointing to a shorter period as proof the anthropogenic signal is not there simply is misguided or intentionally trying to deceive.

Wait a minute… are you being honest? Admittedly it takes 17 years for their models to see signal/noise ratios sufficient to detect forcing (their term), but these were all based on computer models (simulations) built on somebody’s CAGW-determined algorithms. Want me to quote it from the summary of their paper?
Here it is:

“…a multi-model ensemble of anthropogenically-forced simulations…”

You’ve moved into accusatory mode by saying anybody pointing to a shorter time period “as proof the anthropogenic signal is not there simply is misguided or intentionally trying to deceive”, and yet those you refer to are using models.
Again.
Is it their purpose to use referential conclusions to eventually justify an anthropogenic signal just because their models show what signal/noise ratio it takes to find one?
Probably. That’s how they do things there in climsci. They didn’t say they found it. They just indicated how long the time period would have to be to find it, if you choose to believe in their “models”. So is it you who (turnabout is fair play) “is misguided or intentionally trying to deceive”?
I think the latter, but you make the call because you’re privy to that information.