Study by Mann admits 'the pause' in global warming was not predictable, but lets models off the hook

From AGU EOS — Terri Cook, Freelance Writer

[Figure: estimated pattern of multidecadal internal variability in sea surface temperatures]

Was the Recent Slowdown in Surface Warming Predictable?

The temporary deceleration in warming across the Northern Hemisphere earlier this century could not have been foreseen by statistical forecasting methods, a new study concludes.

SOURCE: Geophysical Research Letters

From the early 2000s to the early 2010s, there was a temporary slowdown in the large-scale warming of Earth’s surface. Recent studies have ascribed this slowing to both internal sources of climatic variability—such as cool La Niña conditions and stronger trade winds in the Pacific—and external influences, including the cooling effects of volcanic and human-made particulates in the atmosphere.

Several studies have suggested that climate models could have predicted this slowdown and the subsequent recovery several years ahead of time—implying that the models can accurately account for mechanisms that regulate decadal and interdecadal variability in the planet’s temperature. To test this hypothesis, Mann et al. combined estimates of the Northern Hemisphere’s internal climate variability with hindcasting, a statistical method that uses data from past events to compare modeling projections with the already observed outcomes.
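
The hindcasting pattern is simple to sketch. Below is a minimal Python illustration, using synthetic data and a plain linear-trend forecaster; it is not the statistical machinery of Mann et al., just the train-on-the-past, test-against-what-was-observed idea described above:

```python
# Minimal sketch of statistical hindcasting (not the paper's actual method):
# fit a simple forecast model on data up to a cutoff year, then compare its
# "prediction" with what was actually observed afterwards.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2015)
# Synthetic stand-in for an observed temperature anomaly series (K)
obs = (0.015 * (years - 1950)
       + 0.1 * np.sin(2 * np.pi * (years - 1950) / 60)
       + rng.normal(0, 0.08, years.size))

cutoff = 2000
train = years <= cutoff
slope, intercept = np.polyfit(years[train], obs[train], 1)  # fit on the past only
hindcast = slope * years[~train] + intercept                # "forecast" the known period

rmse = np.sqrt(np.mean((hindcast - obs[~train]) ** 2))
print(f"Hindcast RMSE over {cutoff + 1}-{years[-1]}: {rmse:.3f} K")
```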

The team’s analyses indicate that statistical methods could not have forecast the recent deceleration in surface warming because they can’t accurately predict the internal variability in the North Pacific Ocean, which played a crucial role in the slowdown. In contrast, a multidecadal signal in the North Atlantic does appear to have been predictable. According to their results, however, its much smaller signal means it will have little influence on Northern Hemisphere temperatures over the next 1 to 2 decades.

This minor signal in the North Atlantic is consistent with previous studies that have identified a regional 50- to 70-year oscillation, which played a more important role in controlling Northern Hemisphere temperatures in the middle of the 20th century than it has so far this century. Should this oscillation reassume a dominant role in the future, argue the researchers, it will likely increase the predictability of large-scale changes in Earth’s surface temperatures.


Paper:

Predictability of the recent slowdown and subsequent recovery of large-scale surface warming using statistical methods

Authors

Michael E. Mann, Byron A. Steinman, Sonya K. Miller, Leela M. Frankcombe, Matthew H. England, Anson H. Cheung

(Geophysical Research Letters, doi:10.1002/2016GL068159, 2016)

Abstract

The temporary slowdown in large-scale surface warming during the early 2000s has been attributed to both external and internal sources of climate variability. Using semiempirical estimates of the internal low-frequency variability component in Northern Hemisphere, Atlantic, and Pacific surface temperatures in concert with statistical hindcast experiments, we investigate whether the slowdown and its recent recovery were predictable. We conclude that the internal variability of the North Pacific, which played a critical role in the slowdown, does not appear to have been predictable using statistical forecast methods. An additional minor contribution from the North Atlantic, by contrast, appears to exhibit some predictability. While our analyses focus on combining semiempirical estimates of internal climatic variability with statistical hindcast experiments, possible implications for initialized model predictions are also discussed.

 

Bye Doom
May 11, 2016 3:49 pm

Yet they’re sure it will be at least three degrees warmer in 2100 than now.
What will Mann say when the Plateau resumes with the coming La Nina, or turns into a valley rather than resuming foothill status?

Reply to  Bye Doom
May 11, 2016 11:12 pm

Mann needn’t worry about a La Nina cooling. The Church Priests will Karlize the temperature data to report what they need.

Jay Hope
Reply to  Bye Doom
May 12, 2016 12:50 am

I thought Mr Mann and his ilk had actually gone to some lengths to deny that the ‘pause’ ever happened at all?? Now he’s saying it did?

Reply to  Jay Hope
May 12, 2016 6:22 am

You notice he called it a temporary slowdown… Lol

george e. smith
Reply to  Jay Hope
May 12, 2016 2:12 pm

“””””….. The temporary deceleration in warming across the Northern Hemisphere earlier this century could not have been foreseen by statistical forecasting methods, a new study concludes. …..”””””
Well I hate to break the news to them, but absolutely nothing ” can be foreseen by statistical forecasting methods. ”
Statistics is always an analysis of stuff you already know because it has already happened. So you can’t even do any analysis, until you know what happened.
But then you still know absolutely nothing about anything that hasn’t happened yet, and you never will, until it has happened.
You can take a wild ass guess on what you THINK might happen.
But your guess is no better than mine on that. Neither of us knows what is going to happen next.
Nothing ever happens again, so the fact that something did happen, is no basis for assuming it will happen again, because it won’t; something else will.
G

Reply to  Jay Hope
May 12, 2016 6:33 pm

george e. smith: Well I hate to break the news to them, but absolutely nothing ” can be foreseen by statistical forecasting methods. ”
I’m guessing that you don’t use Garmin or a modern cell phone to help you navigate in a place you have never been before. Are you guessing when Halley’s comet will return, or when the next lunar eclipse will occur?
Put more directly, statistical modeling and forecasting methods (e.g. Kalman filters, weighted nonlinear least squares regression) are used routinely to estimate the positions and trajectories of the GPS satellites.
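
For the curious, here is a toy one-dimensional Kalman filter of the general kind mentioned above; all values are illustrative, and nothing here is real GPS firmware:

```python
# Toy 1-D Kalman filter (constant-velocity model) of the general kind the
# comment alludes to; the noise values are illustrative, not GPS parameters.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

x = np.array([0.0, 0.0])                 # initial state estimate
P = np.eye(2)                            # initial estimate covariance

rng = np.random.default_rng(1)
true_pos = np.cumsum(np.ones(20))        # object moving at 1 unit per step
for z in true_pos + rng.normal(0, 0.7, 20):
    x = F @ x                            # predict: project the state forward
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)  # update: blend in the noisy measurement
    P = (np.eye(2) - K @ H) @ P

print(f"final estimate: pos={x[0]:.2f}, vel={x[1]:.2f}")
```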

ironicman
Reply to  Bye Doom
May 12, 2016 1:49 am

It’s more like a plateau and escarpment; the Klimatariat will have nowhere to run.

Peter Miller
May 11, 2016 3:59 pm

So, the bottom line here is that Mann and his models cannot make any worthwhile predictions.
We all already knew that; pity it has taken him so long.

Reply to  Peter Miller
May 12, 2016 6:35 pm

But he got a bunch more grant $$$.

May 11, 2016 4:04 pm

If the accuracy of climate models were graphed in numerical form as temperatures are, without alteration they would have a negative anomaly approaching 100.

David A
Reply to  Pat Ch
May 12, 2016 2:54 am

The models do not know if we are rapidly entering an ice age, or entering a global sauna…

Reply to  David A
May 12, 2016 5:22 pm

Geologists have identified the evidence of (at least) 17 previous Ice Ages separated by brief Inter-Glacials. There may be more Ice / warm cycles before that, but the evidence kept getting chewed up by the geological processes at work.
A core tenet of Geology is that the processes we see at work today were at work in the past. Given the cyclical nature of Ice Ages and Inter-Glacials, I’m betting another 170K-year Ice Age will soon be upon us, and the petty musings of Mann and his superstitious fear-mongers will be proven very wrong.
The Greenies were lucky that after their failed Ice Age prophecies there was an uptick from the rebound out of the Little Ice Age. They rode that pony, but it has died. We are now entering at least a 35 year period of lower temperatures, which Mann and his disciples will shake their Ju-Ju Beans at and make all sorts of pronouncements, as they try in vain to keep milking the cash cow that will progressively dry up more each year.
I predict all sorts of drastic actions by the Alarmists, including violence, but if we Skeptics can just hold on (and stay alive long enough to pass our belief in The Scientific Method on to the next generation) we will finally be vindicated.

Michael Jankowski
May 11, 2016 4:19 pm

Reference is incorrect. It should read:
Paper:
Predictability of even more excuses for the recent slowdown and continuing failure of climate models and consensus using statistical bullshit.
Authors
Michael E. Mann, Byron A. Steinman, Sonya K. Miller, Leela M. Frankcombe, Matthew H. England, Anson H. Cheung

JohnR
Reply to  Michael Jankowski
May 11, 2016 6:43 pm

Spot on Michael – LOL

charles nelson
May 11, 2016 4:20 pm

And only a couple of years back many of them were still refusing to discuss ‘the pause’…I guess they had to wait for the El Nino spike to save face.

Julian Williams in Wales
May 11, 2016 4:26 pm

Strange bedfellows Mann and Statistics!

george e. smith
Reply to  Julian Williams in Wales
May 12, 2016 2:16 pm

We don’t need no stinkin’ statistics !
One Charlie Brown Christmas Tree in Yamal, is all you need to tell you the climate anywhere, at any time.
g

ScienceABC123
May 11, 2016 4:28 pm

“statistical methods could not have forecast the recent deceleration in surface warming because they can’t accurately predict the internal variability in the North Pacific Ocean”
Other than the fact that the models don’t work, they’re wonderful at supporting some people’s belief that man is the cause of global climate change.

JohnWho
Reply to  ScienceABC123
May 11, 2016 4:47 pm

OK, so they couldn’t predict the recent deceleration in surface warming but they can predict, with what, 95% confidence, that the surface will warm in an “unprecedented” manner?
Yeah, I can’t see where anyone would have a problem with that.

Reply to  ScienceABC123
May 11, 2016 6:13 pm

Semi-empirical mann bends over backwards to retain the models –
oh-you-cargo-cult-pseudo-scientist – you!

Reply to  ScienceABC123
May 11, 2016 7:33 pm

statistical methods could not have forecast the recent deceleration in surface warming…” because statistics isn’t physics. Statistics cannot predict physical events.
The fact that Mann & Co. can even write that sentence is enough to demonstrate without doubt that none of them are scientists.

Mickey Reno
Reply to  Pat Frank
May 12, 2016 3:52 am

If I read this right, Mann is asking “if we’d tried to use our models to predict the current pause, could we have done so?” But they already did NOT predict this. So what he’s really asking is: can we tweak our models now to match what we now know really happened? And he finds the answer to be “no.” So he feels the need to equivocate, and makes up a story about how they could have predicted a small component of a cool-down in the North Atlantic, if they’d wanted to.
And I need to give up fossil fuels for this level of chicanery and prestidigitation?

paqyfelyc
Reply to  Pat Frank
May 12, 2016 5:02 am

you said it all
Which leaves us wondering how it could be published.

Reply to  Pat Frank
May 12, 2016 5:47 am

“because statistics isn’t physics”
The mechanism is physical – AMO and NAO cycles. Statistics are needed to identify the cycles, and to estimate the extent to which they can overcome natural variability, including possible variability in the oscillations themselves.
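
To make “statistics are needed to identify the cycles” concrete, here is a minimal periodogram sketch on a synthetic AMO-like index; real attribution work uses far more careful estimators, and the recovered period is limited by the record’s frequency resolution:

```python
# Minimal periodogram sketch: look for a multidecadal cycle in a synthetic
# AMO-like index. Real attribution work uses far more careful estimators.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1880, 2016)
# Synthetic stand-in for an AMO-like index: a ~65-year cycle plus noise
series = 0.2 * np.sin(2 * np.pi * years / 65.0) + rng.normal(0, 0.1, years.size)

# Remove the linear trend first, then examine the spectrum
detrended = series - np.polyval(np.polyfit(years, series, 1), years)
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)   # cycles per year

dominant = freqs[1:][np.argmax(power[1:])]   # skip the zero-frequency bin
print(f"dominant period: {1.0 / dominant:.0f} years")  # near 65, within resolution
```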

JohnKnight
Reply to  Pat Frank
May 12, 2016 1:11 pm

“Statistics are needed to identify the cycles, and to estimate the extent to which they can overcome natural variability, including possible variability in the oscillations themselves.”
But the science is settled, right, Nick? Our CO2 emissions will cause catastrophic global warming, despite the physics only indicating slight warming, and the rest resulting from hypothetical amplification mechanisms, assumed into the models . . which tell us with certainty that assuming those in causes the models to project catastrophic warming . . which will then become real . . unless we give our firstborn to the UN, right?

Eugene WR Gallun
Reply to  Pat Frank
May 12, 2016 2:18 pm

Mickey Reno — well said. — Eugene WR Gallun

rdcii
Reply to  Pat Frank
May 12, 2016 9:16 pm

Nick Stokes, at 5:47 AM… “The mechanism is physical – AMO and NAO cycles. Statistics are needed to identify the cycles, and to estimate the extent to which they can overcome natural variability, including possible variability in the oscillations themselves.”
So… AMO and NAO are not part of “natural variability”? Are they then… Super-natural?
Can you point us to a paper that defines which variable and unpredictable physical processes are “natural” vs. “supernatural”?
Sorry, but these ill-defined terms allow people to say some strange things…

Reply to  Pat Frank
May 14, 2016 2:45 pm

Nick, “Statistics are needed to identify the cycles, and to estimate the extent to which they can overcome natural variability, …
Not correct, Nick. Physics is needed to identify climate cycles. Statistics might be used to estimate time of appearance, if sufficient cycles are identified as physically iso-causal.
I’m glad you showed up here by the way, so I can let you know that your BOM resolution argument was wrong. And the self-congratulatory purring among some of the commenters at your site didn’t speak well for them, either.

AGW is not Science
Reply to  ScienceABC123
May 12, 2016 9:34 am

Astounding how they can’t “see” (natural) variability as a cause of anything BUT the failure of their GIGO “models” to “predict” ANYTHING. And also astounding that they actually believe that people seeing the phrase “internal variability” are gullible enough to believe it means anything less than “We have no idea what the $^#! is going on.”

May 11, 2016 4:31 pm

What does this mean?
“In contrast, a multidecadal signal in the North Atlantic does appear to have been predictable. According to their results, however, its much smaller signal means it will have little influence on Northern Hemisphere temperatures over the next 1 to 2 decades.”
He seems to be saying that they can detect a longer signal but it doesn’t have any significance? My head hurts now.

TonyL
Reply to  jinghis
May 11, 2016 6:04 pm

If I recall properly, Mann + Co. presented a paper where they used dendrochronology to reconstruct the NAO (North Atlantic Oscillation). That’s right, tree rings as a proxy for ocean currents. No doubt using the Global Climate Teleconnection Signal (I love that one). {The paper was featured here at WUWT} This current work seems to be based on that previous effort.
Your head feel better now?

Reply to  TonyL
May 11, 2016 6:35 pm

No, and now my stomach is bothering me as well.

Reply to  TonyL
May 11, 2016 11:17 pm

I just left a large Mann-uscript in the loo. Lit a candle to alleviate the remainders and turned on the fan.
Too bad lighting a candle won’t remove the Mann-stench in Climate Science.

george e. smith
Reply to  jinghis
May 12, 2016 2:29 pm

Well, it does appear to have been predictable, but the fact of the matter is that it WAS NOT PREDICTED.
So phooey, you Monday-morning quarterbacks. If you didn’t predict it, don’t tell us later that you could have.
So Cartoon !!
Andy Capp, somewhat sloshed, is staggering home on radar, in the late evening fog in London. He crashes into another lost drunk staggering along, and grabs a hold of him for stability. ” ‘ear; ‘ang on tuh me, mate; I know where every drainage ditch in London is ” ; ‘ Sssploosh !!! ‘; ” See, there’s one of ’em now ! ”
Yeah; predict, my A***.
G

Analitik
May 11, 2016 4:35 pm

So the science has been unsettled?

Kevin
May 11, 2016 4:35 pm

Hmm, the total ocean heat capacity is about 1000x greater than the atmosphere’s. The surface layer they are modeling and measuring is the first 1.5 m of an atmosphere that extends thousands of meters up. If approximately 1/1000 of the atmosphere’s mass is in the lower 1.5 m (it seems like a reasonable approximation, but it’s a difficult calculation, with pressure dropping as you go up), then they are measuring/modeling something that represents 1/1,000,000 of the earth’s total heat capacity, while completely ignoring the other 999,999/1,000,000 portion, which is constantly and actively transferring heat in and out of the surface layer. And they wonder why the models can’t predict the surface temperature.
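
Kevin’s fractions can be checked on the back of an envelope. A sketch assuming an isothermal atmosphere with a roughly 8.4 km scale height and the rough 1000:1 ocean-to-atmosphere heat capacity ratio he cites; on those assumptions the surface layer’s share comes out even smaller than his 1/1,000,000:

```python
# Back-of-envelope check of the fractions above, assuming an isothermal
# atmosphere with a ~8,400 m scale height (a textbook idealization) and a
# rough 1000:1 ocean-to-atmosphere heat capacity ratio.
import math

scale_height_m = 8_400.0
layer_m = 1.5
# Fraction of atmospheric mass below 1.5 m (barometric formula)
frac_atm = 1.0 - math.exp(-layer_m / scale_height_m)

ocean_to_atm = 1_000.0
# The 1.5 m layer's share of the combined ocean + atmosphere heat capacity
frac_total = frac_atm / (1.0 + ocean_to_atm)

print(f"mass fraction below 1.5 m:      {frac_atm:.1e}")    # ~1.8e-4
print(f"share of total heat capacity:   {frac_total:.1e}")  # ~1.8e-7
```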

lemiere jacques
Reply to  Kevin
May 11, 2016 10:07 pm

Maybe, but then comes the magic part…

AGW is not Science
Reply to  Kevin
May 12, 2016 9:45 am

Oh, but see you’re just providing us with a preview of the next round of excuse-mongering. They’ll just dust off that “heat is hiding in the oceans” (which couldn’t be heated by IR beyond the top few MICRONS of surface) canard, when they no longer have El Nino to produce “scary” thermometer numbers.

May 11, 2016 4:37 pm

But I thought Karl had abolished the “pause”.
Banished by waving engine-room readings over the data from specifically designed measuring devices mounted on buoys.
So how can the Mann be so off-message as to refer to a “temporary deceleration”?
Not a pause of course.
I wonder what he would call a full stop?
I guess when you have nothing left to support your belief except models, then you must cling to them.
Re-examining one’s beliefs is far too hard.
I almost feel sorry for the mann, way in over his head and first to be thrown to the angry mob as the mass hysteria dies.

Reply to  John Robertson
May 11, 2016 4:54 pm

But I thought Karl had abolished the “pause”.
Wah, you beat me to it.
Karl says the pause never existed, Mann says it did and explains why the models missed it.
This “consensus” seems rather fractious, does it not?

Greg Cavanagh
Reply to  John Robertson
May 11, 2016 4:58 pm

Good thing Mann isn’t a police officer, “Sir, you failed to STOP at that STOP sign. Now I’m going to have to charge you”.
Car driver “But the car hasn’t moved in 19 years, how much stopped do you want?”

MarkW
Reply to  John Robertson
May 12, 2016 6:34 am

Mann never acknowledges the pause either.
He calls it a slowdown in warming.

AGW is not Science
Reply to  MarkW
May 12, 2016 9:49 am

A variation of the old “The long term trend is still (when I start my “trend” in the LIA) up!” canard.

Marcus
May 11, 2016 4:55 pm

Wow, the threat of a president Trump has really shaken things up !!

TA
Reply to  Marcus
May 11, 2016 6:44 pm

Marcus, I like your take on the situation. With this study, these Climate Change Charlatans may very well be positioning themselves for Trump’s possible election.
I don’t think anything they do will save them from intense Trump scrutiny, though.
If Trump gets elected, the whole CAGW three-ring-circus is going to be put under a microscope.
I can’t wait! 🙂

Reply to  TA
May 12, 2016 12:13 am

Here in Britain, we can’t wait either. Trump is going to stuff it to so many people – here in Britain too.

AGW is not Science
Reply to  TA
May 12, 2016 9:50 am

For a moment I read that to say “TREE ring circus,” which would have given you the “Pun of the Climate Cycle” Award. 😀

TA
May 11, 2016 5:05 pm

From the article: “From the early 2000s to the early 2010s, there was a temporary slowdown in the large-scale warming of Earth’s surface.”
Why end it in 2010? The years subsequent to 2010 are also included in the “flatline” of the 21st century, since they are all within the margin of error.

Gary Kerkin
Reply to  TA
May 11, 2016 5:14 pm

Because once the period exceeds 10 years it is approaching a substantial fraction of the “standard” climatological period of 30 years. As it is, the “pause” is now more than half of such a period. Do you think they will ever wish to use 1991-2020 (or 2001-2030) as a reference period?

Nick Stokes
Reply to  TA
May 11, 2016 7:19 pm

“Why end it in 2010?”
They didn’t. Your quote says “early 2010s”, not 2010. In fact they took data to 2014, but note that the slowdown effect diminishes if you go to the end of that period:
“Though less pronounced if data through 2014 are used [Cowtan et al., 2015; Karl et al., 2015], the slowdown in Northern Hemisphere (NH) mean warming is evident through at least the early part of this decade.”

TA
Reply to  Nick Stokes
May 11, 2016 8:18 pm

Thanks for that clarification, Nick.

Reply to  Nick Stokes
May 11, 2016 8:33 pm

Gee, Nick! The Pacific Ocean and its Blob had nothing to do with 2014 temperatures, did they? Ram and jam through 2015 and 2016 Pacific Ocean ENSO temperatures and Cowtan and Karl will really have something to show the rubes. No big pressers projected after 2017? Get a grip.

Nick Stokes
Reply to  Nick Stokes
May 11, 2016 10:02 pm

“The Pacific Ocean and its Blob had nothing to do with 2014 temperatures”
They were talking about an observed slowdown. By 2014 they weren’t observing a slowdown any more. No use saying – well there really was an observed slowdown, except for the Blob, and ENSO etc. On that basis, you could project the slowdown for ever.

May 11, 2016 5:06 pm

Mann is off-message. I thought they did away with the pause, just like Mann did away with the Medieval warm. Very convenient things, models and proxies. About as good as oneiromancy, but convenient.

Paul Westhaver
Reply to  Tom Halla
May 11, 2016 5:26 pm

Tom, he is trying to buy street cred. He’ll jump on the pause bandwagon long enough to be quoted and therefore relevant. …shallow has-been.

MangoChutney
Reply to  Paul Westhaver
May 11, 2016 10:50 pm

Shallow never was

William R
May 11, 2016 5:13 pm

Since when do the warmists use statistical models anyway? I got the impression all their supercomputer-powered models were deterministic…wholly dependent on input variables, assumptions, and initial conditions. Statistical models wouldn’t have really predicted a specific outcome either, only assigned a probability estimate. Bottom line, this warmist propaganda piece is fail on top of fail inside a fail.

commieBob
Reply to  William R
May 11, 2016 6:06 pm

Exactly.
If you can’t do a deterministic model, that means you don’t understand the process. That, in turn, means your predictions of future behaviour should not be believed.

PiperPaul
Reply to  commieBob
May 11, 2016 6:23 pm

“If you put tomfoolery into a computer, nothing comes out of it but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow ennobled and no-one dares criticize it.”
– Pierre Gallois

Reply to  commieBob
May 12, 2016 2:18 pm

Deterministic models don’t work for just about anything, especially on computers. See Edward Norton Lorenz’s paper Deterministic Nonperiodic Flow for the full explanation; the short answer:

An early pioneer of the theory was Edward Lorenz whose interest in chaos came about accidentally through his work on weather prediction in 1961.[7] Lorenz was using a simple digital computer, a Royal McBee LGP-30, to run his weather simulation. He wanted to see a sequence of data again and to save time he started the simulation in the middle of its course. He was able to do this by entering a printout of the data corresponding to conditions in the middle of his simulation which he had calculated last time. To his surprise the weather that the machine began to predict was completely different from the weather calculated before. Lorenz tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 was printed as 0.506. This difference is tiny and the consensus at the time would have been that it should have had practically no effect. However, Lorenz had discovered that small changes in initial conditions produced large changes in the long-term outcome.[39] Lorenz’s discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modelling cannot, in general, make precise long-term weather predictions. Chaos theory

Also note that weather is an instance of climate, and if weather is chaotic, then self-similarity forces climate to be chaotic as well.
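
The rounding experiment described in the quoted passage is easy to reproduce in miniature. A sketch using a crude Euler integration of the Lorenz system (the step size, step count, and initial y and z are arbitrary choices):

```python
# Miniature reproduction of the rounding experiment described above: integrate
# the Lorenz system twice, once from x0 and once from x0 rounded to 3 digits,
# using a crude Euler step (dt and the initial y, z are arbitrary choices).
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([0.506127, 1.0, 1.0])   # full-precision initial condition
b = np.round(a, 3)                   # the 3-digit printout, re-entered

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.0f}: separation = {np.linalg.norm(a - b):.4f}")
```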

Reply to  William R
May 11, 2016 6:54 pm

Since when do the warmists use statistical models anyway?

Yes. You beat me to it, but that’s the real problem and also the reason the models they’ve developed using “hindcasting” can never work.
Anyone with even a basic understanding of statistical modeling knows you can’t ever use an empirically derived model outside its calibration range; it is never valid to extrapolate from an empirical model.
I’ve had this discussion with devotees of the CMIP models many times; on every occasion I’ve been assured I don’t know squat about how those models work, that they are in fact theoretical models based on physical relationships (i.e. the so-called “greenhouse” effect) and therefore may legitimately be used to predict the behavior of Earth’s climate. Mann very clearly just denied that, and in so doing admitted the methods currently used to model climate change are completely invalid. They can never succeed.
It’s a fundamental rule: You can’t ever extrapolate from an empirical model. Unless you have a complete theoretical model of the system, it will fail to predict. It may appear to succeed randomly, but it will never consistently predict the behavior of the modeled system. This is burned into the brain of every single experiment designer employing statistical methods.
Under this declaration, there is absolutely no reason to continue funding this research. It’s snake oil. Plain and simple.
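
Bartleby’s rule of thumb is easy to demonstrate with synthetic data: a high-degree polynomial fit that looks excellent inside its calibration range and collapses just outside it. A minimal sketch:

```python
# Illustration of the rule above: an empirical fit can look excellent inside
# its calibration range and be wildly wrong just outside it. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)
x_cal = np.linspace(0, 10, 50)
y_cal = np.sin(x_cal) + rng.normal(0, 0.05, x_cal.size)

# 9th-degree polynomial, fit with x scaled to [0, 1] for decent conditioning
coeffs = np.polyfit(x_cal / 10.0, y_cal, 9)

inside = np.sqrt(np.mean((np.polyval(coeffs, x_cal / 10.0) - np.sin(x_cal)) ** 2))
x_out = np.linspace(10, 14, 20)          # just beyond the calibration range
outside = np.sqrt(np.mean((np.polyval(coeffs, x_out / 10.0) - np.sin(x_out)) ** 2))

print(f"RMS error inside calibration range:  {inside:.3f}")
print(f"RMS error extrapolating to [10, 14]: {outside:.3g}")
```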

Reply to  Bartleby
May 11, 2016 7:06 pm

Anyone with even a basic understanding of statistical modeling knows you can’t ever use an empirically derived model outside its calibration range; it is never valid to extrapolate from an empirical model.

I have argued for years that the climate models are nothing but expensive curve fitting programs. The supercomputers and millions of lines of code are just the lipstick they put on the curve-fitting pig.

Johann Wundersamer
Reply to  Bartleby
May 12, 2016 3:26 am

My landlord does this every year.
From the database of years gone by, the monthly costs of winter heating are derived – adjusted anew each year.
Predictions – forget it.

Nick Stokes
Reply to  William R
May 11, 2016 7:31 pm

“Since when do the warmists use statistical models anyway? I got the impression all their supercomputer-powered models were deterministic…wholly dependent on input variables, assumptions, and initial conditions.”
People use statistical models when appropriate. But yes, GCMs are solutions of the basic conservation equations (mass, momentum, energy) locally. That is precisely the issue of Mann’s paper. Some say that the slowdown was statistically predictable by identifying PDO, AMO etc. GCMs don’t generally do that, for good reason. The physics they use is local, within elements, and superimposing other effects has to be done very carefully. Still, it is done, most notably with radiation. Mann is finding out whether adding observed cycle information (without mechanism) would improve the result. He’s something of an AMO enthusiast himself. But no, he says, statistically it wouldn’t have helped predict the slowdown. NAO gives some help, but only predicts a small effect.
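
For readers wondering what “solutions of the basic conservation equations… locally” looks like numerically, here is a toy one-dimensional upwind advection step in which each cell exchanges flux only with its neighbours, so total tracer mass is conserved to round-off. It is a sketch of the numerical style only, not code from any actual GCM:

```python
# Toy illustration of solving a conservation law locally: a 1-D upwind
# finite-volume advection step on a periodic domain. Each cell exchanges flux
# only with its neighbours, so total mass is conserved to round-off.
import numpy as np

n, u, dx, dt = 100, 1.0, 1.0, 0.5                     # cells, speed, spacing, step
q = np.exp(-0.5 * ((np.arange(n) - 20) / 5.0) ** 2)   # initial tracer blob

mass_before = q.sum()
for _ in range(80):
    flux = u * q                                      # upwind flux out of each cell
    q = q - (dt / dx) * (flux - np.roll(flux, 1))     # local flux divergence

print(f"mass before: {mass_before:.6f}, after: {q.sum():.6f}")  # identical
print(f"tracer peak has advected to cell {q.argmax()}")         # ~20 + u*t/dx = 60
```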

Reply to  Nick Stokes
May 11, 2016 11:30 pm

Gavin Schmidt probably was one of his anonymous reviewers. He likes this sort of hand waving.

R Hargrove
Reply to  Nick Stokes
May 12, 2016 1:15 am

Mann was a physics hick when he started scamming fake hockey stick generators as statistics, and you’re one now.
What’s the name of the law of thermodynamics used to solve for the temperature of a gas in chemistry?
What’s its equation, and what do all the factors stand for?
Show where the Greenhouse Gas Effect fits when solving for temperature in gas physics.
Or you’re just another fake like all these other Greenhouse Gas Effect-barking hicks.


Johann Wundersamer
Reply to  Nick Stokes
May 12, 2016 3:55 am

Nick Stokes on May 11, 2016 at 7:31 pm
“Since when do the warmists use statistical models anyway? I got the impression all their supercomputer-powered models were deterministic…wholly dependent on input variables, assumptions, and initial conditions.”
People use statistical models when appropriate.
________________________
And yes, Nick: people use heavy metal when appropriate.
What are you gonna
CON vince
us.

MarkW
Reply to  Nick Stokes
May 12, 2016 6:37 am

How can you model something that even the IPCC’ers admit they don’t understand?

Paul Westhaver
May 11, 2016 5:24 pm

“The team’s analyses indicate that statistical methods could not have forecast the recent deceleration in surface warming because they can’t accurately predict the internal variability in the North Pacific Ocean, which played a crucial role in the slowdown.”
Well, if the models can’t predict the past, then what good are they for predicting the future? This sounds like an indictment of their own sloppy, half-a$$ed conjuring. I am surprised that the twit Mann didn’t instrument trees to get a signal for the future…. somehow.

Proud Skeptic
May 11, 2016 5:33 pm

Translation – We simply have no idea what is going on and we are scrambling to find something in our models and everything else we have said that sounds like we aren’t completely full of it.

Latitude
May 11, 2016 5:40 pm

In contrast, a multidecadal signal in the North Atlantic does appear to have been predictable
=====
There’s so much wrong with this lucky guess……..

May 11, 2016 5:52 pm

It has been shown that the surface temperature series is not a random walk but a Hurst process at a wide range of time scales. So it is unlikely that the models will ever get it right with OLS trends that treat the temperature time series as Gaussian and independent and ignore memory, dependence, and persistence in the data. See for example
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2776867
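
For reference, the Hurst exponent mentioned above is often introduced via rescaled-range (R/S) analysis. A minimal sketch on white noise, where the exponent should come out near 0.5; serious estimates use more careful methods:

```python
# Minimal rescaled-range (R/S) sketch of a Hurst-exponent estimate. White
# noise should give H near 0.5; real analyses use more careful estimators.
import numpy as np

def rescaled_range(window):
    """Range of cumulative deviations from the mean, divided by the std."""
    dev = np.cumsum(window - window.mean())
    return (dev.max() - dev.min()) / window.std()

rng = np.random.default_rng(4)
x = rng.normal(size=4096)

sizes = [64, 128, 256, 512, 1024]
avg_rs = [np.mean([rescaled_range(w) for w in np.split(x, x.size // n)])
          for n in sizes]

# Slope of log(R/S) against log(window size) estimates the Hurst exponent H
H, _ = np.polyfit(np.log(sizes), np.log(avg_rs), 1)
print(f"estimated Hurst exponent: {H:.2f}")
```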

Nick Stokes
Reply to  chaamjamal
May 11, 2016 7:34 pm

“It has been shown that the surface temperature series is not a random walk but a Hurst process at a wide range of time scales.”
By whom? It is neither.
” So it is unlikely that the models will ever get it right with OLS trends that treat the temperature time series as Gaussian and independent “
GCMs do nothing like that.

Reply to  Nick Stokes
May 11, 2016 8:41 pm

Of course they do, except for stuff they want to throw in.

Marcus
May 11, 2016 5:55 pm

…Our greatest threat is the liberal media!

May 11, 2016 5:56 pm

Statistical models are not fit for the purpose of forecasting climate trends. Forecasting general trends, with probably useful accuracy, is straightforward when we know where we are with regard to the natural millennial and 60-year cycles which are obvious by simple inspection of the temperature and solar records.
See Figs 1, 5, 5a and 8 at
http://climatesense-norpag.blogspot.com/2016/03/the-imminent-collapse-of-cagw-delusion.html
Here is Fig 1, which illustrates the general trends, forecasts and in particular the millennial peak and trend inversion at about 2003.
Figure 1 above compares the IPCC forecast with the Akasofu paper forecast and with the simple but most economic working hypothesis of this post (green line) that the peak at about 2003 is the most recent peak in the millennial cycle so obvious in the temperature data. The data also show that the well documented 60 year temperature cycle coincidentally peaks at about the same time.
Here are Figs 5 and 5a.
The cooling trends are truncated to exclude the current El Nino as an end point. The Enso events are temporary aberrations which don’t have much influence on the trends – see the 1998 and 2010 events in Figs 5 and 5a.
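
The kind of decomposition this comment relies on, a linear trend plus a fixed-period cycle, is itself a statistical fit. A minimal sketch on synthetic data, with the 60-year period taken as given rather than discovered:

```python
# Sketch of a trend-plus-fixed-cycle decomposition on synthetic data. The
# 60-year period is assumed, not discovered, mirroring the comment's premise.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(1880, 2016, dtype=float)
temp = (0.007 * (t - 1880)
        + 0.12 * np.sin(2 * np.pi * (t - 1880) / 60)
        + rng.normal(0, 0.1, t.size))

# Design matrix: intercept, linear trend, and sine/cosine at a 60-year period
A = np.column_stack([np.ones_like(t), t - 1880,
                     np.sin(2 * np.pi * t / 60), np.cos(2 * np.pi * t / 60)])
coef, *_ = np.linalg.lstsq(A, temp, rcond=None)

amplitude = np.hypot(coef[2], coef[3])
print(f"fitted trend: {coef[1] * 100:.2f} K/century; 60-yr amplitude: {amplitude:.2f} K")
```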

Reply to  Dr Norman Page
May 11, 2016 11:26 pm

By any other name, a Bond event in the Holocene, aka a D-O event in the larger, longer Pleistocene.

H. D. Hoese
May 11, 2016 6:10 pm

What is semiempirical?

Reply to  H. D. Hoese
May 11, 2016 6:31 pm

I taught I taw a puddy-cat!
I did, I Did, I DID saw a puddy tat!

nankerphelge
Reply to  H. D. Hoese
May 11, 2016 7:04 pm

Thought the same, but there is a definition. Sounds like an oxymoron to me. In fact I reckon it is.

CaligulaJones
Reply to  H. D. Hoese
May 12, 2016 8:41 am

In the government, they have base funding, one-time funding, and recurring one-time funding…I only wish I was making that up.

Reply to  H. D. Hoese
May 12, 2016 2:48 pm

Nonempirical means a theoretical model that is a construct of logic and underlying principles, one that makes predictions by which observations can be made and experiments can be run which will strengthen or weaken confidence. Empirical means based on, concerned with, or verifiable by observation or experience rather than theory or pure logic. So it follows that semiempirical is not based on pure logic and underlying principles, and is only supported by adjusted data products!

Raven
Reply to  H. D. Hoese
May 12, 2016 10:07 pm

It must be a bit like being semi-pregnant.

May 11, 2016 6:18 pm

It seems the opening line of the paper presumes to know the future by stating flatly that the pause was temporary.
It seems patently unscientific to start with a predetermined conclusion.
Of course, this is no surprise to me or anyone else who has kept up with the great yet settled debate, but it does immediately flag the author, even for a newcomer to the issue, as highly biased and therefore not to be taken as a credible source of scientific information.

thingodonta
May 11, 2016 6:20 pm

” the internal variability of the North Pacific, which played a critical role in the slowdown, does not appear to have been predictable using statistical forecast methods”.
They didn’t know about the PDO, so it wasn’t ‘predictable’; now they do, and they have to adjust the models.

Reply to  thingodonta
May 11, 2016 9:03 pm

thingodonta, you didn’t read what they said. They said that North Pacific internal variability was not “… predictable …” That means that future models still won’t use “… statistical forecast methods.” North Pacific internal variability will not (cannot in their world) be used in future models going forward. The next IPCC Report, then, need not concern itself with inaccurate prior “projections” as they relate to the intervening past. Thus, the writers of history just erase portions of the past. In the future, those pointing out discrepancies are dismissed as cranks.

Editor
May 11, 2016 6:44 pm

William R nails it, but I’ll add my bit anyway: The climate computer models (see here) operate entirely on smallish bits of local weather. That is, they use a global grid of smallish cells interacting in small time slices (by definition, conditions in a small area over a short time are “weather”). They are not statistical models.
The article says “Several studies have suggested that climate models could have predicted this slowdown [..]. To test this hypothesis, Mann et al [compared] [..] compare modeling projections with the already observed outcomes. The team’s analyses indicate that statistical methods could not have forecast the recent deceleration in surface warming“.
The finding by Mann et al is therefore completely false, because what they did (applying statistical methods to model projections and observed outcomes) has nothing whatsoever to do with what goes on inside climate models and therefore cannot determine whether the climate models could have predicted the slowdown. All the study did was to show that the models did not predict the slowdown.
But we knew that already.

Reply to  Mike Jonas
May 11, 2016 7:05 pm

Mike, I assume (see here) was meant to be a link? It didn’t come through. I’m very interested, because what you’ve done is refute Mann’s understanding of the models, an argument I’ve faced many times before when trying to point out the fundamental architectural and mathematical failures of GCM models.
I’d really like to read and understand your sources.

Editor
Reply to  Bartleby
May 11, 2016 9:23 pm

Bartleby – the link is one of a series of articles that I wrote a while back, looking at and into the models. The missing link was : https://wattsupwiththat.com/2015/11/08/inside-the-climate-computer-models/
The sources for that article are linked from the article. There’s also a link to my previous article (take special note of the footnotes!).

Reply to  Bartleby
May 11, 2016 9:47 pm

Thanks Mike, I appreciate it!

Reply to  Bartleby
May 11, 2016 10:39 pm

Mike writes in the citation above:

The harsh reality is that any science is wrong if its mathematics is wrong. The mathematics of the climate models’ internal workings is wrong.

Thanks for the treatment Mike, now I believe I have a better understanding of the GCM models and the term “semiempirical”.
After reading your review, I don’t feel compelled to withdraw my criticism; these models may include some poorly understood theoretical components, but they also attempt to “blend” those theoretical parts with empirical models. In my opinion the approach is both flawed and doomed to failure.
I don’t think the criticism I made, that it isn’t legitimate to extrapolate from empirical models, is invalid. They appear to be doing exactly that, while at the same time claiming the models are based on theoretical models. I have no idea how such an approach ever made it past scientific review, since it is completely unfounded.

Reply to  Bartleby
May 11, 2016 10:53 pm

PS.
Thinking just a bit more about this (and after putting down a couple glasses of good red wine so don’t take this too seriously) I’d speculate the GCM models adapted for Climate Change use resulted from a small group of post-docs having an “oh shit!” moment when they began to understand the true complexity of statistical thermodynamics, not to mention the outrageous computational resources needed to make any progress at all.
Books have been written on the perils of attempting to simultaneously solve multiple partial differentials uniquely; how could they make anyone believe they’d cornered the market? Where are the papers?
I definitely agree with your conclusions. This is all smoke and mirrors.

Reply to  Bartleby
May 12, 2016 7:45 am

Carbon dioxide in the atmosphere cannot be compared to blue dye in a river. Metacycles like PDO even less so. Filtering them out and reinserting them is just a tautology.
Statistics is just witching, useful only as an indication of where to drill for physics.

Nick Stokes
Reply to  Mike Jonas
May 11, 2016 7:47 pm

“The finding by Mann et al is therefore completely false, because what they did (applying statistical methods to model projections and observed outcomes) has nothing whatsoever to do with what goes on inside climate models and therefore cannot determine whether the climate models could have predicted the slowdown. All the study did was to show that the models did not predict the slowdown.”
No, that isn’t what the study did. It showed that statistical models could not have predicted the slowdown (using cycles), or at least not by much. He’s really interested in whether a newish style of decadal predictions using AMO etc. information might have been able to do better. These are initialised, and would operate, he suggests, by removing the AMO component from the initialising data and restoring it in the results. But that isn’t going to work until some statistical predictability is shown.

Editor
Reply to  Nick Stokes
May 11, 2016 9:11 pm

With all due respect, that’s tosh. “Several studies have suggested that climate models could have predicted this slowdown [..]. To test this hypothesis, Mann et al [..]“. Mann et al purported to be testing the hypothesis that climate models, NB. not statistical models, could have predicted …..

Reply to  Nick Stokes
May 11, 2016 9:25 pm

Ferchristsakes, there have been randomly varying temperatures, with a general downtrend over the last few thousands of years, over this interglacial. There is no statistical predictability over any timeframe. Other than the use of pseudo-scientific mumbo jumbo, there is no way to ascribe the minor jig from 1975 to 1998 to CO2. Public alarm will play out, at great cost, to the benefit of the obvious.
Dave Fair

Nick Stokes
Reply to  Nick Stokes
May 11, 2016 9:58 pm

Mike,
With all due respect, that’s tosh. “Several studies have suggested that climate models could have predicted this slowdown [..].
No. The press release didn’t specify GCMs. In fact, they are talking about decadal prediction – could have been clearer. But the paper is very clear. They tested statistical models – not GCMs. There is some use of EBM’s, which is different again. But what is perfectly clear, from both abstract and paper, is that Mann was testing statistical models. And the purpose, clear in the paper, is to test whether they (AMO etc) might be useful in decadal prediction.
Nobody expects that GCMs would predict the slowdown. They aren’t initialised to do that. “Possible implications for initialized model predictions are also discussed” is the key phrase in the abstract.

Editor
Reply to  Nick Stokes
May 11, 2016 10:15 pm

Nick – OK I accept your explanation. But the article as written did say otherwise.
I’m amused by your statement “Nobody expects that GCMs would predict the slowdown. They aren’t initialised to do that.” You and I know that the climate models can never produce any climate prediction of any value, but in the wider community surely everybody expects the models to make reasonable climate predictions, and surely everybody expects climate models to be initialised correctly for the real climate. Isn’t it now long overdue for the wider community to be told that the climate models bear absolutely no relationship to the real climate?

Nick Stokes
Reply to  Nick Stokes
May 11, 2016 10:36 pm

“Isn’t it now long overdue for the wider community to be told that the climate models bear absolutely no relationship to the real climate.”
No, climate models do track the evolution of earth climate. They just don’t do weather, because they aren’t initialised to do that. For the longer scale weather (ENSO) they do show the appropriate oscillations – they just can’t predict the phasing.
Here’s an analogy. You’re injecting blue dye into a river. You want to predict where and how much you’ll see blue downstream. If you carefully measure the river flow at the input point, you might predict the downstream trajectory for a metre or so. That’s like NWP. But pretty soon (a few metres), you won’t be able to say where the eddies will take it.
But if you look 1 km downstream, and you know the injection rate, you’ll be able to give a good estimate of how blue the sampled water is, and it won’t depend on the eddies that were around at injection time. All that has faded, but conservation of mass still works, and gives a good answer, even though medium term prediction failed. That’s because of mixing. The weather itself doesn’t mix, but the climate averaging process does, and conservation holds.
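
Stokes’s analogy can be put in toy-model form: parcels advected downstream with random “eddy” kicks. Where any one parcel goes over a few steps depends on the particular eddies (the “weather”); the time-averaged count far downstream depends only on the injection rate and the mean flow (the “climate”). A sketch, with all numbers arbitrary:

```python
# Toy version of the dye analogy: parcels advect one cell per step with random
# "eddy" kicks. One parcel's short-range path is seed-dependent (the weather);
# the time-averaged count far downstream is not (the climate).
import numpy as np

def run(seed, steps=4000):
    rng = np.random.default_rng(seed)
    parcels = np.empty(0)
    far_counts = []
    for t in range(steps):
        parcels = np.append(parcels, 0.0)        # steady injection at the source
        parcels += 1.0 + rng.normal(0, 2.0, parcels.size)  # advection + eddies
        parcels = parcels[parcels < 400]         # parcels leave the domain
        if t >= 1000:                            # count only after spin-up
            far_counts.append(np.sum((parcels >= 200) & (parcels < 210)))
    one_blob = 5.0 + rng.normal(0, 2.0, 5).sum() # one blob after five eddy kicks
    return one_blob, np.mean(far_counts)

for seed in (0, 1, 2):                           # three different eddy histories
    blob, far = run(seed)
    print(f"seed {seed}: one blob after 5 steps at {blob:5.1f}; "
          f"far-downstream mean count {far:.2f}")
```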

RACookPE1978
Editor
Reply to  Nick Stokes
May 11, 2016 10:53 pm

But your analogy assumes the “scientist” – who is demanding the “right” to experiment with the world’s entire economic drivers, and with the life and health and well-being of every person living on the planet – knows enough about the river to APPLY the well-known approximations and assumptions of “mass flow” of the river. Of the ink. And of all of the “other stuff” affecting the river, the blue ink, and all of the rest of the stuff going on that affects the river between the test point and every subsequent point downstream.
We do know, absolutely, that the global average temperature has increased, been steady, decreased, been steady, and increased when CO2 has been steady.
We do know, convincingly, that global average temperature has increased, been steady, decreased, and been steady, and increased while CO2 has been steadily increasing.
Thus, we do know, absolutely, that the global circulation models are incomplete. They do NOT sufficiently predict the future, because they do NOT predict even the present, the past, nor the “invented” past.
For example: Your little analogy falls apart if the blue ink evaporates. If the ink is alcohol-based, and so volume added and the change in final volume do not correspond. If the water evaporates, or goes into a sinkhole. Gets sucked out by an irrigation pump or city water works. Gets added by the clean discharge from a factory or a sewage treatment plant. Gets disturbed by a filling lock or opening dam. Gets sucked up by a deer or antelope playing nearby. Gets the sewage from a bear in the woods.
1. There is NO HARM from CO2 being added to the atmosphere – except when the “scientists” and their tame politicians and social-multipliers become propagandists.

Reply to  Nick Stokes
May 11, 2016 10:57 pm

Nick writes:

It showed that statistical models could not have predicted the slowdown

Nick, this isn’t news though. The important part is they used techniques that are known (well known) to not work. Ever. Under any circumstance.
They appear to have blithely violated every rule of mathematical inference. This isn’t a small thing.

Reply to  Nick Stokes
May 12, 2016 12:46 am

RACookPE1978 writes:

We do know, convincingly, that global average temperature has increased, been steady, decreased, and been steady, and increased while CO2 has been steadily increasing.

However, we also know that CO₂ has increased significantly while temperature has not.
And isn’t this observation alone sufficient to break the model that correlation equals causality, which appears to be the basic hypothesis of AGW? Correlation isn’t causality, but the complete absence of correlation certainly must deny causality?

Reply to  Nick Stokes
May 12, 2016 5:17 am

Nick writes

But if you look 1 km downstream, and you know the injection rate, you’ll be able to give a good estimate of how blue the sampled water is, and it won’t depend on the eddies that were around at injection time. All that has faded

This analogy has assumptions that make it invalid, IMO. Firstly, you can see how fast it’s flowing, and so you can guess how fast the dye will travel with reasonable accuracy even though you can’t know the detail. Flow rate at one point in the river implies flow rate throughout. AGW simply isn’t like that; the flow rate of the river is what we’re trying to work out.
Secondly, there are many injectors of ink, not just the one you’re interested in. Also there are consumers of ink. And all that muddies the waters as far as knowing whether your ink is what you’re looking at downstream.

JohnKnight
Reply to  Nick Stokes
May 12, 2016 9:42 pm

Nick,
“No, climate models do track the evolution of earth climate.”
No maybe at all? No if they got it right? Just do? . .
Logically speaking, you just said it’s not even possible to mess up a climate model such that it will not track the evolution of earth climate, and that is very hard for me to believe. Did you mean something more like they’re only designed to track earth climate, not weather?

South River Independent
Reply to  Mike Jonas
May 11, 2016 9:10 pm

Like Bartleby, I would like to see the details of the climate models. I am reviewing John Casti’s text on mathematical models (Reality Rules) to refresh my memory before checking to see if the climate models are “good.” (In an earlier life I built some models to perform reliability and maintainability analyses, but they were much simpler than a climate model.)
A main point is that a model is not the system being modeled. Although a model does not include all the factors that affect the actual system, it must include all the most important factors and must be coded correctly. Another issue is round-off error, which can be magnified in sequential runs of a model where the output of one run is used as the input for the next run without correcting the round-off error. Unfortunately, the current climate models are not likely to be abandoned until a better model is developed and adopted.
Casti emphasizes that the person doing the modeling must have extensive knowledge of the system and also extensive knowledge of math model development, including the algorithms used to represent how the system actually functions, but does not need to be an expert. The community of AGW skeptics should be working to develop a better model (or system of models) or a more accurate alternative method to project future climate performance.
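
The round-off point deserves a concrete illustration: feed a toy “model” its own output, once at full precision and once rounded each cycle as if written to a truncated file, and the two chains part company. A sketch using the chaotic logistic map as a stand-in for a model run:

```python
# Demonstration of compounding round-off: iterate a toy "model", once at full
# precision and once rounding the output to 4 decimals each cycle (as if it
# were written to a truncated file and read back in as the next run's input).
def model(x):
    # Stand-in model: the logistic map, which amplifies small differences
    return 3.9 * x * (1.0 - x)

full = rounded = 0.123456789
for _ in range(50):
    full = model(full)
    rounded = round(model(rounded), 4)

print(f"full-precision chain: {full:.6f}")
print(f"rounded chain:        {rounded:.6f}")  # the round-off has compounded
```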

Editor
Reply to  South River Independent
May 11, 2016 9:32 pm

See my reply to Bartleby above. Note that I don’t go far into the models, because long before you get to the sort of detail that you are talking about, it becomes very clear that the models are incapable of working. You would like to see a better model (or system of models) or a more accurate alternative method. From my article you can see that the models are heading up a blind alley. As they are currently structured, you just can’t get a “better” or “more accurate” model. A completely different kind of model – top down instead of bottom up – is needed.

Nick Stokes
Reply to  South River Independent
May 11, 2016 10:47 pm

“I would like to see the details of the climate models.”
CCSM is a well documented GCM. The detailed scientific description is here. The code is available via here.

South River Independent
Reply to  South River Independent
May 12, 2016 6:17 am

Mr. Jonas and Mr. Stokes, thanks for your responses. SRI

South River Independent
Reply to  South River Independent
May 12, 2016 6:24 am

Mmmmm, subversion code. Interesting terminology for the climate model. Descriptive?
