On Steinman et al. (2015) – Michael Mann and Company Redefine Multidecadal Variability And Wind Up Illustrating Climate Model Failings

Guest Post By Bob Tisdale

For the past few years, we’ve been showing in numerous blog posts that the observed multidecadal variations in the sea surface temperatures of the North Atlantic (known as the Atlantic Multidecadal Oscillation) are not represented by the forced components of the climate models stored in the CMIP5 archive (which were used by the IPCC for their 5th Assessment Report). We’ve done this using the Trenberth and Shea (2006) method of determining the Atlantic Multidecadal Oscillation, in which global sea surface temperature anomalies (60S-60N) are subtracted from the sea surface temperature anomalies of the North Atlantic (0-60N, 80W-0). As shown in Figure 1, the sea surface temperature data show multidecadal variations in the North Atlantic above and beyond those of the global data, while the climate model outputs, represented by the multi-model mean of the models stored in the CMIP5 archive, do not. (See the post here regarding the use of the multi-model mean.) We’ll continue to use the North Atlantic as an example throughout this post for simplicity’s sake.
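For readers who want to check the arithmetic themselves, here is a minimal sketch of the Trenberth and Shea (2006) subtraction in Python. The gridded anomaly field `sst`, the coordinate arrays and the function names are hypothetical, and a real calculation would also have to mask land and missing grid cells (e.g., with NaN-aware sums):

```python
import numpy as np

def area_weighted_mean(field, lats):
    """Cosine-of-latitude weighted mean of a (time, lat, lon) anomaly field."""
    w = np.cos(np.deg2rad(lats))                      # weight per latitude band
    w = np.broadcast_to(w[:, None], field.shape[1:])  # expand to (lat, lon)
    return (field * w).sum(axis=(1, 2)) / w.sum()

def amo_trenberth_shea(sst, lats, lons):
    """AMO per Trenberth & Shea (2006): North Atlantic (0-60N, 80W-0)
    SST anomalies minus global (60S-60N) SST anomalies."""
    na_lat = (lats >= 0) & (lats <= 60)
    na_lon = (lons >= -80) & (lons <= 0)
    gl_lat = (lats >= -60) & (lats <= 60)
    north_atlantic = area_weighted_mean(sst[:, na_lat][:, :, na_lon], lats[na_lat])
    global_60 = area_weighted_mean(sst[:, gl_lat], lats[gl_lat])
    return north_atlantic - global_60
```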

Figure 1 (Figure 3 from the post Questions the Media Should Be Asking the IPCC – The Hiatus in Warming.)

Michael Mann and associates have attempted to revise the definition of multidecadal variability in their new paper Steinman et al. (2015) Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures. Michael Mann goes on to describe their efforts in the RealClimate post Climate Oscillations and the Global Warming Faux Pause. There Mann writes:

We propose and test an alternative method for identifying these oscillations, which makes use of the climate simulations used in the most recent IPCC report (the so-called “CMIP5” simulations). These simulations are used to estimate the component of temperature changes due to increasing greenhouse gas concentrations and other human impacts plus the effects of volcanic eruptions and observed changes in solar output. When all those influences are removed, the only thing remaining should be internal oscillations. We show that our method gives the correct answer when tested with climate model simulations.

It appears their grand assumption is that the outputs of the climate models stored in the CMIP5 archive can be used as a reference for how surface temperatures should actually have warmed…when, as Figure 1 illustrates, the climate models show no skill at simulating the multidecadal variability of the North Atlantic. (There are posts linked at the end of this article showing that climate models are not capable of simulating sea surface temperatures over multidecadal time frames, including the satellite era.)

Let’s take a different look at what Steinman et al. have done. Figure 2 compares the model and observed sea surface temperature anomalies of the North Atlantic for the period of 1880 to 2014. The data are represented by the NOAA ERSST.v3b dataset, and the models are represented by the multi-model mean of the climate models stored in the CMIP5 archive. Both the model outputs and the sea surface temperature data have been smoothed with 121-month filters, the same filtering used by NOAA for their AMO data.
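The 121-month filter is just a centered running mean; here is a minimal sketch of it (my own illustration, not NOAA’s code):

```python
import numpy as np

def smooth_121(series):
    """Centered 121-month running mean, the filter length NOAA uses
    for its AMO series. Returns NaN where the window would run off
    either end of the record."""
    series = np.asarray(series, dtype=float)
    out = np.full(series.shape, np.nan)
    half = 60                                   # 121 months = 60 on either side
    for i in range(half, len(series) - half):
        out[i] = series[i - half : i + half + 1].mean()
    return out
```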

Figure 2

As illustrated, the data indicate the surfaces of the North Atlantic are capable of warming and cooling over multidecadal periods at rates very different from those of the forced component of the climate models, which is represented by the multi-model mean. (Once again, see the post here about the use of the multi-model mean.) In fact, the surfaces of the North Atlantic warmed from about 1910 to about 1940 at a rate much higher than hindcast by the models. They then cooled from about 1940 to the mid-1970s at a rate very different from that of the models. Not too surprisingly, as a result of their programming, the models then align much better with the data during the period after the mid-1970s.

Steinman et al., according to Mann’s blog post, have subtracted the models from the data. This assumes that all of the warming since the mid-1970s was caused by the forcings used to drive the climate models. That’s a monumental assumption when the data indicate the surfaces of the North Atlantic are capable of warming at rates much higher than the forced component of the models. In other words, they’re assuming that the North Atlantic since the mid-1970s has not once again warmed at a rate much higher than that forced by manmade greenhouse gases.

What Steinman et al. have done is similar to subtracting an exponential curve from a sine wave…where the upswing in the exponential curve aligns with the last minimum-to-maximum swing of the sine wave…without first establishing a relationship between the two totally different curves.
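To make the analogy concrete, here is a toy version of that subtraction with invented curves; the numbers are arbitrary and bear no relation to any dataset:

```python
import numpy as np

t = np.linspace(0, 130, 1561)              # ~130 "years" of monthly steps
sine = 0.2 * np.sin(2 * np.pi * t / 65)    # a 65-year "multidecadal" cycle
expo = 0.4 * (np.exp(t / 130) - 1.0)       # a smooth exponential-like rise

# Subtracting the exponential attributes the sine wave's final upswing
# to the exponential "forcing" by construction, so the residual
# understates the oscillation at the end of the record.
residual = sine - expo
```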

MICHAEL MANN PRESENTED A CLEAR INDICATION OF HOW POORLY CLIMATE MODELS SIMULATE MULTIDECADAL SURFACE TEMPERATURE VARIABILITY

I had to laugh when I saw the following illustration presented in Michael Mann’s blog post at RealClimate. I assume it’s from Steinman et al. In it, the simulated surface temperatures of the North Atlantic, the North Pacific and the Northern Hemisphere (represented by the multi-model mean of the CMIP5-archived models) have been subtracted from the data. That illustration clearly shows that the climate models in the CMIP5 archive are not capable of simulating the multidecadal variations in the sea surface temperatures of the North Atlantic and the North Pacific or the surface temperatures of the Northern Hemisphere.

Figure 3 (Illustration from the RealClimate post)

In other words, that illustration presents model failings.

If we were to invert those curves, by subtracting reality (data) from computer-aided speculation (models), the resulting differences would show how greatly the models have overestimated the warming of the North Pacific and Northern Hemisphere in recent years.

What were they thinking? That we’d let that go by without calling it to everyone’s attention?

Thank you, Michael Mann and Steinman et al (2015). You’ve made my day.

OTHER REFERENCES

We’ve illustrated and discussed how poorly climate models simulate sea surface temperatures in the posts:

For more information on the Atlantic Multidecadal Oscillation, refer to the NOAA Frequently Asked Questions About the Atlantic Multidecadal Oscillation (AMO) webpage and the posts:

CLOSING

Some readers might think that Steinman et al. is nothing more than misdirection, a.k.a. smoke and mirrors. What do you think?

Thanks to blogger “Alec aka Daffy Duck” for the heads-up.

BFL
February 26, 2015 5:51 pm

This should be in your reference section as it is the most educational piece that I’ve seen on the subject of climate models.
http://wattsupwiththat.com/2015/02/24/are-climate-modelers-scientists/

Reply to  BFL
February 27, 2015 12:56 am

Important aspects of the AMO variability ignored by the climate models:
The North Atlantic decadal and multidecadal oscillation AMO (de-trended N. Atlantic SST) can be successfully explained and numerically represented by solar-geomagnetic interactions.
http://www.vukcevic.talktalk.net/GSCp.gif
The N. Hemisphere’s climate is under the control of the polar and sub-tropical jet streams, whereby the long-term zonal-meridional positioning of the jet streams depends on the extent and strength of three primary cells (Polar, Ferrel and Hadley).
http://www.srh.noaa.gov/jetstream//global/images/jetstream3.jpg
Since the equatorial temperature changes little, it is the Arctic temperature which moves the jet streams’ latitudinal location.
Solar magnetic activity reaches the Earth’s poles in the form of geomagnetic storms. NASA: “a two-hour average sub-storm releases total energy of five hundred thousand billion (5 x 10^14) Joules. That’s approximately equivalent to the energy of a magnitude 5.5 earthquake”
This takes the form of an electric current ionising the upper layers of the atmosphere, whereby the atmospheric flow is affected by the changes in the resultant magnetic field (Lorentz law). The Earth’s field (i.e. the magnetosphere’s shielding) is not constant (the internal oscillations are due to the core’s differential rotation – see J. Dickey, JPL).
The strength of the solar incursions is modulated by the interaction of the two fields; since it is strongest at the poles, the effect on the polar vortex and subsequently the Arctic’s jet stream would be strongest.
The geomagnetic effect is also clearly demonstrated in the Arctic temperatures’ up-trend and its multidecadal oscillations, with correlation factor R2 > 0.8.
http://www.vukcevic.talktalk.net/AT-GMF1.gif
The inevitable conclusion must be:
It is the sun!

DD More
Reply to  vukcevic
February 27, 2015 7:49 am

Vuk,
Encyclopedia Americana. Danbury, CT: Grolier, 1995: 532.
By today’s standards the two bombs dropped on Japan were small — equivalent to 15,000 tons of TNT in the case of the Hiroshima bomb and 20,000 tons in the case of the Nagasaki bomb.
In international standard units (SI), one ton of TNT is equal to 4.184 × 10^9 joules (J).
Hiroshima bomb: 15,000 tons TNT
Ton-TNT to joules: 4.184E+09
Joules total: 6.276E+13
“A two-hour average sub-storm releases total energy of five hundred thousand billion (5 x 10^14) Joules. That’s approximately equivalent to the energy of a magnitude 5.5 earthquake.”
5.00E+14 / 6.276E+13 = 7.97 in the correct ‘climate science’ equivalency.
Please revise your 5.5 earthquake to 8 Hiroshima bombs so all of us will then understand the power.
Good post by the way.
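For what it’s worth, the arithmetic checks out; a quick verification in Python, with the values copied from the comment above:

```python
TON_TNT_J = 4.184e9                   # joules per ton of TNT (SI convention)
hiroshima_J = 15_000 * TON_TNT_J      # ~15 kt yield -> 6.276e13 J
substorm_J = 5.0e14                   # NASA's two-hour substorm figure

print(substorm_J / hiroshima_J)       # ~7.97 "Hiroshima bombs" per substorm
```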

Reply to  vukcevic
February 27, 2015 8:07 am

Is it the sun?
Svalgaard: “As the magnetospheric ring current and the auroral electrojets and their return currents that are responsible for geomagnetic activity have generally North-South directed magnetic effects (strongest at night), the daytime variation of the Y or East component is a suitable proxy for the strength of the SR ionospheric current system.”
But what does all this have to do with the AMO?
Data show a direct correlation of the AMO to the Y or East component
http://www.vukcevic.talktalk.net/GMEC-AMO.gif
at latitude 60N, the home of the polar jet stream
http://www.srh.noaa.gov/jetstream//global/images/jetstream3.jpg

BFL
Reply to  vukcevic
February 27, 2015 11:49 am

Is it possible that these could become strong enough to cause extended glaciation/ice ages??

Reply to  vukcevic
February 27, 2015 12:26 pm

BFL hi,
The polar jet stream’s trajectory (as far as I can ascertain) appears to be swung from ‘zonal’ to ‘meridional’ direction by the geomagnetic field just west of Hudson Bay (affected by geomagnetic storms) and by the Icelandic Low, the semi-permanent atmospheric system. In the winter it is located to the south-west of Greenland, controlled by the Atlantic drift current’s down-welling, but in the summer months it moves to Iceland’s north.
Let’s assume that for some reason (e.g. the warm Arctic current inflow across the Greenland-Scotland ridge is weakened, leading to an increase in the summer sea ice) the northern summer down-welling may cease. In such a case the polar jet stream may get stuck in a strong meridional flow for many years. The process would be self-reinforcing by positive feedback. Great Lakes ice would persist through the summer months, providing initial conditions for the onset of the next Ice Age.
The jet stream’s controlling role can be clearly deduced from the fact that during the last glacial, most if not all of Siberia was free of the ice sheet while N. America was under incredible 3,000-meter-thick ice.
http://www.qpg.geog.cam.ac.uk/research/projects/englishchannelformation/1453389260_3dcecb561c.jpg
The illustration is from the University of Cambridge, so we can assume it is the best knowledge available.

Richard M
February 26, 2015 5:54 pm

Looks like more circular logic from the AGW propagandists.

Robert of Ottawa
Reply to  Richard M
February 26, 2015 7:32 pm

You mean epicyclic reasoning

old construction worker
Reply to  Richard M
March 1, 2015 6:58 am

I have a question. Since someone is on a “Witch Hunt”: who funded the research paper? Is there a past conflict of “interest”? If so, then the authors should be banned from submitting any research paper and fired from their jobs.

Tim
February 26, 2015 5:55 pm

More garbage by Mann and his associated clowns.

Reply to  Tim
February 26, 2015 7:34 pm

Coulrophobia.

bones
February 26, 2015 5:58 pm

Subtracting the models from the data only changes the sign of the result. Either way it shows that the models have completely failed to capture the oscillations. Steinman et al. must have applied some serious smoothing to the differences in Figure 3.

Mark from the Midwest
Reply to  bones
February 26, 2015 6:11 pm

Or it suggests that they were just modeling a model, (or models), in the first place

Michael Jankowski
Reply to  bones
February 26, 2015 6:12 pm

Yeah, Fig 3 looks almost more like a theoretical sketch than actual data, as much as it has been smoothed. The “uncertainties” for the temperature impacts of the PDO since 2000 are basically non-existent, too, lol.
Remember when the acolytes used model runs with “only natural forcings” such as sunlight and volcanoes and compared them to model runs which included GHGs, to demonstrate that the only way to get temperature variations the likes of which we were observing is to have GHG forcings included? And now that they need a convenient excuse for the pause, “internal variability” is involved, lol.

Michael Jankowski
February 26, 2015 6:00 pm

Mann and others are providing the “it’s worse than we thought” excuse about how it’s REALLY going to warm when the “pause” ends.
Forget that they pooh-poohed any idea that natural variability could be so dramatic. It somehow can cancel all the warming from the current high levels of CO2 in the atmosphere but was claimed to be basically negligible back when CO2 levels were lower.
It’s amazing that they are given any credibility at all anymore.

Gentle Tramp
Reply to  Michael Jankowski
February 27, 2015 11:07 am

Thus we see how amazingly flexible “Settled Science” can be… 🙂
Let’s wait and see how much more flexible Mann & Co will become when “The Pause” finally ends differently than they hope, namely by changing into a colder rather than warmer climate… 😉

Mterebus
Reply to  Gentle Tramp
March 1, 2015 3:21 pm

Our hapless friend Michael Mann is stuck in the web of lies of his own making. I want to see him scrambling some more for the straws he needs.

Reply to  Bob Tisdale
February 27, 2015 4:22 am

Well, according to the Guardian (where I am currently banned from posting after querying how the UEA CRU fossil fuel funding was different from Dr Soon’s),

Steinman said the new work was a substantial step forward and employed state-of-the-art climate models that previous studies on the subject had not used.

So it’s state of the art models then.
Makes more sense than any state of science.

Mterebus
Reply to  Bob Tisdale
March 1, 2015 3:45 pm

The gospel of global warming has been made into the bible of Common Core, a must-know-and-follow 13th commandment. Never mind the reality of the world tilting into an overdue return of the Ice Age, showing its fangs with a full display of ferocity. Michael Mann is riding the same wave of opportunistic deception as a certain Trofim Lysenko, a favorite of Stalin and recipient of his most coveted medals.

Bart Tali
February 26, 2015 6:12 pm

Faux pause, you say? Then it was a faux rise also.

Reply to  Bart Tali
February 26, 2015 7:35 pm

Faux up ?

lee
Reply to  Streetcred
February 26, 2015 7:54 pm

Faux pas?

rh
Reply to  Streetcred
February 27, 2015 5:00 am

Definitely fauxed up.

Robert Way
February 26, 2015 6:15 pm

This post shows a fundamental misunderstanding about how climate models work. You would never expect climate models to all simulate multidecadal variability in the same phase and magnitude, simply because these are process-based models that have their own synoptics and various forcings as input. This is why taking a large ensemble should average out all the natural variability, leaving only the forced response. However, the forced response in the models is underestimated because of the lack of updated forcings for the historical runs – e.g. many (almost all) models assume no volcanic forcing past 2005, whereas there’s strong evidence of a moderate forcing. Unless you have the forcings correct, the approach of detrending using the models will cause some attenuation of the actual signal.
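The cancellation argument in this comment, and the flat multi-model mean in Figure 1 of the post, can be demonstrated with toy numbers. In the sketch below each hypothetical “run” carries the same linear trend plus a 65-year oscillation at a random phase; averaging suppresses the unsynchronized oscillations but keeps the trend. All values are arbitrary illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.arange(0, 135, 1 / 12)          # ~135 "years" of monthly steps
trend = 0.005 * t                      # shared "forced" warming trend

# 40 "model runs": same trend, each with a random-phase oscillation.
runs = np.array([trend + 0.2 * np.sin(2 * np.pi * t / 65 + rng.uniform(0, 2 * np.pi))
                 for _ in range(40)])
mme_mean = runs.mean(axis=0)

# Detrended variability: one run keeps the full oscillation (~0.14),
# while the ensemble mean retains only a small residual (~0.02).
print(np.std(runs[0] - trend), np.std(mme_mean - trend))
```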

Michael Jankowski
Reply to  Robert Way
February 26, 2015 6:36 pm

Yeah, that’s it. Climate scientists have been looking for every possible explanation for “the pause,” and they haven’t found a way to account for volcanic forcing over the past decade. What exactly is the time lag in determining volcanic activity, and when can we expect to see this added to the models? Why didn’t the peer-reviewers catch that? Won’t it make Steinman et al 2015 look just as silly when these “moderate forcings” are added?

Reply to  Robert Way
February 26, 2015 6:50 pm

Does the climate not vary internally on time scales longer than decades?

1saveenergy
Reply to  bobbyvalentine466921
February 27, 2015 4:35 pm

Not if you are selling a product that requires precise parameters to be believed.
But in the real world…..yes

RACookPE1978
Editor
Reply to  Robert Way
February 26, 2015 7:08 pm

Robert Way

This is why taking a large ensemble should average out all the natural variability, leaving only the forced response. However, the forced response in the models is underestimated because of the lack of updated forcings for the historical runs – e.g. many (almost all) models assume no volcanic forcing past 2005, whereas there’s strong evidence of a moderate forcing. Unless you have the forcings correct, the approach of detrending using the models will cause some attenuation of the actual signal.

What “strong evidence” of “a moderate forcing” (since 2005)? There has been NO change in atmospheric clarity since the atmosphere cleared in 1993-94 following the Pinatubo eruption!
http://www.esrl.noaa.gov/gmd/webdata/grad/mloapt/mlo_transmission.gif
How can you (anyone) insert a moderate forcing into the solution to a problem (CO2 has risen strongly but atmospheric temperatures have not changed) when the “symptom” of the moderate volcanic forcing (a dirtier atmosphere) is entirely absent?

Admad
Reply to  RACookPE1978
February 27, 2015 12:44 am

+ several million

Matthew R Marler
Reply to  Robert Way
February 26, 2015 7:35 pm

Robert Way: This is why taking a large ensemble should average out all the natural variability leaving only the forced response.
The ensemble mean is an unbiased estimate of a population defined from the model, its variants, and the parameter estimates with their uncertainties. But what has that population mean got to do with nature? We know that the components of the model are supposed to be based on physics, but some parameters are more or less guessed at. Almost all the model runs to date are higher than the data that they might have predicted and might be tested against, and the mean is significantly discrepant from the data. The claim that the ensemble mean is the “forced response” is not supported by anything more than the hope that it is.
Steinman et al essentially show that if the ensemble mean is assumed to be an accurate representation of the “forced response”, then the PDO, AMO, and NMO can be redefined to “correct” the model and bring it close to the data. The circularity is obvious. The result is important only if a body of independent evidence (independent of the model) can be collected, confirming that these are good estimates of the PDO, AMO, and NMO.
This is explained in their supporting online material.

Bill_W
Reply to  Robert Way
February 26, 2015 7:45 pm

Robert Way, I think model variability is a better term to use than natural variability in your post above.

AJB
Reply to  Robert Way
February 27, 2015 2:53 am

“This is why taking a large ensemble should average out all the natural variability leaving only the forced response.”
The average of fantasy being fixation.

Mickey Reno
Reply to  Robert Way
February 27, 2015 4:58 am

So, averaging a bunch of wrong guesses leads, a priori, to the correct guess? Sweet – almost like magic!

Richard M
Reply to  Mickey Reno
February 27, 2015 7:51 am

Hey … maybe I should generate several tax forms all showing the government owes me huge rebates and then average them. That way I can claim it is reality and the government should not be able to object …. right? 😉

Alx
Reply to  Robert Way
February 27, 2015 5:55 am

Climate models can be useful in understanding how one discrete component influences climate and how large that influence is; but how it stacks up against other influences, and the chaotic relationships between the influences, remain unknown. To claim otherwise is to be dishonest, ignorant, or delusional.
I can, in my sink, determine how much of an alkaline substance to add to a volume of water to change its alkalinity to a certain level. I can then calculate how much alkaline substance would be required to change the oceans by the same amount and subsequently use a model to forecast multiple timelines using various amounts and various substances to determine when the predicted alkalinity change will occur. Simple math and chemistry, right? Yes, but with no relationship to reality. Oceans are much more complex than simulations that artificially add various alkaline substances to them over time. A simple example, but this is how climate modelers using limited knowledge make extravagant claims disassociated from reality.
The fundamental misunderstanding is climate modelers having little understanding of how little their climate models simulate.

Jim G
Reply to  Robert Way
February 27, 2015 7:47 am

So to summarize; ten wrongs averaged do make a right?

Reply to  Jim G
February 27, 2015 10:14 am

Good for you, Jim G. Cut right to the heart of the issue!

Crispin in Waterloo
Reply to  Robert Way
February 27, 2015 8:20 am

Robert, if models work as you describe, they are no more representative of the climate than the programmer’s ability to think up inputs and their forcing effects. In short, if the programmer thinks the CO2 forcing effect is 4 degrees per doubling, an ensemble of runs will ‘predict’ that, because it is built in.
Another programmer who thinks that CO2 doubling will lead to 1.0 degree of warming will find, all said and done, that their ensemble of runs gives 1.0 as the answer.
Your description is correct, and that is why I don’t believe the output from the models. If they operated isolated from and were not tuned using the past, and agreed with the past, I could believe their forecasts of future temperatures. In fact they are trained using the past, and are unable to predict the present “out of sample”. That is a fatal failure, in my book.
Only by incorporating a pretty comprehensive set of solar and geomagnetic inputs will they come close to replicating the past and present. When that happens (and it will) the influence of CO2 will be seen, at its present level and rate of change, to be quite a minor, though real, influence.

Nic Lewis
Reply to  Robert Way
February 27, 2015 1:33 pm

Robert,
I agree with the first half of your comment, but I don’t follow your claim that the forced response in models is underestimated due to omission of post 2005 volcanic forcing (which anyway seems a minor factor). Surely its omission leads to models [further] overestimating the forced response?
BTW, I thought your comments at RealClimate made very good points. I’m glad you realise what rubbish the Steinman paper is. And well done for bringing up the Booth paper – which is certainly relevant to the natural AMO vs aerosol N Atlantic previous cooling debate – and the devastating critique of it by Zhang et al.

Michael Jankowski
February 26, 2015 6:23 pm

Yes, it’s just the models’ variance from reality…but if you smooth it and make it look pretty, you have some of the humps and dips in the right places and can pretend it’s meaningful.
Can you imagine using these curves to represent “internal variability” and trying to get something published in 2000 arguing that a measurable amount of the observed warming from 1980 to 1990 (eyeball says 0.2 deg C…0.1 from the AMO, 0.05 from each of the two others) was due to “internal variability”?

Steve Thayer
February 26, 2015 6:30 pm

I can just see Michael Mann showing Figure 1 to a group at the IPCC, touting the model predictions, looking like Jim Carrey’s character in “Ace Ventura: Pet Detective” as he points to the figure and says “LLLLLLIKE a GLOVE!”

rh
Reply to  Steve Thayer
February 27, 2015 5:05 am

I can also see him bent over and [trimmed. .mod].

Paul
Reply to  rh
February 27, 2015 5:44 am

I bet Josh would have fun with that one…

February 26, 2015 6:47 pm

Mann: “We show that our method gives the correct answer when tested with climate model simulations.”
There is something wrong with those people. I don’t know what the name for it is, but there’s something wrong with them. They substitute their software-outputs for reality.
I do not say this as an insult: I am merely describing what I observe. Over and over and over again, when purporting to verify an idea, they compare the idea, not to real-life observations, but to their software-outputs.
I worked in software for two decades. I programmed mostly for business applications, accounts receivable, payables, payroll, inventory, and the like. I cannot imagine writing software that does not have to be checked against something in the real world.
But Mann et al. actually seem to believe that their computer simulations are more real than reality. Isn’t that why they check their new ideas against software-outputs?
So, I conclude there is something wrong with those people. Is there a name for what’s wrong with them? Or are my observations about them incorrect? I ask these questions quite sincerely.

PiperPaul
Reply to  Lane Core Jr. (@OneLaneHwy)
February 26, 2015 7:14 pm

Look up NPD, some items there.
http://www.outofthefog.net/Disorders/NPD.html

Bob Weber
Reply to  Lane Core Jr. (@OneLaneHwy)
February 26, 2015 7:30 pm

The answer is they believe in “consensus reality”, not “objective reality”.
In other words ‘make believe’ is more real to them than truth, call it wishful thinking.

Reply to  Bob Weber
February 26, 2015 7:47 pm

Feynman would say, “They are still waiting and wondering why the airplanes haven’t landed yet.”

Tom in Florida
Reply to  Lane Core Jr. (@OneLaneHwy)
February 26, 2015 7:30 pm

I read the same sentence and came to the conclusion that the problem is they believe the “correct answer” is what the model simulations say it is without regard to any reality.
All hail the mighty models.

lee
Reply to  Tom in Florida
February 26, 2015 7:59 pm

My guess is as good as the model guess; therefore it is right.

Tom
Reply to  Lane Core Jr. (@OneLaneHwy)
February 26, 2015 11:06 pm

I do not know what it is called, but it is related to this precept: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.” Upton Sinclair.

Reply to  Lane Core Jr. (@OneLaneHwy)
February 27, 2015 6:29 am

Thanks, everybody, for your thoughts.

NancyG22
Reply to  Lane Core Jr. (@OneLaneHwy)
February 27, 2015 12:37 pm

Disconfirmed expectancy. Other than buying a book about it, Wikipedia covers it. It explains why the cult of global warming denies that temps have not been rising, why its members have become shrill and attack people who point out their failed prophecy, their need to have followers to prop up their beliefs, and the attitude that even if global warming is a hoax, the actions taken will benefit the earth. It explains a lot, in my opinion.

Mterebus
Reply to  Lane Core Jr. (@OneLaneHwy)
March 1, 2015 3:57 pm

“But Mann et al. actually seem to believe that their computer simulations are more real than reality. Isn’t that why they check their new ideas against software-outputs?” Mike knows where his bread comes from, and will do what needs to be done for him to survive another day and be paid for it, and for tomorrow as well. It has nothing to do with any science in the Newtonian understanding of it.

Cold in Wisconsin
Reply to  Lane Core Jr. (@OneLaneHwy)
March 1, 2015 5:50 pm

I believe it is called Delusion, believing your own lies.

Barry
February 26, 2015 6:51 pm

I agree this post shows a fundamental misunderstanding of climate models. The point of Steinman et al. is that there is an underlying trend (called AGW), with natural variability on top of it. What we have seen during the “faux pause” is natural variability that offsets the continued upward trend. As soon as the natural cycle is on the upswing, we will see a “double whammy” effect and rapid warming.

Richard M
Reply to  Barry
February 26, 2015 7:26 pm

You mean that “natural variability” that was ignored by climate scientists in the past? Where was that in their previous model work? Oh wait, they obviously didn’t have a clue how the climate actually works but you now want us to believe they’ve had a revelation and have been blessed with divine knowledge. Yeah, what flavor kool-aid are you imbibing?

Reply to  Barry
February 26, 2015 7:39 pm

” … an underlying trend (called AGW)” which tends to zero, “with natural variability on top of it.” There that now makes sense.

Chris Hanley
Reply to  Barry
February 26, 2015 7:49 pm

“there is an underlying trend (called AGW) …”.
==============================
http://www.woodfortrees.org/graph/hadcrut4gl/from:1845/mean:24/plot/hadcrut4gl/from:1845/trend
You can call it what you like but it amounts to a long-term trend of ~ 0.5C per century, hardly a reason to send the developed economies into a tailspin; let alone deny the majority of the world’s population the lifestyle currently enjoyed in those economies.
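For anyone who wants to reproduce a trend figure like that without woodfortrees, a generic sketch (the series here is hypothetical; the HadCRUT4 data would have to be downloaded separately):

```python
import numpy as np

def trend_per_century(years, anomalies):
    """Least-squares linear trend of a temperature anomaly series,
    expressed in degrees C per century."""
    slope_per_year, _ = np.polyfit(years, anomalies, 1)
    return 100.0 * slope_per_year
```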

Dawtgtomis
Reply to  Chris Hanley
February 27, 2015 9:12 am

And that’s good news Barry, an upward trend means we are still in the interglacial epoch!

Dawtgtomis
Reply to  Chris Hanley
February 27, 2015 9:16 am

…and where will the extra or kinetic energy be released from for a “double whammy”?

David A
Reply to  Barry
February 26, 2015 9:30 pm

Barry, Barry, Barry. Really? A “double whammy”? It sounds more like magic than science. The models are a complete fail on every level. Now that they (climate alarmists) have discovered ocean cycles, they need to learn about them. The earth will still be their teacher. The AMO will turn, and they will not like it.

4 eyes
Reply to  Barry
February 27, 2015 5:04 am

Barry, then model the natural variability. Your assertion implies that you know what the natural variability is. Natural variability wasn’t discussed much before the models couldn’t match history, now it’s the explain all for the difference. You’re just making it all up as you go along.

rh
Reply to  Barry
February 27, 2015 5:21 am

Barry from sks says: ” As soon as the natural cycle is on the upswing, we will see a “double whammy” effect and rapid warming.”
I have to be honest and say I wish Barry was correct. A warmer Earth with more CO2 for the initiation of life would be how I would choose to live the remainder of my life, and how I’d like to leave the planet for my descendants. Unfortunately, it looks like we’re headed for a repeat of the 1970’s, or even worse, the 1870’s.

Hugh
Reply to  rh
February 27, 2015 8:55 am

So you believe temperature will drop to the level it was in the seventies?
May I ask why?

Dawtgtomis
Reply to  rh
February 27, 2015 10:50 am

Hugh, study the sun and the way oceans run, then take heed of history’s warning.

Paul Courtney
Reply to  Barry
February 27, 2015 10:28 am

Barry: Is that a prediction, or just a projection?

chris y
Reply to  Barry
February 28, 2015 9:13 am

Barry- You write “What we have seen during the “faux pause” is natural variability that offsets the continued upward trend.”
Hansen vociferously disagreed with you as recently as 2003:
“As we shall see, the small forces that drove millennial climate changes are now overwhelmed by human forcings.”
Hansen et al., 2003 activist bulletin, Columbia University
But then the gobsmacking pause initiates a rethink from The Hansen:
“”The longevity of the recent protracted solar minimum, at least two years longer than prior minima of the satellite era, makes that solar minimum potentially a potent force for cooling,” Hansen and his co-authors said.”
Hansen et al., “Earth’s energy imbalance and implications”, 2011 activist report.
“The 5-year mean global temperature has been flat for a decade, which we interpret as a combination of natural variability and a slowdown in the growth rate of the net climate forcing…The annual increment in the greenhouse gas forcing (Fig. 5) has declined from about 0.05 W/m2 in the 1980s to about 0.035 W/m2 in recent years.”
Hansen et al, 2013 activist bulletin, Columbia University
Still waiting for Hansen’s rethink on the dead-certain anthropogenic interpretation of warming from 1970 – 2000….

Matthew R Marler
February 26, 2015 7:01 pm

From their supporting online material: Regression Method
To calculate the AMO, PMO, and NMO we 1) regressed the observed mean temperature series onto the model derived estimate of the forced component, 2) estimated the forced component of observed variability using the linear model from step 1, then 3) subtracted the forced component from the observations to isolate the internal variability component.

Everything rides on their model-based redefinition of AMO, PMO, and NMO.
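For concreteness, the quoted three-step recipe amounts to something like the sketch below (a paraphrase in Python, not the authors’ code; the series names are hypothetical):

```python
import numpy as np

def internal_variability(observed, mme_forced):
    """Three-step regression method from the SOM, paraphrased:
    1) regress the observed mean temperature series onto the
       model-derived (multi-model mean) forced component,
    2) take the fitted linear model as the forced component of
       the observations,
    3) subtract it; the residual is labeled 'internal variability'."""
    observed = np.asarray(observed, dtype=float)
    mme_forced = np.asarray(mme_forced, dtype=float)
    slope, intercept = np.polyfit(mme_forced, observed, 1)  # step 1
    forced = slope * mme_forced + intercept                 # step 2
    return observed - forced                                # step 3
```

Note that, by construction, the residual from step 3 is uncorrelated with the model series, so whatever multidecadal behavior the models fail to capture is automatically relabeled “internal” – the circularity Marler describes.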

Dave in Canmore
Reply to  Matthew R Marler
February 27, 2015 12:16 pm

In other words, their results are arbitrary. That’s my take away.

Doug Proctor
February 26, 2015 7:18 pm

Derivative science: set assumptions as fact, derive impact, conclude results correct.
Computationally correct, representationally unknown.

Matthew R Marler
February 26, 2015 7:21 pm

They also write, in the SOM:
The AMO, PMO, and NMO amplitudes are seen to be unusually large with the detrending approach (Fig. S5A). Particularly striking are the very large positive trends in the AMO and NMO at the end of the series, which were indeed predicted (Figs. 2, S2–S4) as structural artifacts of the method. The root mean square (RMS) amplitude of the NMO is 0.14°C, more than twice the simulated amplitude of the hemispheric multidecadal variability from Knight et al. (3). The AMO and PMO have estimated amplitudes of 0.15°C and 0.09°C, respectively, and show high levels of apparent correlation with each other (R2=0.563, lag = 0, statistically significant at p=0.05 level for a one-sided test—see next section for details about the associated calculation). The AMO, PMO, and NMO collectively give the appearance of a “stadium wave” pattern (18,19), wherein each varies coherently but at variable relative lag.
Our regional regression approach yields AMO, PMO, and NMO series that are dramatically different from those obtained with the detrending approach. Absent now are the very large positive trends in the AMO and NMO near the end of the series. The amplitude of the NMO (0.07°C using CMIP5-All) is half that inferred from the detrending approach. Unlike with the detrending approach, the maximum lagged correlation between the AMO and PMO (R2=0.334, lag = 3) is no longer statistically significant.

Consequently, their climate model plus the natural variability yields the observed “faux pause”, with the model-based redefinition of AMO, PMO, and NMO.

February 26, 2015 8:11 pm

What I see in Fig 3 is that the AMO and PDO were phase-locked until 1995. After that the PDO peaked and then really accelerated downward from 2002 onward while the AMO continued up. In other words, they are no longer phase-locked.
That realization says that the warming of the 80’s and 90’s was entirely natural, not “model described CO2 forcing”. Since 2002, the divergence between the AMO and PDO has kept temps generally flat, save for the occasional mild La Nina or El Nino of the past 13 years. With that firmly in hand, it says CO2 forcing is lost in the noise of the natural variability of the AMO and PDO tracking in and out of phase over many decades.

David A
Reply to  David A
February 26, 2015 9:34 pm

This is particularly likely when one measures global atmospheric T via the consistent satellites, vs. the ever-changing surface datasets. (FUBAR surface record)

February 26, 2015 8:22 pm

If only people understood – mixing, tampering with and/or “correcting” actual figures is never, ever allowed in theories of science.
And if one wants to make a computer model, one needs to take ALL, not parts, of every needed parameter AND analyse each one’s premises one by one.

February 26, 2015 8:27 pm

Maybe the Dingo ate my Global Warming!

February 26, 2015 8:27 pm

I am glad someone really familiar with the AMO and PDO is on this. I have not got a copy of the article yet. I did check the first author’s prior research. He does not seem to have been involved in this area up until now, which strikes me as very odd. Mann, on the other hand, cut his teeth on the AMO. Something does not feel right about this. It also took a while for this paper to get to press. Given that it addresses a hot topic, the reasons for the delay may be instructive.

Richard M
Reply to  bernie1815
February 27, 2015 8:02 am

I’d love to see the first draft. I’m guessing the author tried to use the real AMO and PDO and result was devastating to the claims of future warming. Hence, the other authors came along and they created the fictional AMO and PMO that would have no effect on future warming since it is derived from models with predetermined warming.

Theo Goodwin
February 26, 2015 8:28 pm

Mr. Tisdale writes:
“In other words, they’re assuming that the North Atlantic since the mid-1970s has not once again warmed at a rate that is much higher than forced by manmade greenhouse gases.”
No more damning criticism of a scientist can be made. And Mr. Tisdale made the criticism stick.
Warmist Climate Science has never been anything if not top down. The standard argument form is Circular Reasoning: hide your conclusion in your premises.

Alx
Reply to  Theo Goodwin
February 27, 2015 5:57 am

What’s funny is I do not even think they are hiding their conclusion in their premises.
What’s perplexing is why they are not called out on it during peer review.

Joe Bastardi
February 26, 2015 8:30 pm

In the meantime, the AMO has crashed as Gray opined it would. While it was not known as the AMO back in the 1970s, Gray opined (back in the late 1970s) that we would go into the warming period in the mid-1990s till about 2020 and then start back down. He nailed it before all this became twisted by people pushing an agenda and coming to a conclusion based on that.
I would love one of these climate scientists who tell me after the fact why it happened to, just once, forecast something like we are seeing now with the AMO. It’s no different from the guy on Monday morning telling you why a team won or lost a game, or how he could have done better.

policycritic
Reply to  Joe Bastardi
February 26, 2015 10:53 pm

Joe, would you do a Saturday video on the relationship between the PDO and AMO and how the +/- works? Thx.

Reply to  Joe Bastardi
February 27, 2015 1:02 am

This is slightly off topic, but Mr. Bastardi, you are one of my heroes!!!

TedM
Reply to  Danny G. Sage
February 27, 2015 2:30 am

Ditto

rh
Reply to  Joe Bastardi
February 27, 2015 5:35 am

The place I work has a winter betting pool, where we all guess how the winter will turn out: Max snow, min temp, number of days below zero, etc. I base my “guesses” on the weatherbell.com Saturday summary videos and have been kicking butt against the “scientists” I work with. They think my success is akin to the secretary winning the football pool basing her choices on uniform colors. Maybe next year I’ll let them in on my “secret weapon”.

February 26, 2015 8:32 pm

Ben Booth writes a perspective for the Steinman, et al, 2015 paper.
http://www.sciencemag.org/content/347/6225/952.summary
In his Perspective, he uses the figure shown below.
The AMO curve shown does not look correct to me, as I am under the impression the AMO has been positive since about 1990.
http://i57.tinypic.com/157q1f.png

Richard M
Reply to  Joel O’Bryan
February 27, 2015 8:10 am

The AMO and PMO of the paper are model derived fictions. They have nothing to do with reality.

Hugh
Reply to  Richard M
February 27, 2015 8:59 am

Facepalm. Seriously?

February 26, 2015 8:37 pm

Thanks, Bob.
What were they thinking? They were thinking we have been thoroughly Grubberized and will believe anything they feed us.
Smoke is real, mirrors too, Steinman et al. (2015), not at all.

February 26, 2015 8:53 pm

Here is Steinman’s Fig1A.
http://i57.tinypic.com/2qv85jr.jpg
Here is a closeup shot of the Fig 1A of the last 20 years. The observed is outside the 95% confidence interval. Model fail.
http://i57.tinypic.com/296lzzd.jpg
The Figure 1 legend is here:
http://i57.tinypic.com/2yx3i8o.jpg
Fig 1 B and C here:
(more CMIP 5 failures highlighted)
http://i59.tinypic.com/34es5jt.jpg

Richard M
Reply to  Joel O’Bryan
February 27, 2015 8:12 am

That’s why their fictional PMO has a huge drop right at the end.

SteveS
Reply to  Joel O’Bryan
February 27, 2015 12:54 pm

I thought the models “trained” on the years up to 2005…after that is when the models made future predictions. Does anyone know if this is correct?

Rud Istvan
Reply to  SteveS
February 27, 2015 1:17 pm

The CMIP5 protocol published in 2009, revised 2011, table 1, ensembles 1.1 and 1.2 were decadal and 3 decadal hindcasts back from 2005. So the parameterizations were to best fit from roughly 1975 to 2005. This is evident from the stuff Ed Hawkins posted in 2013 (use google images). The least divergence between CMIP5 and observed temp (he used HadCru4) is exactly that period.

February 26, 2015 8:54 pm

The Steinman, Mann and Miller paper recognizes that there are serious differences between the NH, N Pacific and N Atlantic SST model runs and the observed temperatures. The authors are particularly concerned to explain the recent “Pause”. They subdivide the ocean system into three separate regional components, which they label the AMO, NMO and PMO (somewhat redefined versions of the AMO, NAO and PDO).
Basically what the paper does is calculate the differences between models and observations and then attribute the difference to an unexplained “internal variability” in the ocean temperatures. The authors conclude that internal multidecadal variability in NH SST temperatures accounts for the discrepancy between models and observations, and that it also likely offset anthropogenic warming over the last decade. They add that this effect will reverse (at some unspecified date) and add to anthropogenic warming in coming decades.
The AMO, PMO and NMO curves in their figure 3c show, more or less, the well-known 60-year periodicity in the temperature data; see Figs 15 and 16 at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
In other words, they are trying to improve the models and save the model forecasts by adding to them the effects of the PDO, AMO and NAO.
Unfortunately they continue to make the egregious schoolboy error of tuning their models back about 120 years when the main periodicity is millennial (Figs 5-9 at the link). The recent pause is more accurately described as a cooling since 2003, which date represents a peak in both the 60-year and 1000-year periodicities. I estimate that the cooling trend of the millennial cycle will reverse in about 2650, as opposed to in the coming decades. See the peak at
http://www.woodfortrees.org/plot/rss/from:1980.1/plot/rss/from:1980.1/to:2003.6/trend/plot/rss/from:2003.6/trend
That the Steinman et al paper got through peer review for Science Magazine says much about the current state of establishment science. However, in a short comment on the paper in the same Science issue, Ben Booth of the Hadley Centre does sound a refreshingly cautionary (for Science Mag and Hadley) note, saying that the paper is only useful if the current models accurately represent both the external drivers of past climate and the climate responses to them, and that there is reason to be cautious in both of these areas. This comment is an encouraging sign that empirical reality may finally be making an impression on the establishment consciousness. If the expected sharp cooling in 2017-2018 suggested by the drop in the Ap index and Neutron Monitor data in Figs 13 and 14 of the post linked above actually occurs, it should just about finish off the whole CAGW meme.

rh
Reply to  Dr Norman Page
February 27, 2015 5:59 am

“If the expected sharp cooling in 2017-2018 suggested by the drop in the Ap index and Neutron Monitor data in Figs 13 and 14 of the post linked above actually occurs it should just about finish off the whole CAGW meme.”
I don’t think anything can finish off the cagw meme. If this graph(lifted from your site), didn’t do it, then reality doesn’t matter to them.
3.bp.blogspot.com/-zLZvFvWqy8Y/U8REucSDlfI/AAAAAAAAASg/-f_VHXdfaQY/s1600/CMIP5-73-models-vs-obs-20N-20S-MT-5-yr-means1.png

BFL
Reply to  Dr Norman Page
February 27, 2015 1:35 pm

Must….adjust…..reality…… to….. fit….. models……(sleepwalk mode).
Just amazing that the AGW groupies are so illogical as to never catch on to the constantly shifting excuses for the model failures.

February 26, 2015 9:14 pm

Here is Fig 3 from Steinman et al.
http://i60.tinypic.com/6z0tvo.jpg
Note: In Panel A, the phase-locked nature of the oscillations must have contributed mightily to the warming from 1975 to 1995. The authors then concede that the Pause occurred only when the oscillations went negative after 2000.
They wrote in conclusion (my bold):

“Our findings have strong implications for the attribution of recent climate changes. We find that internal multidecadal variability in Northern Hemisphere temperatures (the NMO), rather than having contributed to recent warming, likely offset anthropogenic warming over the past decade. This natural cooling trend appears to reflect a combination of a relatively flat, modestly positive AMO and a sharply negative-trending PMO. Given the pattern of past historical variation, this trend will likely reverse, with internal variability instead adding to anthropogenic warming in the coming decades.”

What a load of BS: In the coming decades the AMO and PDO will be both negative, and grand cooling, along with a possible quiet solar magnetic period spells quite a bit of problem for mankind in the next 20 years. And a death knell to CAGW.

February 26, 2015 9:15 pm

The models do not contain the effect of oceanic oscillations (A). Therefore the observed temperature record (B) and the model mean (C) can be used to estimate the internal variability (A):
A = B – C
This seems to me what Steinman et al 2015 is all about. The paper claims that the model “back projections” can be used to demonstrate a loose fit between the warming and cooling caused by prior oceanic oscillations.
Why did they not combine the effects of oceanic oscillations and observed warming (B-A)? Since the range in A seems to be about 0.4 degrees Celsius and the warming (B) seems to have been around 0.7 degrees Celsius, this ought to give a non-trivial result. And C would be derived by a purely empirical method, would it not?
Then C = B – A. In effect, this approach would tell us what the output of the model should have been, based on observations of the real world. Any model or combination of models could be tested against C to estimate the fit between observations and the models: C could be compared with Csubn, where Csubn is the output from model n.
Moreover, if every model were tested by deriving the difference between C and Csubn, then a new multi-model statistic could be derived that would represent the multi-model best estimate of what should be observed.
I conclude, based on this thought experiment, that what is wrong with Steinman et al 2015 is that the authors have treated the output from the models as being equivalent to observations.
The study design is fatally flawed.
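Colbourne’s thought experiment is easy to state in code. A minimal sketch, assuming an independent estimate of A is available (all names invented for illustration):

```python
import numpy as np

def model_errors(B_obs, A_internal, model_outputs):
    """Invert Colbourne's algebra: given observations B and an
    independent estimate of the internal variability A, the
    empirically implied forced component is C = B - A. Each model's
    output C_n is then scored against it, rather than C_n being
    treated as an observation in its own right."""
    C_empirical = np.asarray(B_obs, dtype=float) - np.asarray(A_internal, dtype=float)
    return [float(np.sqrt(np.mean((np.asarray(C_n, dtype=float) - C_empirical) ** 2)))
            for C_n in model_outputs]   # RMS error per model
```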

Reply to  Frederick Colbourne
February 27, 2015 8:38 am

“Why did they not combine the effects of oceanic oscillations and observed warming (B-A)?”
Doing so would vividly highlight the model error, e. Your C should have been subtracted from Csubn to give e: Csubn – C = e. If Steinman et al were adherents of science, this is what they would have done, as scientists do who want to assess the validity of the models.
That Steinman chose the opposite, and attempted to redefine the observations to fit the model failures, exposes them as pseudo-scientists. Along with their fellow pseudo-scientist editors at Science Mag, what they have done is ultimately to further destroy the reputation of science.

thingadonta
February 26, 2015 9:31 pm

So it’s not a real pause when something natural makes the temperature pause, it’s only a pause when human activities cause a pause. So natural pauses don’t count, and neither do natural warming factors.
Someone might look back at this one day and shake their head.

brian jackson
Reply to  Bob Tisdale
February 27, 2015 10:59 am

I am shaking my head now!!!!!!!

observa
February 26, 2015 10:04 pm

Some poignant quotes from an article about it in The Australian-
‘FORCES of natural climate variability have caused the apparent slowdown in global warming this century but the effect will be temporary, according to new research.
Byron Steinman, of the University of Minnesota Duluth, and Michael Mann and Sonya Miller, of Pennsylvania State University, found that these natural, or “internal”, forces had recently been offsetting the rise in global mean surface temperature caused by increasing greenhouse gas concentrations.
They published their results in the latest edition of the American journal Science.
The deceleration in global mean surface temperature this century despite rising greenhouse gas levels has fuelled the climate wars.
Greenhouse sceptics have seized on it as evidence that the Intergovernmental Panel on Climate Change and other scientists adopting the “consensus” view have exaggerated the risk of global warming.
Some research, including studies by Australian scientists, suggests that an increase in the heat taken up by the deeper waters of the Pacific as well as a pronounced strengthening of Pacific trade winds in recent years due to natural climate variability is responsible.
The team used modelling results from the big international science program, the Coupled Model Intercomparison Project phase 5, to estimate the externally forced component – mainly due to human activity – of northern hemisphere temperature readings since 1854.
“We subtracted this externally forced component from the observational data to isolate the internal variability in northern hemisphere temperatures caused by the Atlantic multidecadal oscillation and a component of the Pacific decadal oscillation, ”Professor Steinman told The Australian. (These natural climate systems are defined by temperature patterns across the oceans and influence climate globally.)
“This showed that the current slowdown is being driven largely by a negative internal variability trend in the Pacific,” he said.
He said the negative shift had been counteracting some of the anthropogenic warming.
“In coming decades, the trend will likely reverse and accelerate the increase in surface temperatures,” he said.’
Welcome to post-normal science- “In coming decades the trend will likely.”……[fill in whatever takes your fancy folks]
Idiots!

observa
February 26, 2015 10:15 pm

Remember the process- Global Warming morphs to Climate Change morphs to Extreme Weather Events and now morphs to Forces of Natural Climate Variability morphs to The Trend Will Likely…???
Woohoo roll the drums and sound the trumpets! Victory over the skeptics and deniers at long last.

February 27, 2015 12:57 am

Bob
I mentioned SST and the period between 1910 and 1940 before in one of your other posts. From looking at how the Met Office addressed bucket corrections (i.e. with lots of assumptions and one cursory experiment performed 20-odd years ago), I don’t know if that rise in temperature is “real”.
The original data was adjusted so the change was less marked, but I’m wondering if even this is an under-estimation of the bucket bias. There may not be that much of a temperature change in that period. Or it may even have been as warm, if not warmer, in the early 20th century than now?
If anyone has time, you can check out the uncertainty description in the HADSST datasets. The bucket measuring technique hasn’t been fully characterised, which leaves a lot more uncertainty on the table. They don’t address the measurement process to the degree that a good scientist and engineer should, which makes me wonder what temperatures were really doing back then.
At any rate, a good post Bob.

February 27, 2015 1:42 am

The following article appeared on the MSN home page for a few hours discussing the Steinman, et al article plus an article on how long the pause would last. I thought that it was kind of interesting to see that the chances of the “pause” lasting 20 years was only 1%. My question is what happens, when the pause lasts another 15 years (plus the ~ 18 years it has already lasted), as it has in the historical past. I wonder what the odds of that happening are?
http://www.msn.com/en-us/weather/topstories/scientists-now-know-why-global-warming-has-slowed-down-and-it%e2%80%99s-not-good-news-for-us/ar-BBhZW8r
The more I learn, the less I know.
Dan Sage

oppti
February 27, 2015 2:00 am

There is an underlying trend caused by humans – global dimming until 1980 and then global brightening. Aerosols used to cool; now they do not, in the NH at least.

basicstats
February 27, 2015 2:51 am

Is this just a reworking of Mann, Steinman and Miller (2014), replacing output from Energy Balance Model(s) with that from GCMs? With the same ‘models too good to reject’ premise as the previous paper, as described by Matthew Marler above. Comprehensively examined by Nic Lewis at the time, as I recall.
Is ‘Science’ short of climate papers?

Reply to  basicstats
February 27, 2015 8:12 am

Science is all in on the pseudo-science that is ACO2=CAGW. Marcia McNutt & Co. seem to have forgotten, as Mann and Steinman have here, that model outputs are not observations. And most importantly, they want to bend reality to fit the model, because the model promises boatloads of research funding.

Bill Illis
February 27, 2015 3:56 am

If there is, indeed, an underlying warming trend caused by GHGs, then it is a very low trend, certainly less than the theory predicts.
If the explanation for the pause is natural variability, then that variability also existed in the past. The “natural” extension would be to pull that past variability out of the past temperatures and arrive at the underlying GHG trend. Go back to 1880 and estimate the trend. The theory would have to be re-written.
How come Michael Mann, Trenberth, and Foster and Rahmstorf stop short of carrying out that natural extension? You know what, they have done exactly that but have chosen not to present the results. Because it says the theory has to be rewritten.

rgbatduke
February 27, 2015 4:33 am

Hi Bob,
I think you are being far too kind when you accept the notion that the MME mean is, in fact, a meaningful quantity. Indeed, calling the collection of models in CMIP5 a “statistical ensemble” is a bit of a travesty all by itself. They are not independent and identically distributed samples drawn from a distribution of perfectly correct models (plus noise). They fail in this on almost every specific criterion in the statement. They are not independent — they share code, history, and many of them were written as minor variants of the same program by a single federal agency that is funded at phenomenal levels because of what they predict, project, prophesy, whatever. They are not identically distributed objects generated by a random process unless dice were used at some point during the actual construction of the models (as opposed to within the models in some sort of Monte Carlo), although sometimes the code itself might look as though it was written by mad monkeys armed with dice. Nor are they drawn from a distribution of perfectly correct models plus errors that are collectively free from bias. Rather, since they share so much actual code and so many of the same limitations, they are almost certainly not collectively free from bias, including systematic bias introduced by shared errors in the dynamical assumptions and physics.
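The shared-bias point has a simple numerical illustration: give every “model” in an ensemble a common systematic error (shared code, shared physics) plus its own independent noise, and the ensemble mean converges on the bias, not on the truth, no matter how many models are averaged. A sketch with arbitrary numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

truth = 0.0          # the "real" quantity being estimated
shared_bias = 0.3    # common systematic error from shared code/physics
n_models = 100

# Each "model" = truth + the shared bias + its own independent noise.
models = truth + shared_bias + rng.normal(0.0, 0.2, size=n_models)

# Averaging shrinks the independent noise (~0.2/sqrt(100)) but leaves
# the shared bias untouched: the mean lands near 0.3, not 0.0.
print(models.mean())
```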
Finally, the process that they are modelling is not a linear, or even a well-behaved, process. The models being averaged individually fail to come close to replicating the dynamical spatiotemporal scales visible in the real-world data. That is, they have the wrong autocorrelation, the wrong amplitude of fluctuation, and a generally diverging envelope of possible future trajectories per model, where some of the models included are so obviously incorrect that it is difficult to take them seriously; but they are all included anyway because the worst models show the most warming, and without that, even the MME mean would be far closer to reality and far less “catastrophic”.
Two other comments. Your curves above, for the most part, present lines as if those lines are “the” temperatures being presented. In actual fact, those lines all come with error bars. In a sane computational universe, those error bars would start at a substantial level in the present (HadCRUT4’s acknowledged total error is around 0.2 C in the present) and would increase to many times the present error in the increasingly remote past. IMO the claims for precision/accuracy in the remote past are absurd — HadCRUT4 claims total error in 1850 is only around 0.4 C for the global anomaly, twice that of 2015, for example. This is absurd, given the importance of things like Pacific Ocean temperatures in any estimate of global temperature or its anomaly and given the simple fact that ENSO was only discovered, named, and subsequently studied in roughly 1893. In 1850 vast tracts of the Earth were terra incognita, not inhabited or systematically measured by Europeans wielding even indifferent thermometric instrumentation let alone the comparatively high precision instrumentation of the present. If the best HadCRUT4 can manage is halving the total error claimed in 1850 with the vast collection of modern thermometers at their current disposal, including the entire ARGO array, there is something seriously wrong, and yet 0.2 C seems quite reasonable for a current error estimate, possibly generous given that HadCRUT4 does not, apparently, correct for certain systematic biases such as the UHI effect.
Still, decorating the lines on your graphs even with error estimates that fail a statistical common-sense sanity check and are probably seriously underestimated is better than presenting the lines as if they are free from error, or as if the error is confined to the width of the drawn lines. Yes, it makes the graphs messier, but without error bars the graphs are potentially meaningless.
I cannot emphasize this point enough, because it is pervasive in public presentations of climate science. It also leads to my second comment. In the graph above of the AMO, PMO, and NMO, the displayed errors are truly absurd. As I noted above, ENSO was only discovered in 1893, and expeditions to study it were subsequently launched. Perhaps by 1900 it and the PMO were being observed in a reasonably systematic way by scientists, although at the time they doubtless had to launch “expeditions” to do so, and I’m quite certain that the record is sparse and incomplete well into the 20th century. In contrast, the Atlantic was heavily trafficked and surrounded on nearly all sides by ports with cities and thermometers. Yet it is the NMO that has the large, apparently diverging error bars in the graph above, followed by the AMO (both errors exploding pre-1900), while the PMO was, apparently, known to better precision in 1880 than in 1970.
Say what?
A second point to make is that these curves supposedly are the result of double differences. By that I mean that they are the result of data that has twice had a “mean” background behavior subtracted out. The first time is when actual thermometric data has some base value subtracted (as if this base value, often the result of a local average over some modern-era reference period, is known to infinite precision) to form the “anomaly”. The second time is when the global anomaly, either surface temperature or sea surface temperature, is subtracted out to discover the (A,N,P) multidecadal “oscillation”. From the sound of it, the curves above have a third subtraction, the CMIP5 MME mean.
There are rules for compounding precision. If I subtract two big numbers — such as (for example) 288.54 and 288.17 — to make a small number, 0.37, the small number loses three significant figures of precision. If I subtract two big numbers uncertain at some level — such as (for example) 288.54 ± 0.2 and 288.17 ± 0.2 — the result is (in crude terms) 0.37 ± 0.4, which basically means that we have no idea what the result is. If we then take two of these numbers, 0.37 ± 0.4 and 0.32 ± 0.4, and subtract them, we get something like 0.05 ± 0.8.
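The arithmetic is trivial to mechanize. In the toy sketch below (not anyone's published error model), the ± 0.4 is the crude linear sum of the errors; adding independent errors in quadrature would still give ± 0.28, which swamps the 0.37 just the same.
    import math

    # Toy sketch: propagate uncertainty through a subtraction, both the
    # crude linear way (da + db) and in quadrature for independent errors.
    def subtract(a, da, b, db):
        return a - b, da + db, math.hypot(da, db)

    d1, crude1, quad1 = subtract(288.54, 0.2, 288.17, 0.2)
    print(f"{d1:.2f} +/- {crude1:.2f} crude, +/- {quad1:.2f} quadrature")

    # A second difference of two such anomalies compounds it again.
    d2, crude2, quad2 = subtract(0.37, crude1, 0.32, crude1)
    print(f"{d2:.2f} +/- {crude2:.2f} crude, +/- {quad2:.2f} quadrature")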
These are serious problems. I’m using a very simple lab device to teach physics at the moment that has a wheel — actually I think it is a mouse wheel — that measures how far a cart travels at roughly 1 mm of precision. One then has to estimate things like velocity and acceleration from this mm-scale data. In a typical experiment, the cart moves along at speeds from 0 to 500 mm/sec, and the wheel output is sampled at a temporal resolution of maybe 100 Hz. Velocity is estimated by taking numbers like .687 and .689 (two successive wheel readings, in meters) and dividing their difference by 0.01 (multiplying by 100) to get 0.2 m/sec. Acceleration is formed by taking two successive velocity estimates, subtracting them, and dividing by the sampling time. As you can see, there is a problem with this when the cart is moving this slowly. The acceleration thus formed has no significant digits left. On a graph it actually looks almost like a random variable, perhaps very slightly biased away from zero.
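A few lines of code reproduce the effect (a synthetic cart at constant speed, 1 mm quantization, 100 Hz; the speed and numbers are illustrative, not the actual lab data):
    import numpy as np

    # Synthetic cart moving at a constant 0.1873 m/s, position read from
    # a wheel with 1 mm resolution, sampled at 100 Hz.
    dt = 0.01
    t = np.arange(0.0, 2.0, dt)
    x_true = 0.1873 * t
    x_meas = np.round(x_true, 3)        # quantize to 1 mm (3 decimals in m)

    v = np.diff(x_meas) / dt            # first difference: velocity
    a = np.diff(v) / dt                 # second difference: acceleration

    print(f"velocity:     mean {v.mean():.3f}, std {v.std():.3f} m/s")
    print(f"acceleration: mean {a.mean():.1f}, std {a.std():.1f} m/s^2 (true: 0)")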
In the case of a rolling cart, of course, we can make certain assumptions about monotonicity and the second order linear nature of the underlying dynamical system that permit us to do better — smooth the data over multiple data points, fit higher order curves to the primary data and differentiate those fits instead of using direct data differences — but those all come at a price in precision and entail numerous assumptions about the distribution of actual errors in the measuring apparatus as well as the underlying data. Some of the results are non-physical — accelerations start to happen well before the change in data that signals e.g. an actual collision. There are no free lunches in data analysis as you are always limited by the actual information content of the data and cannot squeeze a signal out of noise without assuming a knowledge that all too often one does not have.
In any event, I call foul on the AMO/PMO/NMO data. The error bars are completely unbelievable (a few hundredths of a degree of precision in an anomaly of an anomaly, and larger errors in 1970 than in 1880 — really?), and the curves are far, far too smooth and regular.
rgb

Reply to  rgbatduke
February 27, 2015 5:04 am

rgb:
Nicely said. I particularly like your unequivocal and apt phrasing:
“IMO the claims for precision/accuracy in the remote past are absurd — HadCRUT4 claims total error in 1850 is only around 0.4 C for the global anomaly, twice that of 2015, for example. This is absurd, given the importance of things like Pacific Ocean temperatures in any estimate of global temperature or its anomaly and given the simple fact that ENSO was only discovered, named, and subsequently studied in roughly 1893. ”
It all seems to me to be so much magical thinking, false precision and hubris. Even to talk about error bars with such a level of uncertainty and lack of knowledge strikes me as absurd.

kim
Reply to  bernie1815
February 27, 2015 5:49 am

1993.
===

richard verney
Reply to  rgbatduke
February 28, 2015 12:34 am

“This is absurd, given the importance of things like Pacific Ocean temperatures in any estimate of global temperature or its anomaly and given the simple fact that ENSO was only discovered, named, and subsequently studied in roughly 1893.”
////////////////////////////////////
I agree with the point you make regarding past error bars. The reality is that we have no good information on GLOBAL temperatures before the 1930s, and ocean temperatures are riddled with errors and, prior to ARGO, extremely unreliable.
I frequently make the point that we do not know whether, on a global basis, temperatures are warmer today than they were in the 1880s or the 1930s, but as far as the US is concerned it was probably warmer in the 1930s than today. That is the extent of our knowledge.
Whilst I accept that ocean phenomena were beginning to be studied in the late 19th/early 20th century, I consider that the recognition and study of ENSO came a little later than you are suggesting. See http://www.earthgauge.net/wp-content/fact_sheets/CF_ENSO.pdf
“Now well known to scientists, the El Niño-Southern Oscillation (ENSO) was discovered in stages. The term El Niño (“the infant” in Spanish) was likely coined in the 19th century by Peruvian fishermen who noticed the appearance of a warm current of water every few years around Christmas. The cause of the current’s appearance was a mystery to them. In 1899, India experienced a severe drought-related famine, prompting greater focus on understanding the Indian monsoon system, arguably the nation’s most important source of water. In the early 1900’s, the British mathematician Sir Gilbert Walker noticed a statistical correlation between the monsoon’s behavior and semiregular variation in atmospheric pressure over the tropical Pacific. He coined this variation the Southern Oscillation, defined as the periodic shift in atmospheric pressure differences between Tahiti (in the southeastern Pacific) and Darwin, Australia (near Indonesia). It was not until 1969, however, that meteorologist and early numerical weather modeler Jacob Bjerknes proposed that the El Niño phenomenon off the coast of South America and the Southern Oscillation were linked through a circulation system that he termed the Walker circulation. ENSO has since become recognized as the strongest and most ubiquitous source of inter-annual climate variability.”

February 27, 2015 5:10 am

The North Atlantic Ocean’s temperature oscillations, past, present and future, are closely related to the Arctic’s events.
http://www.vukcevic.talktalk.net/NAII.gif
NA ice index: http://www.essc.psu.edu/essc_web/seminars/spring2006/Mar1/Bond%20et%20al%202001.pdf
Arctic GMF : http://www.gfz-potsdam.de/fileadmin/gfz/sec23/data/Models/CALSxK/cals7k2.zip

Michigan
February 27, 2015 8:13 am

The global warming propheteers (or profiteers) leave out one major fact in this “new research.”
Steinman says, “It appears as though internal variability has offset warming over the last 15 or so years,”
However, if natural “internal variability” has caused “the pause” over the past 15 years, how do we know that natural “internal variability” didn’t contribute to the warming in the 15 years leading up to 1998?
That’s the fallacy in the models and in attributing the slight warming leading up to 1998 to an increase of less than 1/100th of 1% in the CO2 level in the overall makeup of the atmosphere. If the temporary cooling has other causes, then the temporary warming up until then could also have other causes.
By the way, where is the global warming we were falsely promised this winter? I have lived in Michigan for 22 years now, and I thought last winter was cold, but this year has been even more brutal.
Dell from Michigan

February 27, 2015 8:45 am

Further to my earlier comment
http://wattsupwiththat.com/2015/02/26/on-steinman-et-al-2015-michael-mann-and-company-redefine-multidecadal-variability-and-wind-up-illustrating-climate-model-failings/#comment-1870168
This paper adds absolutely nothing to our understanding of climate science, and indeed it perpetuates the grossest error of the models on which the IPCC CAGW scare is based. All they do is take a very roundabout route to find the 60-year (+/-) periodicity in the Hadley temperature data, which anyone can see at a glance:
http://3.bp.blogspot.com/-fsZYBCaAYRo/U9aXzNnfWJI/AAAAAAAAAVc/CfFP12Oh438/s1600/HadSST314.jpg
The Hadley peaks are obvious at about 1880, 1940 and 2000. They finally massage their data to produce more or less the same peaks in the red line of their Fig. 3C (see Joelobryan's comment above, 2/26, 9:14 pm).
The same periodicity is seen in the GISS data (their Fig. 3A).
They then attribute these temperature periodicities to “natural internal variability” in the ocean systems, as if giving the model-reality differences another name advances our understanding. Nowhere do they suggest what is driving these variabilities.
As to the future, they simply say the internal-variability-derived cooling trend will reverse in the coming decades. Looking at their own curves, it is easy to conclude that, with the last peak at about 2000, they would expect cooling until 2030 and renewed warming until 2060, presumably as a modulation of the underlying linear model increase, which they would attribute to CO2. However, they perpetuate the modelers’ scientific disaster by ignoring, in all their estimates and attributions, the 1000-year periodicity so obvious in the temperature data; see Figs. 5-9 and the cooling forecasts at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
Again, the model procedure is exactly like taking the temperature trend from, say, January to July and then projecting it forward linearly for 10 years or so. Junk science at its worst.
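The absurdity is easy to demonstrate with a toy seasonal cycle (entirely hypothetical numbers):
    import numpy as np

    # Toy monthly temperatures: a pure seasonal cycle around 10 C,
    # coldest in January (month 1), warmest in July (month 7).
    months = np.arange(1, 8)
    temps = 10.0 - 12.0 * np.cos(2 * np.pi * (months - 1) / 12.0)

    slope, intercept = np.polyfit(months, temps, 1)   # fit Jan-Jul only
    print(f"fitted 'trend': {slope:.1f} C/month")
    future = intercept + slope * (7 + 120)            # 10 years past July
    print(f"'projected' temperature: {future:.0f} C")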

Alx
February 27, 2015 9:25 am

We show that our method gives the correct answer when tested with climate model simulations.

This says it all. It basically says: we test and prove our climate models by running more climate models.
I worked in the insurance sector for a while as an IT program manager. If a development team told me they planned to validate results by comparing individual policy processing with more software they were going to develop, I would know it was time to re-constitute the team. The only way to validate results is to compare against verified results, which meant either someone who understood the required policy processing backwards and forwards or an older system with a proven track record over multiple years.
In the case of climate models, this would mean verifying results against past climate, instead of past climate models. Climate models have rewritten the rules of validation, and now the known or observed is less important than model output.
Which means bookies everywhere should take note: the winner of the next international soccer championship is the team predicted by consensus and statistical modeling to win, not the team that actually wins.
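For the climate case, a bare-bones sketch of what validating against observations (rather than against other model runs) might look like, with hypothetical arrays standing in for real series:
    import numpy as np

    # Hypothetical stand-ins for observed anomalies and a model hindcast
    # over the same years; real validation would use held-back data.
    obs = np.array([0.10, 0.14, 0.05, 0.20, 0.26, 0.18, 0.31])
    model = np.array([0.12, 0.18, 0.15, 0.22, 0.30, 0.28, 0.35])

    rmse = np.sqrt(np.mean((model - obs) ** 2))
    rmse_clim = obs.std()               # trivial "climatology" baseline
    print(f"RMSE {rmse:.3f} C, skill vs climatology {1 - rmse / rmse_clim:.2f}")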

February 27, 2015 9:54 am

All forward modelling will inevitably end up wrong, since two important variables (mentioned in the comment above), and many others besides, are unpredictable. Furthermore, even if one could predict their intensity, the degree and effect of their interaction can be determined only after the event.
To paraphrase Steven Mosher: Climate models are “un needed”.

February 27, 2015 10:58 am

Vukcevic
I would appreciate any comments you may have on my previous comments (see Norman Page at 2/27, 8:45 am) and on the cooling forecasts at the blog post linked earlier.

Reply to  Dr Norman Page
February 27, 2015 11:29 am

Hi Dr. Page
I often read your comments and occasionally look at your blog, but normally do not often comment outside of my ‘comfort zone’.
I just posted this elsewhere:
“Both 10Be and C14 nucleation are strongly modulated by the Earth’s field. Pre-instrumental paleo-magnetic data go back ‘millions’ of years, but the dating is not particularly accurate (+ or – 50 years/millennium; usually carbon dated, a circular judgment!).
Declination/inclination compass readings go back to 1600, magnetometer data to 1840. Magnetometer-obtained data show that the Earth’s field, besides its own independent variability, has a strong 22-year component, much stronger than the heliospheric magnetic field at the Earth’s orbit (implying a common driving force?!). For the above reasons, all estimates of solar activity pre-1600 (when sunspot counts become available) cannot be taken with any degree of certainty.”
I have in the past occasionally commented on the reliability of the 10Be data; it is an opinion which can be taken into account or, as mostly happens, ignored.
regards, mv.

Paul Carter
February 27, 2015 12:45 pm

Not long ago it was claimed that the proof of AGW was that human CO2 was the only explanation for the difference between the models and actual temperature, because all other natural factors had been accounted for. Now they’re claiming that there ARE other factors. This new claim by Steinman et al. means their original ‘evidence’ for AGW was wrong.

February 27, 2015 1:46 pm

Vukcevic, thanks for your reply. I always follow your posts and comments with interest. I agree entirely with your correlation of the detrended NH temperatures with the geo-solar cycle. This clearly shows the 60-year (+/-) periodicity seen in the Hadley temperature data, and it is obviously commensurable with the Saturn/Jupiter lap: 3 x 19.859 = 59.577. This too is commensurable with the 960 (+/-) year cycle: 16 x 59.577 = 953.232, which also equals the USJ lap.
Of course Leif would do his nut at the mention of such “correlations”, but it points to where solar physicists should think about possible connecting processes. I suggest torque and torsion at the tachocline for openers, although I have an uneasy feeling that electromagnetic effects may also be involved.
All I do in my cooling forecasts is simply say that the underlying temperature trend detrended out in your graph is obviously part of the 350-year uptrend of the 960 (+/-) year periodicity seen in the temperature data; see Figs. 5-9 at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
By projecting the 960-year cycle forward from its current peak, climate forecasting appears to me to be reasonably simple and obvious, at least as far as getting into the ballpark is concerned.
The entire IPCC modeling approach, on which the whole UNFCCC circus is based, is simply an example of academic herd instinct, scientific incompetence to the point of stupidity, and an unwillingness to see and use the obvious as the first approach to problems.
Where is your comment on the reliability of the 10Be data? Regards Norman.

Reply to  Dr Norman Page
February 27, 2015 2:22 pm

My comments on WUWT are often linked to THIS short article (you could google the link).
Following illustration shows that some of the long term solar components based on 10Be data are also found in the Earth’s magnetic field variability
http://www.vukcevic.talktalk.net/Stein1-Vuk.gif

Reply to  Dr Norman Page
February 27, 2015 2:30 pm

Here is google selection
https://www.google.co.uk/search?esrch=Agad%3A%3APublic&hl=en-GB&source=hp&q=CET%2610Be.htm&gbv=2&oq=CET%2610Be.htm&gs_l=heirloom-hp.3…67513.67513.1.67722.1.1.0.0.0.0.56.56.1.1.0.msedr…0…1ac.1.34.heirloom-hp..2.0.0.i_L5bkRvhgQ

Reply to  Dr Norman Page
February 27, 2015 2:49 pm

Steinhilber used the Dongge cave (China) record to re-affirm the long-term periodicity of his TSI reconstruction, but I found that the geopolar magnetic field data (http://www.gfz-potsdam.de/fileadmin/gfz/sec23/data/Models/CALSxK/cals7k2.zip) are a more accurate representation than the Steinhilber TSI.
http://www.vukcevic.talktalk.net/Dongge.gif
In my view it is difficult to disentangle what portion of the 10Be is modulated by solar activity and what portion by the Earth’s magnetic field.

February 27, 2015 2:41 pm

Fantastic. I see my favorite (just under) 1000-year periodicity stands out prominently. I will certainly use this graph in future posts if you don’t mind (with proper attribution, of course).

Reply to  Dr Norman Page
February 27, 2015 2:52 pm

You are welcome to it, but the attribution may not do you much good.