On Steinman et al. (2015) – Michael Mann and Company Redefine Multidecadal Variability And Wind Up Illustrating Climate Model Failings

Guest Post By Bob Tisdale

For the past few years, we’ve been showing in numerous blog posts that the observed multidecadal variations in North Atlantic sea surface temperatures (known as the Atlantic Multidecadal Oscillation) are not represented by the forced components of the climate models stored in the CMIP5 archive (the models used by the IPCC for their 5th Assessment Report). We’ve done this using the Trenberth and Shea (2006) method of determining the Atlantic Multidecadal Oscillation, in which global sea surface temperature anomalies (60S-60N) are subtracted from the sea surface temperature anomalies of the North Atlantic (0-60N, 80W-0). As shown in Figure 1, the sea surface temperature data show multidecadal variations in the North Atlantic above and beyond those of the global data, while the climate model outputs, represented by the multi-model mean of the models stored in the CMIP5 archive, do not. (See the post here regarding the use of the multi-model mean.) For simplicity’s sake, we’ll continue to use the North Atlantic as an example throughout this post.
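In code, the Trenberth and Shea (2006) calculation is just a pointwise subtraction of the two anomaly series. A minimal sketch, assuming the monthly anomaly series are already area-averaged over their respective regions (the function name is mine, not from the paper):

```python
import numpy as np

def amo_index(north_atlantic_anom, global_anom):
    """Trenberth & Shea (2006)-style AMO estimate: North Atlantic
    (0-60N, 80W-0) SST anomalies minus global (60S-60N) SST anomalies.

    Both arguments are 1-D monthly anomaly series in deg C, assumed to
    be area-weighted averages over their respective regions."""
    north_atlantic_anom = np.asarray(north_atlantic_anom, dtype=float)
    global_anom = np.asarray(global_anom, dtype=float)
    return north_atlantic_anom - global_anom
```

Applied to observations, the residual retains the North Atlantic variability above and beyond the global signal; applied to the CMIP5 multi-model mean, it is nearly flat, which is the contrast shown in Figure 1.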

Figure 1 (Figure 3 from the post Questions the Media Should Be Asking the IPCC – The Hiatus in Warming.)

Michael Mann and associates have attempted to revise the definition of multidecadal variability in their new paper Steinman et al. (2015) Atlantic and Pacific multidecadal oscillations and Northern Hemisphere temperatures. Michael Mann goes on to describe their efforts in the RealClimate post Climate Oscillations and the Global Warming Faux Pause. There Mann writes:

We propose and test an alternative method for identifying these oscillations, which makes use of the climate simulations used in the most recent IPCC report (the so-called “CMIP5” simulations). These simulations are used to estimate the component of temperature changes due to increasing greenhouse gas concentrations and other human impacts plus the effects of volcanic eruptions and observed changes in solar output. When all those influences are removed, the only thing remaining should be internal oscillations. We show that our method gives the correct answer when tested with climate model simulations.

It appears their grand assumption is that the outputs of the climate models stored in the CMIP5 archive can be used as a reference for how surface temperatures should actually have warmed…when, as shown as an example in Figure 1, climate models show no skill at simulating the multidecadal variability of the North Atlantic. (There are posts linked at the end of this article that show climate models are not capable of simulating sea surface temperatures over multidecadal time frames, including the satellite era.)

Let’s take a different look at what Steinman et al. have done. Figure 2 compares the model and observed sea surface temperature anomalies of the North Atlantic for the period of 1880 to 2014. The data are represented by the NOAA ERSST.v3b dataset, and the models are represented by the multi-model mean of the climate models stored in the CMIP5 archive. Both the model outputs and the sea surface temperature data have been smoothed with 121-month filters, the same filtering used by NOAA for their AMO data.
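The 121-month filter is a centered running mean spanning 121 months (roughly a decade). A sketch of one way to apply it, assuming a 1-D monthly series; returning NaN for the first and last 60 months is my simplification, not necessarily NOAA’s exact endpoint handling:

```python
import numpy as np

def smooth_121_months(series):
    """Centered 121-month running mean (the filter length NOAA uses
    for its AMO index). The 60 months at each end, where the window
    does not fit, are returned as NaN rather than padded."""
    series = np.asarray(series, dtype=float)
    window, half = 121, 60
    out = np.full(series.shape, np.nan)
    if series.size >= window:
        kernel = np.ones(window) / window
        out[half:series.size - half] = np.convolve(series, kernel, mode="valid")
    return out
```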

Figure 2

As illustrated, the data indicate the surfaces of the North Atlantic are capable of warming and cooling, over multidecadal periods, at rates that are very different from those of the forced component of the climate models. The forced component is represented by the multi-model mean. (Once again, see the post here about the use of the multi-model mean.) In fact, the surfaces of the North Atlantic warmed from about 1910 to about 1940 at a rate that was much higher than hindcast by the models. They then cooled from about 1940 to the mid-1970s at a rate that was very different from that of the models. Not too surprisingly, as a result of their programming, the models then align much better with the data during the period after the mid-1970s.

Steinman et al., according to Mann’s blog post, have subtracted the models from the data. This assumes that all of the warming since the mid-1970s is caused by the forcings used to drive the climate models. That’s a monumental assumption when the data have indicated the surfaces of the North Atlantic are capable of warming at rates that are much higher than those of the forced component of the models. In other words, they’re assuming that the North Atlantic since the mid-1970s has not once again warmed at a rate that is much higher than that forced by manmade greenhouse gases.

What Steinman et al. have done is similar to subtracting an exponential curve from a sine wave…where the upswing in the exponential curve aligns with the last minimum-to-maximum swing of the sine wave…without first establishing a relationship between the two totally different curves.
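That analogy is easy to make concrete with a toy calculation; the curves and parameters below are purely illustrative and not taken from the paper:

```python
import numpy as np

# Toy version of the analogy: subtract an exponential "trend" from a
# sine wave whose final upswing happens to coincide with the
# exponential's rise. No physical relationship between the two curves
# has been established; the subtraction is purely mechanical.
t = np.linspace(0.0, 4.0 * np.pi, 500)
sine = np.sin(t)                      # stand-in for a multidecadal oscillation
expo = np.exp((t - t[-1]) / 3.0)      # stand-in for a forced trend rising at the end
residual = sine - expo                # the "internal variability" that remains

# The residual is dragged strongly negative at the end of the record:
# the sine's last upswing has been attributed entirely to the exponential.
```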

MICHAEL MANN PRESENTED A CLEAR INDICATION OF HOW POORLY CLIMATE MODELS SIMULATE MULTIDECADAL SURFACE TEMPERATURE VARIABILITY

I had to laugh when I saw the following illustration presented in Michael Mann’s blog post at RealClimate. I assume it’s from Steinman et al. In it, the simulated surface temperatures of the North Atlantic, the North Pacific and the Northern Hemisphere (represented by the multi-model mean of the CMIP5-archived models) have been subtracted from the data. That illustration clearly shows that the climate models in the CMIP5 archive are not capable of simulating the multidecadal variations in the sea surface temperatures of the North Atlantic and the North Pacific or in the surface temperatures of the Northern Hemisphere.

Figure 3  Illustration from RealClimate Post

In other words, that illustration presents model failings.

If we were to invert those curves, by subtracting reality (data) from computer-aided speculation (models), the resulting differences would show how greatly the models have overestimated the warming of the North Pacific and Northern Hemisphere in recent years.

What were they thinking? That we’d let that go by without calling it to everyone’s attention?

Thank you, Michael Mann and Steinman et al. (2015). You’ve made my day.

OTHER REFERENCES

We’ve illustrated and discussed how poorly climate models simulate sea surface temperatures in the posts:

For more information on the Atlantic Multidecadal Oscillation, refer to the NOAA Frequently Asked Questions About the Atlantic Multidecadal Oscillation (AMO) webpage and the posts:

CLOSING

Some readers might think that Steinman et al. is nothing more than misdirection, a.k.a. smoke and mirrors. What do you think?

Thanks to blogger “Alec aka Daffy Duck” for the heads-up.


135 thoughts on “On Steinman et al. (2015) – Michael Mann and Company Redefine Multidecadal Variability And Wind Up Illustrating Climate Model Failings”

    • Important aspects of the AMO variability ignored by the climate models:
      The North Atlantic decadal and multidecadal oscillation AMO (de-trended N. Atlantic SST) can be successfully explained and numerically represented by solar-geomagnetic interactions.
      http://www.vukcevic.talktalk.net/GSCp.gif
      The N. Hemisphere’s climate is under the control of the polar and sub-tropical jet-streams, whereby the long-term zonal-meridional positioning of the jet streams depends on the extent and strength of three primary cells (Polar, Ferrel and Hadley).
      http://www.srh.noaa.gov/jetstream//global/images/jetstream3.jpg
      Since the equatorial temperature changes little, it is the Arctic temperature which moves the jet streams’ latitudinal location.
      Solar magnetic activity reaches the Earth’s poles in the form of geomagnetic storms. NASA: “a two-hour average sub-storm releases total energy of five hundred thousand billion (5 x 10^14) Joules. That’s approximately equivalent to the energy of a magnitude 5.5 earthquake”
      This takes the form of an electric current ionising the upper layers of the atmosphere, whereby the atmospheric flow is affected by the changes in the resultant magnetic field (Lorentz law). The Earth’s field (i.e. the magnetosphere’s shielding) is not constant (the internal oscillations are due to the core’s differential rotation – see J. Dickey, JPL).
      The strength of the solar incursions is modulated by the interaction of the two fields; since it is strongest at the poles, the effect on the polar vortex and subsequently the Arctic’s jet stream would be strongest.
      The geomagnetic effect is also clearly demonstrated in the Arctic temperature up-trend and its multidecadal oscillations, with a correlation factor R2 > 0.8.
      http://www.vukcevic.talktalk.net/AT-GMF1.gif
      Inevitable conclusion must be:
      It is the sun!

      • Vuk,
        Encyclopedia Americana. Danbury, CT: Grolier, 1995: 532.
        By today’s standards the two bombs dropped on Japan were small — equivalent to 15,000 tons of TNT in the case of the Hiroshima bomb and 20,000 tons in the case of the Nagasaki bomb.
        In international standard units (SI), one ton of TNT is equal to 4.184 × 10^9 joules (J)
        Hiroshima bomb TNT 15000
        ton-TNT to Joules 4.18E+09
        Joules total 6.276E+13
        a two-hour average sub-storm releases total energy of five hundred thousand billion (5 x 10^14) Joules. That’s approximately equivalent to the energy of a magnitude 5.5 earthquake”
        5.00E+14 / 6.276E+13 = 7.97 in the correct ‘climate science’ equivalency.
        Please revise your 5.5 earthquakes to 8 Hiroshima bombs so all of us will then understand the power.
        Good post by the way.
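        For anyone who wants to check the equivalency arithmetic in the comment above, a quick calculation (figures as quoted in the comment):

```python
# Quick check of the equivalency arithmetic quoted above.
TON_TNT_JOULES = 4.184e9                    # SI definition of a ton of TNT
hiroshima_joules = 15_000 * TON_TNT_JOULES  # ~6.276e13 J
substorm_joules = 5.0e14                    # NASA sub-storm figure quoted above

ratio = substorm_joules / hiroshima_joules
print(round(ratio, 2))                      # prints 7.97, i.e. about 8 Hiroshima bombs
```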

      • Is it the sun?
        Svalgaard: “As the magnetospheric ring current and the auroral electrojets and their return currents that are responsible for geomagnetic activity have generally North-South directed magnetic effects (strongest at night), the daytime variation of the Y or East component is a suitable proxy for the strength of the SR ionospheric current system.”
        But what does all this have to do with the AMO?
        Data show a direct correlation of the AMO to the Y or East component
        http://www.vukcevic.talktalk.net/GMEC-AMO.gif
        at latitude 60N, the home of the polar jet-stream
        http://www.srh.noaa.gov/jetstream//global/images/jetstream3.jpg

      • BFL hi,
        The polar jet stream’s trajectory (as far as I can ascertain) appears to be swung from ‘zonal’ to ‘meridional’ direction by the geomagnetic field just west of Hudson Bay (affected by geomagnetic storms) and by the Icelandic Low, the semi-permanent atmospheric system. In the winter it is located to the south-west of Greenland, controlled by the Atlantic drift current down-welling, but in the summer months it moves to the north of Iceland.
        Let’s assume that for some reason (e.g. the warm Arctic current inflow across the Greenland-Scotland ridge is weakened, leading to an increase in the summer sea ice) the northern summer down-welling may cease. In such a case the polar jet stream may get stuck in a strong meridional flow for many years. The process would be self-reinforcing through positive feedback. Great Lakes ice would persist through the summer months, providing initial conditions for the onset of the next Ice Age.
        The jet stream’s controlling factor can be clearly deduced from the fact that during the last glacial, most if not all of Siberia was free of the ice sheet while N. America was under an incredible 3,000 meters of ice.
        http://www.qpg.geog.cam.ac.uk/research/projects/englishchannelformation/1453389260_3dcecb561c.jpg
        Illustration is from the University of Cambridge, thus we can assume it is the best knowledge available.

    • I have a question, since someone is on a “Witch Hunt”: who funded the research paper? Is there a past conflict of “interest”? If so, then the authors should be banned from submitting any research paper and fired from their jobs.

  1. Subtracting the models from the data only changes the sign of the result. Either way it shows that the models have completely failed to capture the oscillations. Steinman et al. must have applied some serious smoothing to the differences in Figure 3.

    • Yeah, Fig 3 looks almost more like a theoretical sketch than actual data, as much as it has been smoothed. The “uncertainties” for the temperature impacts of the PDO since 2000 are basically non-existent, too, lol.
      Remember when the acolytes used model runs with “only natural forcings” such as sunlight and volcanoes and compared them to model runs which included GHGs, to demonstrate that the only way to get temperature variations the likes of which we were observing is to have GHG forcings included? And now that they need a convenient excuse for the pause, “internal variability” is invoked, lol.

  2. Mann and others are providing the “it’s worse than we thought” excuse about how it’s REALLY going to warm when the “pause” ends.
    Forget that they poo-poo’d any idea that natural variability could be so dramatic. It somehow can cancel all the warming from the current high levels of CO2 in the atmosphere but was claimed to be basically negligible back when CO2 levels were lower.
    It’s amazing that they are given any credibility at all anymore.

    • Thus we see how amazingly flexible “Settled Science” can be… 🙂
      Let’s wait and see how much more flexible Mann & Co will grow when “The Pause” finally ends differently than they hope, namely by shifting to a colder rather than a warmer climate… 😉

      • Our hapless friend Michael Mann is stuck in the web of lies of his own making. I want to see him scrambling some more for the straws he needs.

    • Well, according to the Guardian (where I am currently banned from posting after querying how the UEA CRU fossil fuel funding was different from Dr Soon’s),

      Steinman said the new work was a substantial step forward and employed state-of-the-art climate models that previous studies on the subject had not used.

      So it’s state of the art models then.
      Makes more sense than any state of science.

    • The gospel of global warming has been made into the bible of Common Core, a must-know-and-follow 13th commandment. Never mind the reality of the world tilting into an overdue return of the Ice Age, showing its fangs with a full display of ferocity. Michael Mann is riding the same wave of opportunistic deception as a certain Trofim Lysenko, a favorite of Stalin and recipient of his most coveted medals.

  3. This post shows a fundamental misunderstanding about how climate models work. You would never expect climate models to all simulate multidecadal variability in the same phase and magnitude, simply because these are process-based models that have their own synoptics and various forcings as input. This is why taking a large ensemble should average out all the natural variability, leaving only the forced response. However, the forced response in the models is underestimated because of the lack of updated forcings for the historical runs – e.g. many (almost all) models assume no volcanic forcing past 2005, whereas there’s strong evidence of a moderate forcing. Unless you have the forcings correct, the approach of detrending using the models will cause some attenuation of the actual signal.

    • Yeah, that’s it. Climate scientists have been looking for every possible explanation for “the pause,” and they haven’t found a way to account for volcanic forcing over the past decade. What exactly is the time lag in determining volcanic activity, and when can we expect to see it added to the models? Why didn’t the peer-reviewers catch that? Won’t it make Steinman et al. (2015) just as silly when these “moderate forcings” are added?

      • Not if you are selling a product that requires precise parameters to be believed.
        But in the real world…..yes

    • Robert Way

      This is why taking a large ensemble should average out all the natural variability, leaving only the forced response. However, the forced response in the models is underestimated because of the lack of updated forcings for the historical runs – e.g. many (almost all) models assume no volcanic forcing past 2005, whereas there’s strong evidence of a moderate forcing. Unless you have the forcings correct, the approach of detrending using the models will cause some attenuation of the actual signal.

      What “strong evidence” of “a moderate forcing” (since 2005)? There has been NO change in atmospheric clarity since the atmosphere cleared in 1993-94 after the last major eruption!
      http://www.esrl.noaa.gov/gmd/webdata/grad/mloapt/mlo_transmission.gif
      How can you (anyone) insert a moderate forcing into the solution to a problem (CO2 has risen strongly but atmospheric temperatures have not changed) when the “symptom” of the moderate volcanic forcing (a dirtier atmosphere) is entirely absent?

    • Robert Way: This is why taking a large ensemble should average out all the natural variability leaving only the forced response.
      The ensemble mean is an unbiased estimate of a population defined from the model, its variants, and the parameter estimates with their uncertainties. But what has that population mean got to do with nature? We know that the components of the model are supposed to be based on physics, but some parameters are more or less guessed at. Almost all the model runs to date are higher than the data that they might have predicted and might be tested against, and the mean is significantly discrepant from the data. The claim that the ensemble mean is the “forced response” is not supported by anything more than the hope that it is.
      Steinman et al essentially show that if the ensemble mean is assumed to be an accurate representation of the “forced response”, then the PDO, AMO, and NMO can be redefined to “correct” the model and bring it close to the data. The circularity is obvious. The result is important only if a body of independent evidence (independent of the model) can be collected confirming that these are good estimates of the PDO, AMO, and NMO.
      This is explained in their supporting online material.

    • Robert Way, I think model variability is a better term to use than natural variability in your post above.

    • “This is why taking a large ensemble should average out all the natural variability leaving only the forced response.”
      The average of fantasy being fixation.

    • Robert Way says: “This post shows a fundamental misunderstanding about how climate models work.”
      There’s no misunderstanding on my part, Robert. You obviously have misunderstood what was presented in this post. I’ve illustrated in simple terms what Steinman et al. have done. You, on the other hand, have failed to grasp Steinman et al, as evidenced by your comments at RealClimate and Michael Mann’s replies to you.
      Robert Way says: “You would never expect for climate models to all simulate multidecadal variability in the same phase and magnitude simply because these are process-based models that have their own synoptics and various forcings as input.”
      You have low expectations, Robert. Maybe the modelers need to change their expectations too. Because what they’re giving us is crap. Maybe the modelers need to initialize their hindcasts/forecasts from actual conditions, so that they can prove the models are capable of simulating Earth’s climate. Otherwise, it appears to many of us that the modelers avoid initializing from existing conditions because the models would fail in those efforts.
      As illustrated and discussed in the post, climate modelers are assuming the latest upswing in surface temperatures is caused by manmade greenhouse gases, etc., when the instrument temperature records and climate models indicate the surfaces of the Northern Hemisphere, the North Atlantic, and North Pacific are capable of warming at rates that are comparable to the recent warming without being forced to do so.
      What I find remarkable, Robert, is that the climate science community expects us to believe their climate model-based projections of future climate when the models have shown no skill at simulating past climate…and when, as Trenberth reminded us back in 2007, climate models are not simulating Earth’s climate as it exists now or as it has existed at any time in the past.
      Others have responded to the rest of your comment about volcanic aerosols.
      Regards

    • So, averaging a bunch of wrong guesses leads, a priori, to the correct guess? Sweet – almost like magic!

      • Hey … maybe I should generate several tax forms all showing the government owes me huge rebates and then average them. That way I can claim it is reality and the government should not be able to object …. right? 😉

    • Climate models can be useful in understanding how one discrete component influences climate and how large that influence is, but how it stacks up against other influences, and the chaotic relationship between the influences, is unknown. To claim otherwise is to be dishonest, ignorant, or delusional.
      I can, in my sink, determine how much of an alkaline substance to add to a volume of water to change its alkalinity to a certain level. I can then calculate how much alkaline substance is required to change the oceans by the same amount and subsequently use a model to forecast multiple time lines, using various amounts and various substances, to determine when the predicted alkalinity change will occur. Simple math and chemistry, right? Yes, but with no relationship to reality. Oceans are much more complex than simulations that artificially add various alkaline substances to them over time. A simple example, but this is how climate modelers using limited knowledge make extravagant claims disassociated from reality.
      The fundamental misunderstanding is climate modelers having little understanding of how little their climate models simulate.

    • Robert, if models work as you describe, they are no more representative of the climate than the programmer’s ability to think up inputs and their forcing effects. In short, if the programmer thinks CO2 forcing effect is 4 degrees per doubling, an ensemble of runs will ‘predict’ that because it is built in.
      Another programmer who thinks that CO2 doubling will lead to 1.0 degree of warming will find, all said and done, that their ensemble of runs gives 1.0 as the answer.
      Your description is correct, and that is why I don’t believe the output from the models. If they operated isolated from and were not tuned using the past, and agreed with the past, I could believe their forecasts of future temperatures. In fact they are trained using the past, and are unable to predict the present “out of sample”. That is a fatal failure, in my book.
      Only by incorporating a pretty comprehensive set of solar and geomagnetic inputs will they come close to replicating the past and present. When that happens (and it will) the influence of CO2 will be seen, at its present level and rate of change, to be quite a minor, though real, influence.

    • Robert,
      I agree with the first half of your comment, but I don’t follow your claim that the forced response in models is underestimated due to omission of post 2005 volcanic forcing (which anyway seems a minor factor). Surely its omission leads to models [further] overestimating the forced response?
      BTW, I thought your comments at RealClimate made very good points. I’m glad you realise what rubbish the Steinman paper is. And well done for bringing up the Booth paper – which is certainly relevant to the natural AMO vs aerosol N Atlantic previous cooling debate – and the devastating critique of it by Zhang et al.

  4. Yes, it’s just the models’ variance from reality…but if you smooth it and make it look pretty, you have some of the humps and dips in the right places and can pretend it’s meaningful.
    Can you imagine using these curves to represent “internal variability” and trying to get something published in 2000 arguing that a measurable amount of the observed warming from 1980 to 1990 (eyeball says 0.2 deg C…0.1 from AMO, 0.05 from each of the two others) was due to “internal variability?”

  5. I can just see Michael Mann showing Figure 1 to a group at the IPCC, touting the model predictions, looking like Jim Carrey in “Ace Ventura: Pet Detective” as he points to the figure and says “LLLLLLIKE a GLOVE!”

  6. Mann: “We show that our method gives the correct answer when tested with climate model simulations.”
    There is something wrong with those people. I don’t know what the name for it is, but there’s something wrong with them. They substitute their software-outputs for reality.
    I do not say this as an insult: I am merely describing what I observe. Over and over and over again, when purporting to verify an idea, they compare the idea, not to real-life observations, but to their software-outputs.
    I worked in software for two decades. I programmed mostly for business applications, accounts receivable, payables, payroll, inventory, and the like. I cannot imagine writing software that does not have to be checked against something in the real world.
    But Mann et al. actually seem to believe that their computer simulations are more real than reality. Isn’t that why they check their new ideas against software-outputs?
    So, I conclude there is something wrong with those people. Is there a name for what’s wrong with them? Or are my observations about them incorrect? I ask these questions quite sincerely.

    • The answer is they believe in “consensus reality”, not “objective reality”.
      In other words ‘make believe’ is more real to them than truth, call it wishful thinking.

    • I read the same sentence and came to the conclusion that the problem is they believe the “correct answer” is what the model simulations say it is without regard to any reality.
      All hail the mighty models.

    • I do not know what it is called, but it is related to this precept: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.” Upton Sinclair.

    • Disconfirmed expectancy. Other than buying a book about it, Wikipedia covers it. It explains why the cult of global warming denies that temps have stopped rising, why they have become shrill and attack people who point out their failed prophecy, their need to have followers prop up their beliefs, and the attitude that even if global warming is a hoax, the actions taken will benefit the earth. It explains a lot, in my opinion.

    • “But Mann et al. actually seem to believe that their computer simulations are more real than reality. Isn’t that why they check their new ideas against software-outputs?” Mike knows where his bread comes from, and will do what needs to be done for him to survive another day and be paid for it, and for tomorrow as well. It has nothing to do with any science in the Newtonian understanding of it.

  7. I agree this post shows a fundamental misunderstanding of climate models. The point of Steinman et al. is that there is an underlying trend (called AGW), with natural variability on top of it. What we have seen during the “faux pause” is natural variability that offsets the continued upward trend. As soon as the natural cycle is on the upswing, we will see a “double whammy” effect and rapid warming.

    • You mean that “natural variability” that was ignored by climate scientists in the past? Where was that in their previous model work? Oh wait, they obviously didn’t have a clue how the climate actually works but you now want us to believe they’ve had a revelation and have been blessed with divine knowledge. Yeah, what flavor kool-aid are you imbibing?

    • ” … an underlying trend (called AGW)” which tends to zero, “with natural variability on top of it.” There that now makes sense.

    • Barry, Barry, Barry. Really? A “double whammy”? It sounds more like magic than science. The models are a complete fail on every level. Now that they (climate alarmists) have discovered ocean cycles, they need to learn about them. The earth will still be their teacher. The AMO will turn, and they will not like it.

    • Barry, nice try at misdirection, but it didn’t work.
      You wrote, “The point of Steinman et al. is that there is an underlying trend (called AGW), with natural variability on top of it.”
      The flawed assumption in Steinman et al. is that the models properly simulate the “underlying trend (called AGW)”, when they’ve shown no skill at simulating those temperatures.

    • Barry, then model the natural variability. Your assertion implies that you know what the natural variability is. Natural variability wasn’t discussed much before the models couldn’t match history, now it’s the explain all for the difference. You’re just making it all up as you go along.

    • Barry from sks says: ” As soon as the natural cycle is on the upswing, we will see a “double whammy” effect and rapid warming.”
      I have to be honest and say I wish Barry was correct. A warmer Earth with more CO2 for the initiation of life would be how I would choose to live the remainder of my life, and how I’d like to leave the planet for my descendants. Unfortunately, it looks like we’re headed for a repeat of the 1970’s, or even worse, the 1870’s.

    • Barry- You write “What we have seen during the “faux pause” is natural variability that offsets the continued upward trend.”
      Hansen vociferously disagreed with you, as recently as 2003:
      “As we shall see, the small forces that drove millennial climate changes are now overwhelmed by human forcings.”
      Hansen et al., 2003 activist bulletin, Columbia University
      But then the gobsmacking pause initiates a rethink from The Hansen:
      “The longevity of the recent protracted solar minimum, at least two years longer than prior minima of the satellite era, makes that solar minimum potentially a potent force for cooling,” Hansen and his co-authors said.
      Hansen et al., “Earth’s energy imbalance and implications”, 2011 activist report.
      “The 5-year mean global temperature has been flat for a decade, which we interpret as a combination of natural variability and a slowdown in the growth rate of the net climate forcing…The annual increment in the greenhouse gas forcing (Fig. 5) has declined from about 0.05 W/m2 in the 1980s to about 0.035 W/m2 in recent years.”
      Hansen et al, 2013 activist bulletin, Columbia University
      Still waiting for Hansen’s rethink on the dead-certain anthropogenic interpretation of warming from 1970 – 2000….

  8. From their supporting online material: Regression Method
    To calculate the AMO, PMO, and NMO we 1) regressed the observed mean temperature series onto the model derived estimate of the forced component, 2) estimated the forced component of observed variability using the linear model from step 1, then 3) subtracted the forced component from the observations to isolate the internal variability component.

    Everything rides on their model-based redefinition of AMO, PMO, and NMO.
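    The three quoted steps amount to an ordinary least-squares detrending against the model mean. A minimal sketch, with 1-D annual series assumed; the variable and function names are mine, not the paper’s:

```python
import numpy as np

def internal_variability(observed, forced_model):
    """Sketch of the regression method quoted above:
    1) regress observations onto the model-derived forced component,
    2) take the fitted line as the estimated forced component of the
       observations, and 3) subtract it, leaving the claimed
       'internal variability'."""
    observed = np.asarray(observed, dtype=float)
    forced_model = np.asarray(forced_model, dtype=float)
    slope, intercept = np.polyfit(forced_model, observed, 1)  # step 1
    forced_estimate = slope * forced_model + intercept        # step 2
    return observed - forced_estimate                         # step 3
```

    Note that whatever the fitted model mean fails to capture is, by construction, labeled internal variability; that is the circularity at issue in this thread.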

  9. Derivative science: set assumptions as fact, derive impact, conclude results correct.
    Computationally correct, representationally unknown.

  10. They also write, in the SOM:
    The AMO, PMO, and NMO amplitudes are seen to be unusually large with the
    detrending approach (Fig. S5A). Particularly striking are the very large positive trends in
    the AMO and NMO at the end of the series, which were indeed predicted (Figs. 2,S2–S4)
    as structural artifacts of the method. The root mean square (RMS) amplitude of the NMO
    is 0.14oC, more than twice the simulated amplitude of the hemispheric multidecadal
    variability from Knight et al. (3). The AMO and PMO have estimated amplitudes of
    0.15oC and 0.09oC, respectively, and show high levels of apparent correlation with each
    other (R2=0.563, lag = 0, statistically significant at p=0.05 level for a one-sided test—see
    next section for details about the associated calculation). The AMO, PMO, and NMO
    collectively give the appearance of a “stadium wave” pattern (18,19), wherein each varies
    coherently but at variable relative lag.
    Our regional regression approach yields AMO, PMO, and NMO series that are
    dramatically different from those obtained with the detrending approach. Absent now are
    the very large positive trends in the AMO and NMO near the end of the series. The
    amplitude of the NMO (0.07°C using CMIP5-All) is half that inferred from the
    detrending approach. Unlike with the detrending approach, the maximum lagged
    correlation between the AMO and PMO (R2=0.334 lag = 3) is no longer statistically
    significant.

    Consequently, their climate model plus the natural variability yields the observed “faux pause”, with the model-based redefinition of AMO, PMO, and NMO.

  11. What I see in Fig 3 is that the AMO and PDO were phase locked until 1995. After that the PDO peaked and then really accelerated downward from 2002 onward while the AMO continued up. In other words they are no longer phase locked.
    That realization says that the warming of the 80’s and 90’s was entirely natural, not “model described CO2 forcing”. Since 2002, the divergence between AMO and PDO has kept temps generally flat, save for the occasional mild La Nina or El Nino of the past 13 years. With that firmly in hand, it says CO2 forcing is lost in the noise of the natural variability of the AMO and PDO tracking in and out of phase over many decades.

  12. If only people understood: mixing, tempering and/or “correcting” actual figures is never allowed in theories of science.
    And if one wants to make a computer model, one needs to take ALL, not just some, of the needed parameters AND analyse each one’s premises one by one.

  13. I am glad someone really familiar with the AMO and PDO is on this. I have not got a copy of the article yet. I did check the first author’s prior research. He does not seem to have been involved in this area up until now, which strikes me as very odd. Mann, on the other hand, cut his teeth on the AMO. Something does not feel right about this. It also took a while for this paper to get to press. Given that it addresses a hot topic, the reasons for the delay may be instructive.

    • I’d love to see the first draft. I’m guessing the author tried to use the real AMO and PDO and the result was devastating to the claims of future warming. Hence, the other authors came along and they created the fictional AMO and PMO that would have no effect on future warming, since they are derived from models with predetermined warming.

  14. Mr. Tisdale writes:
    “In other words, they’re assuming that the North Atlantic since the mid-1970s has not once again warmed at a rate that is much higher than forced by manmade greenhouse gases.”
    No more damning criticism of a scientist can be made. And Mr. Tisdale made the criticism stick.
    Warmist Climate Science has never been anything if not top down. The standard argument form is Circular Reasoning: hide your conclusion in your premises.

    • What’s funny is I do not even think they are hiding their conclusion in their premises.
      What’s perplexing is why they were not called out on it during peer review.

  15. In the meantime, the AMO has crashed as Gray opined it would. While it was not known as the AMO back in the 1970s, Gray opined (back in the late 1970s) that we would go into a warming period from the mid 1990s till about 2020 and then start back down. He nailed it before all this became twisted by people pushing an agenda and coming to conclusions based on it.
    I would love for one of these climate scientists who tell me after the fact why it happened to, just once, forecast something like we are seeing now with the AMO. It’s no different than the guy on Monday morning telling you why a team won or lost a game, or how he could have done better.

    • Joe, would you do a Saturday video on the relationship between the PDO and AMO and how the +/- works? Thx.

    • The place I work has a winter betting pool, where we all guess how the winter will turn out: Max snow, min temp, number of days below zero, etc. I base my “guesses” on the weatherbell.com Saturday summary videos and have been kicking butt against the “scientists” I work with. They think my success is akin to the secretary winning the football pool basing her choices on uniform colors. Maybe next year I’ll let them in on my “secret weapon”.

  16. Thanks, Bob.
    What were they thinking? They were thinking we have been thoroughly Grubberized and will believe anything they feed us.
    Smoke is real, mirrors too, Steinman et al. (2015), not at all.

    • I thought the models “trained” on the years up to 2005, and after that is when the models made future predictions. Does anyone know if this is correct?

      • The CMIP5 protocol was published in 2009 and revised in 2011; in table 1, ensembles 1.1 and 1.2 were decadal and 3-decadal hindcasts back from 2005. So the parameterizations were tuned to best fit roughly 1975 to 2005. This is evident from the stuff Ed Hawkins posted in 2013 (use Google Images). The least divergence between CMIP5 and observed temperature (he used HadCRUT4) is exactly that period.

  17. The Steinman, Mann and Miller paper recognizes that there are serious differences between the NH, N Pacific and N Atlantic SST model runs and the observed temperatures. The authors are particularly concerned to explain the recent “Pause”. They subdivide the ocean system into three separate regional components which they label AMO, NMO and PMO (somewhat redefined versions of the AMO, NAO and PDO).
    Basically what the paper does is to calculate the differences between models and observations and then attribute the difference to an unexplained “internal variability” in the ocean temperatures. The authors conclude that internal multidecadal variability in NH SST accounts for the discrepancy between models and observations, and that it also likely offset anthropogenic warming over the last decade. They add that this effect will reverse (at some unspecified date) and add to anthropogenic warming in coming decades.
    The AMO PMO and NMO curves in their figure 3c show, more or less, the well known 60 year periodicity in the temperature data. see Figs 15 and 16 at
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
    In other words they are trying to improve the models and save the model forecasts by adding to them
    the effects of the PDO ,AMO and NAO.
    Unfortunately they continue to make the egregious schoolboy error of tuning their models to only about 120 years of data when the main periodicity is millennial (Figs 5-9 at the link). The recent pause is more accurately described as a cooling since 2003, which date represents a peak in both the 60-year and 1000-year periodicities. I estimate that the cooling trend of the millennial cycle will reverse in about 2650, as opposed to in the coming decades. See the peak at
    http://www.woodfortrees.org/plot/rss/from:1980.1/plot/rss/from:1980.1/to:2003.6/trend/plot/rss/from:2003.6/trend
    That the Steinman et al. paper got through peer review for Science Magazine says much about the current state of establishment science. However, in a short comment on the paper in the same Science issue, Ben Booth of the Hadley Centre does sound a refreshingly cautionary (for Science Mag and Hadley) note, saying that the paper is only useful if the current models accurately represent both the external drivers of past climate and the climate responses to them, and that there is reason to be cautious in both of these areas. This comment is an encouraging sign that empirical reality may finally be making an impression on the establishment consciousness. If the expected sharp cooling in 2017-2018 suggested by the drop in the Ap index and Neutron Monitor data in Figs 13 and 14 of the post linked above actually occurs, it should just about finish off the whole CAGW meme.

  18. Here is Fig 3 from Steinman et al.
    http://i60.tinypic.com/6z0tvo.jpg
    Note: In Panel A, the phase-locked nature of the oscillations must have contributed mightily to the warming from 1975 to 1995. The authors concede the Pause occurred only when the oscillations went negative after 2000.
    They wrote in conclusion (my bold):

    “Our findings have strong implications for the attribution of recent climate changes. We find that internal multidecadal variability in Northern Hemisphere temperatures (the NMO), rather than having contributed to recent warming, likely offset anthropogenic warming over the past decade. This natural cooling trend appears to reflect a combination of a relatively flat, modestly positive AMO and a sharply negative-trending PMO. Given the pattern of past historical variation, this trend will likely reverse with internal variability instead, adding to anthropogenic warming in the coming decades.”

    What a load of BS: In the coming decades the AMO and PDO will both be negative, and grand cooling, along with a possible quiet solar magnetic period, spells quite a bit of trouble for mankind in the next 20 years. And a death knell for CAGW.

  19. The models do not contain the effect of oceanic oscillations (A). Therefore the observed temperature record (B) and the model mean (C) can be used to estimate the internal variability (A):
    A = B – C
    This seems to me what Steinman et al 2015 is all about. The paper claims that the model “back projections” can be used to demonstrate a loose fit between the warming and cooling caused by prior oceanic oscillations.
    Why did they not combine the effects of oceanic oscillations and observed warming (B-A)? Since the range in A seems to be about 0.4 degrees Celsius and the warming (B) seems to have been around 0.7 degrees Celsius, this ought to give a non-trivial result. And C would be derived by a purely empirical method, would it not?
    Then C = B - A. In effect, this approach would tell us what the output of the model should have been based on observations of the real world. Any model or combination of models could be tested against C to estimate the fit between observations and the models. C could be compared with Csubn, where Csubn is the output from model n.
    Moreover, if every model were tested by deriving the difference between C and Csubn, then a new multi-model statistic could be derived that would represent the multi-model best estimate of what should be observed.
    I conclude, based on this thought experiment that what is wrong with Steinman et al 2015 is that the authors have treated the output from the models as being equivalent to observations.
    The study design is fatally flawed.

    • “Why did they not combine the effects of oceanic oscillations and observed warming (B-A)?”
      Doing so would vividly highlight the model error, e. Your C should be subtracted from Csubn to give e: Csubn - C = e. If Steinman et al. were adherents of science, this is what they would have done; it is what scientists do who want to assess the validity of models.
      That Steinman chose the opposite, and attempted to redefine observation to fit the model failures, exposes them as pseudo-scientists. Along with their fellow pseudo-scientist editors at Science Mag, what they have done is ultimately to further destroy the reputation of science.
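    The bookkeeping in comment 19 and this reply can be written out directly. A minimal numeric sketch, using made-up illustrative series rather than anything from the paper:

```python
import numpy as np

years = np.arange(1880, 2015)

# Made-up illustrative series (not data from Steinman et al.):
A = 0.2 * np.sin(2 * np.pi * (years - 1880) / 60)   # oceanic oscillations
B = 0.005 * (years - 1880) + A                      # observed warming

# Steinman et al.'s direction: take the model mean C as given and infer A = B - C.
# The empirical direction proposed above: take an independent estimate of A
# as given, and derive what the forced component should have been:
C_empirical = B - A

# Any individual model n can then be scored against it:
C_model_n = 0.007 * (years - 1880)   # a hypothetical model's forced component
e = C_model_n - C_empirical          # the model error the reply calls e
```

    With real series the interesting question is whether e is small; here, by construction, it is a steadily growing trend error.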

  20. So it’s not a real pause when something natural makes the temperature pause, it’s only a pause when human activities cause a pause. So natural pauses don’t count, and neither do natural warming factors.
    Someone might look back at this one day and shake their head.

  21. Some poignant quotes from an article about it in The Australian-
    ‘FORCES of natural climate variability have caused the apparent slowdown in global warming this century but the effect will be temporary, according to new research.
    Byron Steinman, of the University of Minnesota Duluth, and Michael Mann and Sonya Miller, of Pennsylvania State University, found that these natural, or “internal”, forces had recently been offsetting the rise in global mean surface temperature caused by increasing greenhouse gas concentrations.
    They published their results in the latest edition of the American journal Science.
    The deceleration in global mean surface temperature this century despite rising greenhouse gas levels has fuelled the climate wars.
    Greenhouse sceptics have seized on it as evidence that the Intergovernmental Panel on Climate Change and other scientists adopting the “consensus” view have exaggerated the risk of global warming.
    Some research, including studies by Australian scientists, suggests that an increase in the heat taken up by the deeper waters of the Pacific as well as a pronounced strengthening of Pacific trade winds in recent years due to natural climate variability is responsible.
    The team used modelling results from the big international science program, the Coupled Model Intercomparison Project phase 5, to estimate the externally forced component – mainly due to human activity – of northern hemisphere temperature readings since 1854.
    “We subtracted this externally forced component from the observational data to isolate the internal variability in northern hemisphere temperatures caused by the Atlantic multidecadal oscillation and a component of the Pacific decadal oscillation,” Professor Steinman told The Australian. (These natural climate systems are defined by temperature patterns across the oceans and influence climate globally.)
    “This showed that the current slowdown is being driven largely by a negative internal variability trend in the Pacific,” he said.
    He said the negative shift had been counteracting some of the anthropogenic warming.
    “In coming decades, the trend will likely reverse and accelerate the increase in surface temperatures,” he said.’
    Welcome to post-normal science: “In coming decades the trend will likely…” [fill in whatever takes your fancy, folks]
    Idiots!

  22. Remember the process- Global Warming morphs to Climate Change morphs to Extreme Weather Events and now morphs to Forces of Natural Climate Variability morphs to The Trend Will Likely…???
    Woohoo roll the drums and sound the trumpets! Victory over the skeptics and deniers at long last.

  23. Bob
    I mentioned SST and the period between 1910 and 1940 before in one of your other posts. From looking at how the Met Office addressed bucket corrections (i.e. with lots of assumptions and one cursory experiment performed 20-odd years ago) I don’t know if that rise in temperature is “real”.
    The original data was adjusted so the change was less marked but I’m wondering if even this is an under-estimation of bucket bias. There may not be that much of a temperature change in that period. Or it may even be as warm if not warmer in the early 20th century than now?
    If anyone has time you can check out the uncertainty description in the HADSST data sets. The bucket measuring technique hasn’t been characterised fully which leaves a lot more uncertainty on the table. They don’t address measurement process to a degree that a good scientist and engineer should which makes me wonder what were temperatures really doing back then.
    At any rate, a good post Bob.

  24. The following article appeared on the MSN home page for a few hours, discussing the Steinman et al. article plus an article on how long the pause would last. I thought that it was kind of interesting to see that the chance of the “pause” lasting 20 years was only 1%. My question is: what happens when the pause lasts another 15 years (on top of the ~18 years it has already lasted), as it has in the historical past? I wonder what the odds of that happening are?
    http://www.msn.com/en-us/weather/topstories/scientists-now-know-why-global-warming-has-slowed-down-and-it%e2%80%99s-not-good-news-for-us/ar-BBhZW8r
    The more I learn, the less I know.
    Dan Sage

  25. There is an underlying trend caused by humans: global dimming until 1980 and then global brightening. Aerosols used to cool; now they do not, in the NH at least.

  26. Is this just a reworking of Mann, Steinman and Miller (2014), replacing output from Energy Balance Model(s) with that from GCMs? With the same ‘models too good to reject’ premise of the previous paper, as described by Matthew Marler above. Comprehensively examined by Nic Lewis at the time, as I recall.
    Is ‘Science’ short of climate papers?

    • Science is All In on the pseudo-science that is ACO2=CAGW. Marcia McNutt & Co. seem to have forgotten, as Mann and Steinman have here, that model outputs are not observations. And most importantly, they want to bend reality to fit the model, because the model promises boatloads of research funding.

  27. If there is, indeed, an underlying warming trend caused by GHGs, then it is a very low trend, certainly less than the theory predicts.
    If the explanation for the pause, is natural variability, then that variability also exists in the past as well. The “natural” extension would be to pull that past variability out of the past temperatures and arrive at the underlying GHG trend. Go back to 1880, estimate the trend. Theory has to be re-written.
    How come Michael Mann, Trenberth, and Foster and Rahmstorf stop short of carrying out the natural extension? You know what? They have done exactly that, but have chosen not to present the results, because it says the Theory has to be rewritten.

  28. Hi Bob,
    I think you are being far too kind when you accept the notion that the MME mean is, in fact, a meaningful quantity. Indeed, calling the collection of models in CMIP5 a “statistical ensemble” is a bit of a travesty all by itself. They are not independent and identically distributed samples drawn from a distribution of perfectly correct models (plus noise). They fail in this on almost every specific criterion in the statement. They are not independent — they share code, history, and many of them were written as minor variants of the same program by a single federal agency that is funded at phenomenal levels because of what they predict, project, prophesy, whatever. They are not identically distributed objects generated by a random process unless dice were used at some point during the actual construction of the models (as opposed to within the models in some sort of Monte Carlo), although sometimes the code itself might look as though it was written by mad monkeys armed with dice. Nor are they drawn from a distribution of perfectly correct models plus errors that are collectively free from bias. Rather, since they share so much actual code and so many of the same limitations, they are almost certainly not collectively free from bias, including systematic bias introduced by shared errors in the dynamical assumptions and physics.
    Finally, the process that they are modelling is not a linear, or even a well-behaved, process. The models being averaged individually fail to come close to replicating the dynamical spatiotemporal scales visible in the real world data. That is, they have the wrong autocorrelation, the wrong amplitude of fluctuation, and a generally diverging envelope of possible future trajectories per model, where some of the models included are so obviously incorrect that it is difficult to take them seriously but they are all included anyway because the worst models show the most warming and without that, even the MME mean would be far closer to reality and far less “catastrophic”.
    Two other comments. Your curves above, for the most part, present lines as if those lines are “the” temperatures being presented. In actual fact, those lines all come with error bars. In a sane computational universe, those error bars would start at a substantial level in the present (HadCRUT4’s acknowledged total error is around 0.2 C in the present) and would increase to many times the present error in the increasingly remote past. IMO the claims for precision/accuracy in the remote past are absurd — HadCRUT4 claims total error in 1850 is only around 0.4 C for the global anomaly, twice that of 2015, for example. This is absurd, given the importance of things like Pacific Ocean temperatures in any estimate of global temperature or its anomaly and given the simple fact that ENSO was only discovered, named, and subsequently studied in roughly 1893. In 1850 vast tracts of the Earth were terra incognita, not inhabited or systematically measured by Europeans wielding even indifferent thermometric instrumentation let alone the comparatively high precision instrumentation of the present. If the best HadCRUT4 can manage is halving the total error claimed in 1850 with the vast collection of modern thermometers at their current disposal, including the entire ARGO array, there is something seriously wrong, and yet 0.2 C seems quite reasonable for a current error estimate, possibly generous given that HadCRUT4 does not, apparently, correct for certain systematic biases such as the UHI effect.
    Still, decorating the lines that appear on your graphs with even error estimates that fail a statistical common-sense sanity check and are probably seriously underestimated is better than presenting the lines themselves as if they are free from error, or as if error is confined to the width of the drawn lines. Yes, it makes the graphs messier, but without them the graphs are potentially meaningless.
    I cannot emphasize this point enough, because it is pervasive in public presentations of climate science. It also leads to my second comment. In the graph above of the AMO, PMO, and NMO, the displayed errors are truly absurd. As I noted above, ENSO was only discovered in 1893, and expeditions to study it were subsequently launched. Perhaps by 1900 it and the PMO were being observed in a reasonably systematic way by scientists, although at the time they doubtless had to launch “expeditions” to do so and I’m quite certain that the record is sparse and incomplete well into the 20th century. In contrast, the Atlantic was heavily trafficked and surrounded on nearly all sides by ports with cities and thermometers. Yet it is the NMO that has the large, apparently diverging error bars in the graph above, followed by the AMO (both errors exploding pre-1900) while the PDO was, apparently, known then to better precision in 1880 than it was in 1970.
    Say what?
    A second point to make is that these curves supposedly are the result of double differences. By that I mean that they are the result of data that has twice had a “mean” background behavior subtracted out. The first time is when actual thermometric data has some base value subtracted (as if this base value, often the result of a local average over some modern-era reference period, is known to infinite precision) to form the “anomaly”. The second time is when the global anomaly, either surface temperature or sea surface temperature, is subtracted out to discover the (A,N,P) multidecadal “oscillation”. From the sound of it, the curves above have a third subtraction, the CMIP5 MME mean.
    There are rules for compounding precision. If I subtract two big numbers — such as (for example) 288.54 and 288.17 to make a small number, 0.37 — the small number loses three significant figures of precision. If I subtract two big numbers uncertain at some level — such as (for example) 288.54 \pm 0.2 and 288.17 \pm 0.2, the result is (in crude terms) 0.37 \pm 0.4, which basically means that we have no idea what the result is. If we then take two of these numbers: 0.37 \pm 0.4 and 0.32 \pm 0.4 and subtract them, we get something like 0.05 \pm 0.8.
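    The compounding rule is easy to check numerically. A small sketch: the worst-case bound simply adds the uncertainties (the crude arithmetic above), while independent errors would instead add in quadrature:

```python
import math

def subtract(x, sx, y, sy):
    """Difference of two uncertain values: (value, worst-case error, quadrature error)."""
    return x - y, sx + sy, math.hypot(sx, sy)

# First difference: the "anomaly".
anom, worst, quad = subtract(288.54, 0.2, 288.17, 0.2)
# anom ~ 0.37, worst = 0.4, quad ~ 0.28: comparable to the signal either way.

# Second difference: the anomaly of anomalies.
diff2, worst2, quad2 = subtract(0.37, 0.4, 0.32, 0.4)
# worst2 = 0.8, quad2 ~ 0.57: either way the 0.05 result carries no information.
```

    Even under the optimistic independence assumption, the second difference is swamped by its own uncertainty.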
    These are serious problems. I’m using a very simple lab device to teach physics at the moment that has a wheel — actually I think it is a mouse wheel — that measures how far a cart travels at roughly 1 mm of precision. One then has to estimate things like velocity and acceleration from this mm scale data. In a typical experiment, the cart moves along at speeds from 0 to 500 mm/sec, and the wheel output is sampled at a temporal resolution of maybe 100 Hz. Velocity is estimated by taking the difference of numbers like .687 and .689 m (two successive wheel readings) and dividing by 0.01 (multiplying by 100) to get 0.2 m/sec. Acceleration is formed by taking two successive velocity estimates and subtracting them (and dividing by the sampling time). As you can see, there is a problem with this when the cart is moving this slowly. The acceleration thus formed has no significant digits left. It actually looks almost like a random variable perhaps very slightly biased from zero on a graph.
    In the case of a rolling cart, of course, we can make certain assumptions about monotonicity and the second order linear nature of the underlying dynamical system that permit us to do better — smooth the data over multiple data points, fit higher order curves to the primary data and differentiate those fits instead of using direct data differences — but those all come at a price in precision and entail numerous assumptions about the distribution of actual errors in the measuring apparatus as well as the underlying data. Some of the results are non-physical — accelerations start to happen well before the change in data that signals e.g. an actual collision. There are no free lunches in data analysis as you are always limited by the actual information content of the data and cannot squeeze a signal out of noise without assuming a knowledge that all too often one does not have.
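    The cart arithmetic is easy to reproduce. A hypothetical sketch, assuming a slow cart under constant acceleration and a wheel that quantizes position to 1 mm at 100 Hz:

```python
import numpy as np

dt = 0.01                                # 100 Hz sampling
t = np.arange(0.0, 2.0, dt)
true_pos = 0.5 * 0.05 * t**2             # constant 0.05 m/s^2, starting from rest

# The wheel reports position rounded to the nearest millimetre:
measured = np.round(true_pos * 1000.0) / 1000.0

v = np.diff(measured) / dt               # first difference: velocity estimate
a = np.diff(v) / dt                      # second difference: acceleration estimate

# A 1 mm step divided by 0.01 s twice gives +/-10 m/s^2 jumps, so the true
# 0.05 m/s^2 signal is buried roughly two orders of magnitude below the noise.
```

    This is why lab software smooths or fits the primary data before differentiating, at the price in precision and assumptions described above.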
    In any event, I call foul on the AMO/PMO/NMO data. The error bars are completely unbelievable (a few hundredths of a degree of precision in an anomaly of an anomaly; larger errors in 1970 than in 1880, really?), and the curves are far, far too smooth and regular.
    rgb

    • rgb:
      Nicely said. I particularly like your unequivocal and apt phrasing:
      “IMO the claims for precision/accuracy in the remote past are absurd — HadCRUT4 claims total error in 1850 is only around 0.4 C for the global anomaly, twice that of 2015, for example. This is absurd, given the importance of things like Pacific Ocean temperatures in any estimate of global temperature or its anomaly and given the simple fact that ENSO was only discovered, named, and subsequently studied in roughly 1893. ”
      It all seems to me to be so much magical thinking, false precision and hubris. Even to talk about error bars with such a level of uncertainty and lack of knowledge strikes me as absurd.

    • “This is absurd, given the importance of things like Pacific Ocean temperatures in any estimate of global temperature or its anomaly and given the simple fact that ENSO was only discovered, named, and subsequently studied in roughly 1893.”
      ////////////////////////////////////
      I agree with the point you make regarding past error bars. The reality is that we have no good information on GLOBAL temperatures pre the 1930s, and ocean temperatures are riddled with errors and prior to ARGO extremely unreliable.
      I frequently make the point that we do not know whether, on a global basis, temperatures are warmer today than they were in the 1880s or the 1930s, but as far as the US is concerned it was probably warmer in the 1930s than today. That is the extent of our knowledge.
      Whilst I accept that ocean phenomena were beginning to be studied in the late 19th/early 20th century, I consider that the recognition and study of ENSO was a little later than you are suggesting. See http://www.earthgauge.net/wp-content/fact_sheets/CF_ENSO.pdf
      “Now well known to scientists, the El Niño-Southern Oscillation (ENSO) was discovered in stages. The term El Niño (“the infant” in Spanish) was likely coined in the 19th century by Peruvian fishermen who noticed the appearance of a warm current of water every few years around Christmas. The cause of the current’s appearance was a mystery to them. In 1899, India experienced a severe drought-related famine, prompting greater focus on understanding the Indian monsoon system, arguably the nation’s most important source of water. In the early 1900’s, the British Mathematician Sir Gilbert Walker noticed a statistical correlation between the monsoon’s behavior and semiregular variation in atmospheric pressure over the tropical Pacific. He coined this variation the Southern Oscillation, defined as the periodic shift in atmospheric pressure differences between Tahiti (in the southeastern Pacific) and Darwin, Australia (near Indonesia). It was not until 1969, however, that meteorologist and early numerical weather modeler Jacob Bjerknes proposed that the El Niño phenomenon off the coast of South America and the Southern Oscillation were linked through a circulation system that he termed the Walker circulation (see image right). ENSO has since become recognized as the strongest and most ubiquitous source of inter-annual climate variability.”

  29. The global warming propheteers (or profiteers) ignore one major significant fact in this “new research”.
    Steinman says, “It appears as though internal variability has offset warming over the last 15 or so years,”
    However, if natural “internal variability” has caused “the pause” for the past 15 years, how do we know that natural “internal variability” didn’t contribute to the warming of the previous 15 years leading up to 1998?
    That’s the fallacy in models attributing the slight warming leading up to 1998 to a less-than-1/100th-of-1% increase in the CO2 level in the overall makeup of the atmosphere. If the temporary cooling has other causes, then the temporary warming up until then could also have other causes.
    By the way, where is the Global Warming we were falsely promised this winter? I have lived in Michigan for 22 years now, and I thought last winter was cold, but this year has been even more brutal.
    Dell from Michigan

  30. Further to my earlier comment
    http://wattsupwiththat.com/2015/02/26/on-steinman-et-al-2015-michael-mann-and-company-redefine-multidecadal-variability-and-wind-up-illustrating-climate-model-failings/#comment-1870168
    This paper adds absolutely nothing to our understanding of climate science and indeed perpetuates the grossest error of the models on which the IPCC CAGW scare is based. All they do is take a very roundabout route to find the 60-year +/- periodicity in the Hadley temperature data, which anyone can see at a glance:
    http://3.bp.blogspot.com/-fsZYBCaAYRo/U9aXzNnfWJI/AAAAAAAAAVc/CfFP12Oh438/s1600/HadSST314.jpg
    The Hadley peaks are obvious at about 1880, 1940 and 2000. They finally massage their data to produce more or less the same peaks in the red line in their Fig 3C; see Joelobryan’s comment above, 2/26, 9:14 pm.
    The same periodicity is seen in the GISS data – their FIg 3A
    They then attribute these temperature periodicities to “natural internal variability” in the ocean systems, as if this advances our understanding by giving the model-reality differences another name. Nowhere do they suggest what is driving these variabilities.
    As to the future, they simply say the internal-variability-derived cooling trend will reverse in the coming decades. Looking at their own curves it is easy to draw the conclusion that, with the last peak at about 2000, they would expect cooling until 2030 and renewed warming until 2060. Presumably that would be a modulation of the underlying model linear increase, which they would attribute to CO2. However, they perpetuate the modelers’ scientific disaster by ignoring in all their estimates and attributions the 1000-year periodicity so obvious in the temperature data; see Figs 5-9 and the cooling forecasts at
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
    Again, the model procedure is exactly like taking the temperature trend from, say, January to July and then projecting it forward linearly for 10 years or so. Junk science at its worst.

  31. We show that our method gives the correct answer when tested with climate model simulations.

    This says it all. Basically, it says: we test and prove our climate models by running more climate models.
    I worked in the insurance sector for a while as an IT program manager. If a development team told me they planned on validating results by comparing individual policy processing against more software they were going to develop, I would know it was time to re-constitute the team. The only way to validate results is to compare against verified results – either someone who understood the required policy processing backwards and forwards, or an older system with a proven track record over multiple years.
    In the case of climate models, this would mean verifying results against past climate, not against past climate models. Climate models have rewritten the rules of validation: the known or observed is now less important than model output.
    Which means bookies everywhere should take note: the winner of the next international soccer championship is the team predicted by consensus and statistical modeling to win, not the team that actually wins.

  32. All forward modelling will inevitably end up wrong, since two important variables (mentioned in the comment above), and there are many others, are unpredictable. Furthermore, even if one can predict their intensity, the degree and effect of their interaction can be determined only after the event.
    To paraphrase Steven Mosher: Climate models are “un needed”.

    • Hi Dr. Page
      I often read your comments and occasionally look at your blog, but I do not often comment outside of my ‘comfort zone’.
      I just posted this elsewhere:
      “Both 10Be and C14 nucleation are strongly modulated by the Earth’s field. Pre-instrumental paleomagnetic data go back ‘millions’ of years, but dating is not particularly accurate, + or – 50 years per millennium (usually carbon dated, a circular judgment!).
      Declination/inclination compass readings go back to 1600, magnetometer data to 1840. Magnetometer data show that the Earth’s field, besides its own independent variability, has a strong 22-year component, much stronger than the heliospheric magnetic field at the Earth’s orbit (implying a common driving force?!). For the above reasons, no estimate of solar activity pre-1600 (the start of sunspot count availability) can be taken with any degree of certainty.”
      I have occasionally commented in the past on the reliability of the 10Be data; it is an opinion which can be taken into account, though as it happens it is mostly ignored.
      regards, mv.

  33. Not long ago it was claimed that the proof of AGW was that human CO2 was the only explanation for the difference between the models and actual temperature, because all other natural factors had been accounted for. Now they’re claiming that there ARE other factors. This new claim by Steinman et al. means their original ‘evidence’ for AGW was wrong.

  34. Vukcevic, thanks for your reply. I always follow your posts and comments with interest. I agree entirely with your correlation of the detrended NH temperatures with the geo-solar cycle. This clearly shows the 60-year (+/-) periodicity seen in the Hadley temperature data and is obviously commensurable with the Saturn/Jupiter lap: 3 × 19.859 = 59.577 years. This in turn is commensurable with the 960-year (+/-) cycle, 16 × 59.577 = 953.232 years, which also equals the USJ lap.
    Of course Leif would do his nut at the mention of such “correlations”, but it points to where solar physicists should think about possible connecting processes. I suggest torque and torsion at the tachocline for openers, although I have an uneasy feeling that electromagnetic effects may also be involved.
    All I do in my cooling forecasts is simply say that the underlying temperature trend, detrended out in your graph, is obviously part of the 350-year uptrend of the 960-year (+/-) periodicity seen in the temperature data – see Figs 5-9 at
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
    By projecting the 960-year cycle forward from its current peak, climate forecasting appears to me to be reasonably simple and obvious – at least as far as getting into the ballpark is concerned.
    The entire IPCC modeling approach, on which the whole UNFCCC circus is based, is simply an example of the academic herd instinct and of scientific incompetence to the point of stupidity: an unwillingness to see and use the obvious as the first approach to a problem.
    Where is your comment on the reliability of the 10Be data? Regards Norman.
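    As a side note, the commensurability arithmetic in this comment can be checked in a few lines. This is a minimal sketch of the numbers only, not an endorsement of the correlation: 19.859 years is the Jupiter-Saturn synodic ("lap") period cited above, and the ~960-year figure works out as sixteen triple-laps (16 × 59.577), not 16 × 19.859:

    ```python
    # Sanity check of the commensurability arithmetic cited in the comment above.
    JS_LAP = 19.859  # years; Jupiter-Saturn synodic "lap" period as given there

    triple_lap = 3 * JS_LAP        # three successive laps: ~59.577 years
    long_cycle = 16 * triple_lap   # sixteen triple-laps: ~953.232 years

    print(round(triple_lap, 3))   # close to the quoted 60-year periodicity
    print(round(long_cycle, 3))   # close to the quoted 960-year periodicity
    ```

    Note that 16 × 19.859 would be only 317.744 years, so the 953.232 figure only follows if the multiplier is applied to the 59.577-year triple-lap.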

  35. Fantastic – I see my favorite (just under) 1000-year periodicity stands out prominently. I will certainly use this graph in future posts if you don’t mind (with proper attribution, of course).
