From AGU EOS —Terri Cook, Freelance Writer
Was the Recent Slowdown in Surface Warming Predictable?
From the early 2000s to the early 2010s, there was a temporary slowdown in the large-scale warming of Earth’s surface. Recent studies have ascribed this slowing to both internal sources of climatic variability—such as cool La Niña conditions and stronger trade winds in the Pacific—and external influences, including the cooling effects of volcanic and human-made particulates in the atmosphere.
Several studies have suggested that climate models could have predicted this slowdown and the subsequent recovery several years ahead of time—implying that the models can accurately account for mechanisms that regulate decadal and interdecadal variability in the planet’s temperature. To test this hypothesis, Mann et al. combined estimates of the Northern Hemisphere’s internal climate variability with hindcasting, a statistical method that uses data from past events to compare modeling projections with the already observed outcomes.
The team’s analyses indicate that statistical methods could not have forecast the recent deceleration in surface warming because they can’t accurately predict the internal variability in the North Pacific Ocean, which played a crucial role in the slowdown. In contrast, a multidecadal signal in the North Atlantic does appear to have been predictable. According to their results, however, its much smaller signal means it will have little influence on Northern Hemisphere temperatures over the next 1 to 2 decades.
This minor signal in the North Atlantic is consistent with previous studies that have identified a regional 50- to 70-year oscillation, which played a more important role in controlling Northern Hemisphere temperatures in the middle of the 20th century than it has so far this century. Should this oscillation reassume a dominant role in the future, argue the researchers, it will likely increase the predictability of large-scale changes in Earth’s surface temperatures.
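The hindcasting approach the article describes, training a statistical model only on data available before a cutoff date and then scoring its forecast against the observations that followed, can be sketched in a few lines. This is a toy illustration with synthetic numbers, not the paper's actual method or data:

```python
# Minimal hindcast sketch: fit a simple statistical model on anomalies up
# to a cutoff year, "forecast" the held-out years, and score the forecast
# against the observations that were held back. All values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2015)
# Synthetic "observed" anomalies: trend + multidecadal wiggle + noise
obs = (0.015 * (years - 1950)
       + 0.1 * np.sin(2 * np.pi * (years - 1950) / 60)
       + rng.normal(0, 0.05, years.size))

cutoff = 2000                      # train only on data before this year
train = years < cutoff
coeffs = np.polyfit(years[train], obs[train], 1)   # simple linear model
forecast = np.polyval(coeffs, years[~train])       # "predict" 2000-2014

rmse = np.sqrt(np.mean((forecast - obs[~train]) ** 2))
print(f"hindcast RMSE over {cutoff}-2014: {rmse:.3f} °C")
```

The real study uses far more elaborate semiempirical estimates of internal variability, but the train-then-score logic is the same.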
Paper:
Predictability of the recent slowdown and subsequent recovery of large-scale surface warming using statistical methods
Authors
Michael E. Mann, Byron A. Steinman, Sonya K. Miller, Leela M. Frankcombe, Matthew H. England, Anson H. Cheung
(Geophysical Research Letters, doi:10.1002/2016GL068159, 2016)
Abstract
The temporary slowdown in large-scale surface warming during the early 2000s has been attributed to both external and internal sources of climate variability. Using semiempirical estimates of the internal low-frequency variability component in Northern Hemisphere, Atlantic, and Pacific surface temperatures in concert with statistical hindcast experiments, we investigate whether the slowdown and its recent recovery were predictable. We conclude that the internal variability of the North Pacific, which played a critical role in the slowdown, does not appear to have been predictable using statistical forecast methods. An additional minor contribution from the North Atlantic, by contrast, appears to exhibit some predictability. While our analyses focus on combining semiempirical estimates of internal climatic variability with statistical hindcast experiments, possible implications for initialized model predictions are also discussed.

“The temporary slowdown … has been attributed to both external and internal sources of climate variability”? Please explain. Also, “semiempirical estimates” sounds like a get-out-of-jail card to me!
“semiempirical”? Ain’t no such animal.
Trump is going to have to appoint a CAGW Debunker, as one of his first moves in Office.
Who would be best for this job?
Ted Cruz???
Can’t be done. You’ll never “debunk” a religious belief system. If the cAGW folks were in any way influenced by scientific method, they’d have gone back to the drawing boards long ago.
All anyone can do is ignore them. The temptation, from the perspective of any politician, would be to cater to them as a “gimme” demographic, just as you might cater to any other religious group. If Trump is actually trying to change things, he’ll just ignore them. Any energy spent on de-programming is just wasted energy.
@Bartleby. I understand what you are saying. Indeed you can probably never destroy a dogma or orthodoxy like CAGW in the minds of the diehard believers. It can only just die away eventually if that is going to happen at all.
The idea in my comment below is to severely damage or destroy the credibility of the alarmists in the minds of the majority of the American people and the majority in Congress. I am thinking that CREDIBILITY should be the issue here rather than trying to kill off CAGW altogether. If that can be done, then I think the alarmists will not have the power to push their agenda.
So you don’t think an intervention would work on these CAGW people? You’re probably right.
Oh but there’s something much better you can do to them – it’s called “defunding.” And if they object, just remind them that THEY said the “science was settled,” and that there is therefore no further need to fund more of it.
If I were in Trump’s shoes, I would assemble a panel of the best skeptic scientists (maybe 5 or 10) I could find and have them prepare a presentation containing all the falsifying scientific evidence. Then I and my scientists would go on national television and present the evidence to the American people. Then challenge the alarmist scientists to debate all the evidence in a followup presentation.
John Christy and Richard Lindzen immediately come to mind (if they were interested). They would all probably have to take a leave of absence from their academic positions.
Yes, but not a one-time thing, and not stopping with falsifying evidence, but also presenting what is known about alterations to historical data, station deletions, UHI contamination, homogenization, the history of failed predictions, the way headline banners differ from actual reported findings in a great many instances, and the whole rest of the bamboozle, in a series of reports and hearings.
I doubt you will find anyone to take the proCAGW side in such a debate in such a situation…they will not even do that in a public debate with the political machine staunchly on their side.
That’s what I want: A real, public debate on the issue. I want to watch the Climate Change Charlatans try to defend what they have done.
@Menicholas. Absolutely agree. All the funny business that has been going on at NOAA, NASA and elsewhere needs to be exposed (and, if necessary, prosecuted) as well. And yes, this needs to be a long-term ongoing campaign of severely damaging or destroying the credibility of the alarmists, as I said above.
Why not start by showing Climate Hustle nationwide?
CD – Yes, but with a small change: assemble a panel of the best scientists, rather than the best sceptic scientists. OK, it will be a lot more difficult to get an agreed result, but if only sceptic scientists are used then the warmists will find it much easier to oppose. OK, it will be very difficult indeed to find a warmist scientist prepared to participate, but there is a precedent: when Anthony W published a paper recently (https://wattsupwiththat.files.wordpress.com/2015/12/agu-poster-watts-website-release.pdf), a well-known warmist, John Nielsen-Gammon, was one of the co-authors. The benefit was two-fold: (a) J N-G would make absolutely sure that the paper was unbiased and fully substantiated, and (b) the paper had credibility with both ‘sides’. I absolutely applaud J N-G for participating in that paper. If only others had the guts to do so, there would be a chance of ending the toxic debate.
Mann: The pause was not predictable – BS – the AMO/PDO has shown up 4 times in the historical records since the 1750s. Mann’s explanation shows how climate scientists are utterly dishonest.
Or genuinely ignorant of earth history and geology and physical geography and historical data…etc…
“Not predictable” by junk models is what I think he is implying
We must all adhere to contaminated, upside down, and bad (and advised against) proxies.
It smells of aggravation between paleo reconstruction and modelers.
If Mann said modelers are not scientists, I would agree 100%.
Incompletely and, in fact, insufficiently characterized, and unwieldy. Ergo: chaos.
No matter how plausible the circumstantial evidence, the scientific domain does not tolerate forecasts outside of a limited frame of reference, let alone predictions or prophecies about either the future or the past.
If it couldn’t be predicted, then the science isn’t settled.
The very first sentence of the abstract says “temporary”. That in and of itself is a prediction that he should argue for, as he has already stated the models got it wrong, yet he believes in the long run they are right. Well, what constitutes short and long for his paper?
I know it is illegal for a government employee, such as a State Department employee, to delete emails from a government server, because the emails are government property.
Is it illegal for NASA and NOAA government employees to delete surface temperature data? Isn’t that government property also?
Nick Stokes May 11, 2016 at 7:19 pm
“Why end it in 2010?”
They didn’t. Your quote says “early 2010s”, not 2010. In fact, they took data to 2014.
___________________________
This needs to be called out for the classic case of subterfuge that it is. By being unspecific, they give the impression that the time span is shorter than it is. By phrasing it as “early 2000s to early 2010s”, the impression given to the casual reader is of a period of about a decade. Does science no longer require precise reporting of data?
It would be equally reasonable for them to have said from the beginning of the millennium to nearly half way through the most recent decade. This would have been just as fair a description of the time frame, would have given an impression of a much longer time frame, and would be equally unscientific. The clever wording is deliberate obfuscation, as no one reporting actual scientific results would phrase it just that way.
Nick, you’re frequently right. But in this case you’re defending a clear attempt to obscure the magnitude of the error in the models being swept under the rug.
“Nick, you’re frequently right. But in this case you’re defending a clear attempt to obscure the magnitude of the error in the models”
Well, I try to improve correctness. In this case it was said that data ended 2010, which needed improvement.
But there is no issue here of error in models. They are just describing the duration of an observed slowdown. The precision is appropriate to the vagueness of “slowdown”. If you look at GISS, for example, the longest period of zero slope is Nov 2004 to Jan 2014. You can push the start back, getting small positive slopes, which probably qualify as a slowdown. Jan 2001 to Jan 2014 has slope 0.576 °C/cen. Is that a slowdown?
I don’t think early 2000’s to early 2010’s is unreasonable – certainly not a subterfuge.
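For readers following this trend argument, the slope being quoted is just an ordinary least-squares fit over a chosen window, converted to °C per century. A minimal sketch with synthetic anomalies (not the actual GISS series) shows how such a number is computed, and why the chosen start and end dates matter so much:

```python
# OLS trend over a chosen window, expressed in °C/century. The anomaly
# series here is synthetic noise around a small trend; the point is only
# how the quoted slope is obtained, not what the real GISS value is.
import numpy as np

def trend_per_century(dates_yr, anomalies):
    """Least-squares slope of anomalies (°C) vs decimal year, times 100."""
    slope, _intercept = np.polyfit(dates_yr, anomalies, 1)
    return slope * 100.0

# Synthetic monthly anomalies, Jan 2001 - Jan 2014
t = np.arange(2001.0, 2014.0 + 1 / 24, 1 / 12)
rng = np.random.default_rng(1)
anom = 0.005 * (t - 2001) + rng.normal(0, 0.1, t.size)

print(f"{trend_per_century(t, anom):.3f} °C/century over 2001-2014")
```

Shifting the window start or end by even a year, especially across an El Niño or La Niña, can move a slope this small substantially, which is the heart of the cherry-picking complaint.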
“Jan 2001 to Jan 2014 has slope 0.576 °C/cen. Is that a slowdown?” Yes Nick, start with a La Niña and end after a La Niña.
If you start in 2001 you end in 2012.
Cheeky boy, what is your trend then? Stop cherry-picking, lad; 2002 to 2014 would be accurate and sensible.
Well, I try to improve correctness. In this case it was said that data ended 2010
Yes. Because it was intended to be read that way by a casual reader, and in this case it was. The subterfuge worked precisely as intended. You yourself exposed that subterfuge by pointing out that the data in fact went to 2014. Then, after having exposed the subterfuge yourself, you claim that it is reasonable to describe the data that way, and hence it isn’t subterfuge.
In science, it is NEVER reasonable to state data in such a vague manner when the specific data (as you yourself were quick to point out) is available. You cannot have it both ways Nick. Subterfuge or not, it isn’t science.
But there is no issue here of error in models.
And let’s call that statement out for what it is as well. It is a concerted effort to excuse the poor performance of the models without admitting that they are in error. Subterfuge.
“Because it was intended to be read that way by a casual reader”
Scientific papers aren’t written for casual readers. They are written for people who pay attention. But “early 2010s” is a common phrasing. And it doesn’t mean 2010.
Oh come on! This is the same guy that excoriated the entire temperature history of climate on Earth by flattening natural variability and tacking on a vertiginous and decidedly unnatural warming that climbed – whipped by forces of the anthropogenic kind – to the heavens!
Now this same dick is proposing to flatten** that very vertical with the non-existent natural variability that he so graphically* demonstrated to the world didn’t exist!
*Very, very famous hockey-stick graph
**Or at least, take the edge off…
Nick, I believe you might benefit from using these guys’ method; a bit more accurate since the time of B. Franklin: http://time.com/4001563/old-farmers-almanac-predictions-accuracy/
This is a result of two generations being given no critical or cognitive skills. Carrot-on-a-stick education: be a good boy, accept dogma, and you will progress. “Chase that carrot, you donkeys.”
Science is a dirty game, just as dirty as the corporate world, just as dirty as politics or athletics, and for once the public get to see that.
davidmhoffer:
‘Yes. Because it was intended to be read that way by a casual reader, and in this case it was. The subterfuge worked precisely as intended.’
YES – that is exactly it – and it is exactly how the alarmism is accomplished – an ostensibly true statement that is phrased specifically to give the wrong – i.e. ALARMIST – impression.
I call it a lawyer’s trick – and I’ve also compared it to Lucifer’s methods – lying with a statement that is literally true.
This reads like a weeping pus-filled wound.
Nick is too disingenuous in my humble opinion.
Besides, all that time arguing global average temperature is wasted time, and it seems thousands of science hours have been sh!t down the bowl.
And in the end, we still don’t even have an agreement on the temperature record or temperatures after 30 years of this nonsense.
30 years fussing over something that is as worthless to climate science as ash is to a fire.
An after the fact residue, which is what the global average temp is, is an effect not a driver.
Besides, no trend in water vapor surely shows the atmosphere is not seeing an increase in heat transport by evaporation. No troposphere warming relative to surface warming, either.
There is no signal in precipitation either, no signal in ocean alkalinity, no signal in weather.
I wish this nonsense would just up and die already, dangerous AGW is complete junk science
‘Nick Stokes’:
“Because it was intended to be read that way by a casual reader”
‘Scientific papers aren’t written for casual readers. They are written for people who pay attention.’
Deliberately obtuse. You know damn well what’s being put out there in front of the public and why. And it’s alarmism. Your presence here is damage control, to justify misleading statements that hide the details.
In journalism, the rule of thumb is that most people don’t read beyond the headline. Those that do, rarely read beyond the lead paragraph. So if you want to hide something, without giving the appearance that you are doing so, you simply put it lower in the story. That way you can say you reported the facts, while in point of fact you were using a standard method of obfuscation. Playing your part in the propaganda.
So, Nick – if that’s your real name – what you are doing is pointing out what’s written at the bottom of the story, and ignoring the implication of the lead. Which in this case, is exactly as davidmhoffer said: “It is a concerted effort to excuse the poor performance of the models without admitting that they are in error.”
“because they can’t accurately predict the internal variability in the North Pacific Ocean.” I am surprised to see an admission that the El Nino and La Nina are not understood and they are causing the climate models to be incorrect.
I can see this paper causing a lot of cognitive dissonance over at the Grauniad 😀
Anyone have any idea what effect a contracting atmosphere has on surface atmosphere convection?
Is there data for atmosphere size over any time?
One of my favorite new quotes re models vs observations is this one from Rob Honeycutt:
“One thing that bothers me about the model/obs discussions is, many seem to assume that the obs are correct and the models are wrong. That fails to recognize the challenges inherent in the observations. It seems as likely as not that models could be giving us a better picture of what is actually occurring than the observations do.”
from here: https://andthentheresphysics.wordpress.com/2016/05/10/the-uncertainty-on-the-mean/#comments
Except that’s not the case. False logic
Models that are missing many components, and that do not actually model every aspect and as such use more simplistic fudges, cannot be relied upon, because they are based on assumptions.
Measurement is empirical; this is as close to fact or proof as we can be. (This means individual measurements, NOT the global average temperature; the global average is not a measurement, it is an artifact.)
Neither models nor surface measurements can even be verified as giving us a good picture (in the context of AGW), because neither contributes to validation of AGW.
There are some real pseudo arguments on ATTP lol Junk
If an argument is not logical, nothing it produces is logical
The IPCC have been taking models over observation for years. This is yet more of this nonsense.
What that crackpot is suggesting is taking empirical science (measurement! not averages!) and replacing it with computer models that can’t model the earth’s climate system.
Is that what you are telling us?
The problem with “observation”, as you call it, is one you missed: averages are not observations. GISS does not produce observations, and neither does Berkeley or anyone else; those are artifacts, created by the mathematics of man.
We need to fix that area, and the ATTP article suggests we replace one faulty process with another faulty process (one that has a tragic record)
They need to start teaching kids logic at 12 I swear!
You seem surprised? One assumes you are only passingly familiar with the garbage fount that is Robhon.
I’m fairly new here :p
What’s he saying? That if the models don’t match the observations, we should go with the models?
It’s akin to Trenberth and his lets change the null hypothesis.
These liberals want to change the world to their view, and changing the rules of science is not a step too far; neither is changing measurement data 96 years after collecting it (repeatedly changing it).
Political movements are famous for revisionism
It’s more likely that the models and observations are equally wrong. For different reasons.
MarkW, models and observations cannot be “equally wrong”.
Since everything about observations come from real world systems, all sources of measurement error and systemic error can be identified. The observer effect is always a factor to consider, which is why standard approaches to measuring anything have to be adopted – so that the errors are at least consistent, the measurements from different observers mean the same thing, and those measurements can then be compared. (The lack of compliance to standards in the USHCN was what started Anthony’s interest in this debate – see http://www.surfacestations.org/)
Those sources of observational error can be determined, and specific statements made about the levels of uncertainty in the measurements. Corrections to measurements, based on scientific evidence, can be made when there is agreement that the corrections will result in improved data. This has been done several times over the last 30 years with interpretation of satellite measurements. (A recent article here on systematic errors gives more explanation: https://wattsupwiththat.com/2016/04/19/systematic-error-in-climate-measurements-the-surface-air-temperature-record/).
So observations are based on agreed standards; raw data may be corrected for known errors; and the resultant data still has specified levels of uncertainty.
Climate models are completely different. Firstly, as has been pointed out elsewhere here, they are built bottom up. Small elements of the real world are converted into mathematical functions. Many of these functions are stochastic, meaning that assumptions are made about the probability distribution function that best represents a particular real world event. Compromises are made, particularly around which functions to model and how much time will the model take to run. A modeller has to determine which functions are significant and need to be included, and which can be safely ignored.
Secondly, these small model elements have to be put together, much like components in a car. As the models get bigger, modellers need to check for unexpected emergent behaviour – the sort of problems that software developers face when building any large application. Before the model can be usefully applied, it should always pass through a rigorous verification and validation (ie testing) process. To the best of my knowledge, none of the climate models have gone through such a process. And numerous errors with these climate models have been identified. (see David Evan’s series starting here: http://joannenova.com.au/2015/09/new-science-4-error-1-partial-derivatives/)
In simple terms then, it is not possible to determine how wrong climate models actually are. That is why the IPCC’s CMIP5 experiment used 27 completely different models. I think they were trying to convince us using the opposite of the Delphi method: that if we accumulate enough guaranteed wrong answers, then average them, the average will be right! An interesting example of how wrong these climate models are was pointed out by Bob Tisdale in a recent article – see the text accompanying Figures 6 & 7: https://wattsupwiththat.com/2016/03/01/climate-models-are-not-simulating-earths-climate-part-4/
As far as I am aware, climate modellers have no way of determining the uncertainty of their model output. So, no, they are not “equally wrong”.
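The “opposite of the Delphi method” point above can be checked with a toy simulation: averaging noisy but unbiased measurements converges on the truth, while averaging models that share a common bias converges on the bias. All numbers here are made up purely for illustration; nothing is drawn from CMIP5 itself:

```python
# Toy demonstration: the mean of many unbiased-but-noisy measurements
# approaches the true value, while the mean of many "models" sharing a
# common bias approaches truth + bias, no matter how many you average.
import numpy as np

rng = np.random.default_rng(42)
truth = 1.0

# 27 independent measurements, noisy but centred on the truth
measurements = truth + rng.normal(0, 0.3, 27)

# 27 "models" sharing a common bias of +0.5, plus their own noise
shared_bias = 0.5
models = truth + shared_bias + rng.normal(0, 0.3, 27)

print(f"mean of measurements: {measurements.mean():.2f} (truth = {truth})")
print(f"mean of models:       {models.mean():.2f} (truth + bias = {truth + shared_bias})")
```

Averaging only cancels the independent noise; any error component the ensemble members share survives the average intact.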
“To hell with what you’ve observed – I’ve got a MODEL!!” LMAO
I know we are supposed to hate him as the hockey stick man, but this does sound as though he is admitting there was a problem with the models and he is trying to fix it with healthy doses of real world data.
Or more probably: Schmidt threw tree rings under the bus recently, and Mann threw Tom Karl’s pause buster under the bus before that.
Now Mann is throwing models under a bus lol.
Egos like those are bound to have friction
I think Robin is correct. Mann is admitting the models have flaws and he is trying to figure it out. If I understand this correctly, he did the same in his 2015 collaboration “Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures” found here https://www.researchgate.net/publication/280571227_Robust_comparison_of_climate_models_with_observations_using_blended_land_air_and_ocean_sea_surface_temperatures.
In this article he called the pause a “divergence” between models (CMIP5, with both historical and RCP8.5 scenarios) and observations “after 1998” and attributed it to overestimation of climate sensitivity to CO2, underestimation of natural variability, and a misunderstanding of how to use temperature data.
Maybe it is early onset, or, just maybe, he is breaking the mold and actually trying to learn from the mistakes.
Mann wrote (apparently)
If this were true then we can only conclude that the model’s error margins must be too narrow. So next one must wonder what they’re based on. And what they really should be?
What he really means is, he and the modelers don’t understand how internal variability actually works.
Or, more simply, “we couldn’t forecast it”.
Yet in his 2015 paper he professes statistics can predict the future.
“Nobody expects that GCMs would predict the slowdown. They aren’t initialised to do that” but if models ‘predict’ a temperature rise then that’s accepted?
How can you have it both ways?
Surely models are setup to predict climate outputs, most important being temperature.
Is Nick Stokes implying they run climate models not to predict climate outputs?
I think Nick is simply acknowledging (unintentionally to some extent) that models do nothing more than reflect the input assumptions. And since the input assumptions are garbage, so are the “predictions” dropping out of the (cough) rear end…
The basic truth is, all the models are designed to do is predict catastrophic warming caused by CO2, because they assume that CO2 drives the temperature. And since this is nonsense, so are the modeled outcomes, as can be seen by comparing them with (uncooperative) reality, past and present.
Ergo, the faster warming in the last two decades of the 20th century must be ascribed to the frequent El Nino conditions in that period. You can’t have your cake and eat it too.
Global temperature is a function of several factors in space and time. The GCM models rarely account for them. Even the IPCC is not sure of the quantitative factoring of the broader contributors at global scale, namely: [1] the anthropogenic greenhouse gas impact under the greenhouse effect [human-induced factor]; [2] the volcanic-activity-related natural impact under the greenhouse effect; [3] the human-induced non-greenhouse effect associated with changes in land & water use and land & water cover. Also, the temperature curve itself is built on partial data of space. Even if, for example, you got a 100% accurate model, to verify it you don’t have 100% accurate data. Everything we presume from the air. At the least, scientists must work out models at regional and national level!
Dr. S. Jeevananda Reddy
Worse still, the greenhouse effect is not simply the difference in temperature between a planet without an atmosphere and a planet with one: −19 °C without, 15 °C with. If the atmosphere were static that would be true, but the atmosphere does some serious cooling by evaporation, so the actual effect of the atmosphere (the greenhouse effect) is far greater than 34 °C, probably twice as much, half of which is taken from the surface and worked around as weather.
If the surface becomes warmer in such a scenario, plus or minus 1 °C is going to have no detectable effect, because the total effect is far greater than claimed.
When you invoke natural factors like the AMO and La Niñas to explain part of the pause,
… Now you have invoked the AMO and the ENSO over the “whole” global temperature record.
In fact, you do need these two factors (and several volcanoes) to explain all of the up and down cycles we have experienced since 1850.
And now that one can explain the up and down cycles and the pause over a longer period of time, then you have a much smaller global warming signal.
Michael Mann needs to take the next step, extend the analysis and become a skeptic. Then he will be a whole-Mann instead of a half-Mann.
We haven’t considered all of the factors, though, and we can’t replicate many of them in the models.
The climate is influenced from the ocean floor (or maybe earth’s core) to the galaxy, to varying degrees. We don’t even know what inputs are missing.
The way it is, Mann was working to incorporate the pause into global warming, and while he was doing that Tom Karl came out and said there was no pause, pre-empting Mann’s paper, essentially throwing it under a bus.
Schmidt threw tree ring proxies under a bus not long after, and now Mann is throwing models under a bus in the politest way possible.
Hopefully these massive egos will implode the central core of the warm camp.
Tom Karl’s paper really did outshine Mann’s in the media, and disagreed too; alarmists loved Karl’s pause buster, they thought they had it made: no pause, alarmist wet dream!
There is no doubt this pissed Mann off. Tom Karl stole his “thunder” baaahahahahaha
Next to be published in Geophysical Research Letters :
“Predictability of the recent outburst of large-scale self-evidences using statistical methods”
Captain O. Bevious, Pedro Grullo, Jacques de la Palice, A. Truism, Otto Logy, Pla Titus de Bromide.
And so the backpedaling begins……
It was the late 1990s to the mid-2010s.
Additionally it was not a slow down, it was a complete halt.
Even when making excuses for their own failures, they can’t help but lie.
Surely they can predict the internal variability. Their equation is: Internal variability = models – observations.
I think you might be confusing “configured to reproduce” with predicting the future.
They might not be able to predict it, but they can “hindcast” it with uncanny accuracy. 😉
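The circularity behind this quip can be written out as a few lines of toy arithmetic: if “internal variability” is defined as the model-minus-observation residual, then subtracting it back out reproduces the observations, so the “hindcast” is perfect by construction. Synthetic numbers throughout, matching no actual dataset:

```python
# Two-line demonstration of the joke: define internal variability as the
# residual between models and observations, then "hindcast" by removing
# it again. The hindcast matches observations by construction.
import numpy as np

obs = np.array([0.40, 0.42, 0.41, 0.43, 0.42])    # "observed" anomalies
model = np.array([0.40, 0.45, 0.48, 0.52, 0.55])  # "modelled" warming

internal_variability = model - obs                # defined as the residual
hindcast = model - internal_variability           # "uncanny accuracy"

print(np.allclose(hindcast, obs))  # True, necessarily
```

Any skill claimed for such a hindcast is skill at subtraction, not at prediction; genuine out-of-sample tests are the only way around this.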
The eternal question really is, why did all the Ice Ages end very suddenly? What causes a world half-encased in thick ice, to rapidly melt at an amazing pace?
The only force able to do this trick is our local star, the sun. There is no other possible mechanism for this trick. This is why studying the local star is life and death for animals inhabiting this planet.
“The only force able to do this trick is our local star, the sun”
Are you not happy with orbital changes, tilt and precession, plus volcanoes, dust, and fires creating a random stew?
Geothermal: with the surface under ice, maybe the freeze happens very quickly and then it takes geothermal heat a very long time to build up and melt it all. This would maybe explain a rapid melting, because if melted from below, the surface is basically the last of the ice to melt (ice on land obviously excepted), and open oceans are enough to get things warming up again rapidly.
The problem with you lay climate people here is the finer nuances of semiempirical estimates are completely lost on you, whereas climberliturgical experts in the field have been nuancing the semiempiricals for years.
@Observa…Well put.
“The problem with you lay climate people here is the finer nuances of semiempirical estimates are completely lost on you, whereas climberliturgical experts in the field have been nuancing the semiempiricals for years.”
A phrasing worthy of the high art of “Climate Science Communication”
“Using semiempirical estimates…”
What a fricken joke. If Boeing and Airbus used semiempirical estimates when designing structures, aircraft would fall from the sky on a routine basis.
Maybe this illustrates the real reason why Bill Nye left Boeing. Perhaps he was making engineering calculations via “semiempirical estimates” and his management thought it was so funny that they recommended he become a comedian and wannabe scientist.
or they no longer found him funny. :p