From Wikipedia:
Polar amplification is the greater temperature increase in the Arctic compared with the Earth as a whole, a result of feedbacks and other processes. It is not observed in the Antarctic, largely because the Southern Ocean acts as a heat sink and the continent lacks seasonal snow cover. It is common to see it stated that “Climate models generally predict amplified warming in polar regions”, e.g. Doran et al.
Now with this paper, blowing the surface data out for AGW effects, what are they going to do?
Via the Hockey Schtick:
New paper finds only 1 weather station in the Arctic with warming that can’t be explained by natural variation
A paper published today in Geophysical Research Letters examines surface air temperature trends in the Eurasian Arctic region and finds “only 17 out of the 109 considered stations have trends which cannot be explained as arising from intrinsic [natural] climate fluctuations” and that “Out of those 17, only one station exhibits a warming trend which is significant against all three null models [models of natural climate change without human forcing].” Climate alarmists claim that the Arctic is “the canary in the coal mine” and should show the strongest evidence of a human fingerprint on climate change, yet these observations in the Arctic show that only 1 out of 109 weather stations showed a warming trend that was not explained by the natural variations in the 3 null climate models.
Note a “null model” assumes the “null hypothesis” that climate change is natural and not forced by man-made CO2 or other alleged human influences.
GEOPHYSICAL RESEARCH LETTERS, VOL. 39, L23705, 5 PP., 2012
doi:10.1029/2012GL054244
On the statistical significance of surface air temperature trends in the Eurasian Arctic region
Key Points
- I am using a novel method to test the significance of temperature trends
- In the Eurasian Arctic region only 17 stations show a significant trend
- I find that in Siberia the trend signal has not yet emerged
C. Franzke
British Antarctic Survey, Natural Environment Research Council, Cambridge, UK
This study investigates the statistical significance of the trends of station temperature time series from the European Climate Assessment & Data archive poleward of 60°N. The trends are identified by different methods and their significance is assessed by three different null models of climate noise. All stations show a warming trend but only 17 out of the 109 considered stations have trends which cannot be explained as arising from intrinsic [natural] climate fluctuations when tested against any of the three null models. Out of those 17, only one station exhibits a warming trend which is significant against all three null models. The stations with significant warming trends are located mainly in Scandinavia and Iceland.
Introduction
[2] The Arctic has experienced some of the most dramatic environmental changes of the last few decades, which include the decline of land and sea ice and the thawing of permafrost soil. These effects are thought to be caused by global warming and have potentially global implications. For instance, the thawing of permafrost soil represents a potential tipping point in the Earth system and could lead to the sudden release of methane, which would accelerate greenhouse gas emissions and thus global warming.
[3] Whilst the changes in the Arctic must be a concern, it is important to place them in context because the Arctic exhibits large natural climate variability on many time scales [Polyakov et al., 2003] which can potentially be misinterpreted as apparent climate trends. For instance, natural fluctuations on a daily time scale associated with weather systems can cause fluctuations on much longer time scales [Feldstein, 2000; Czaja et al., 2003; Franzke, 2009]. This effect is called climate noise. Even very simple stationary stochastic processes can create apparent trends over rather long periods of time; so-called stochastic trends [Cryer and Chan, 2008; Cowpertwait and Metcalfe, 2009; Barbosa, 2011; Fatichi et al., 2009; Franzke, 2010, 2012]. On the other hand, a so-called deterministic trend arises from external factors like greenhouse gas emissions.
[4] Specifically, here I will ask whether the observed temperature trends in the Eurasian Arctic region are outside of the expected range of stochastic trends generated with three different null models of the natural climate background variability. Choosing the appropriate null model is crucial for the statistical testing of trends in order not to wrongly accept a trend as deterministic when it is actually a stochastic trend [Franzke, 2010, 2012].
[5] There are two paradigmatic null models for representing climate variability: short-range dependent (SRD) and long-range dependent (LRD) models [Robinson, 2003; Franzke, 2010, 2012; Franzke et al., 2012]. In short, SRD models are the most widely used models in climate research and represent the initial decay of the autocorrelation function very well. For instance, a first order autoregressive process (AR(1)) has an exponential decay of the autocorrelation function. LRD models represent the low-frequency spectrum very well, have a pole at zero frequency and a hyperbolic decay of the autocorrelation function. One definition of an LRD process is that the integral over its autocorrelation function is infinite, while an SRD process always has an integrable autocorrelation function [Robinson, 2003; Franzke et al., 2012]. In general, both stochastic processes can generate stochastic trends, but stochastic trends of LRD models can last for much longer than stochastic trends of SRD models. This shows that the rate of decay of the autocorrelation function has a strong impact on the length of stochastic trends. In addition to these two paradigmatic models we will also use a non-parametric method to generate surrogates which exactly conserve the autocorrelation function of the observed time series. Figure 1 displays the autocorrelation function for one of the stations used and the corresponding autocorrelation functions of the above three models. It has to be noted that there is a myriad of nonlinear stochastic models which can potentially be used to represent the background climate variability, and the significance estimates will depend on the null model used. However, I have chosen the three above models because two of them are paradigmatic models for representing the correlation structure and one exactly conserves the empirical correlation structure.
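The SRD/LRD distinction above comes down to how fast the autocorrelation function decays. A minimal Python sketch (illustrative only; the functional forms are textbook, but the parameter values are my own choices, not the paper's) contrasts the exponential decay of an AR(1) process with the hyperbolic decay of an LRD process:

```python
def ar1_acf(phi, max_lag):
    """Autocorrelation of an AR(1) process: rho(k) = phi**k, an
    exponential decay (short-range dependence)."""
    return [phi ** k for k in range(max_lag + 1)]

def lrd_acf(d, max_lag):
    """Power-law autocorrelation rho(k) ~ k**(2d - 1), typical of a
    long-range dependent process with memory parameter 0 < d < 0.5."""
    return [1.0] + [k ** (2 * d - 1) for k in range(1, max_lag + 1)]

srd = ar1_acf(0.6, 100)   # summable: total correlation stays finite
lrd = lrd_acf(0.3, 100)   # sum keeps growing with max_lag: long memory
print(srd[50], lrd[50])   # AR(1) correlation has effectively vanished
                          # by lag 50; the LRD correlation is still ~0.2
```

Summing each list illustrates the integrability criterion quoted in the paper: the AR(1) correlations sum to a finite value (1/(1-phi)) however many lags are included, while the LRD partial sums keep growing.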
Figure 2. Map of stations: Magnitude of the observed trend in °C per decade.
Results
[17] Figure 2 displays the location of all stations and the colour coding indicates the magnitude and sign of the temperature trends. The first thing to note is that all stations experience a warming trend over their respective observational periods. The largest trends (more than 0.4°C per decade) are in central Scandinavia and Svalbard. Most of Siberia experienced warming trends of about 0.2–0.3°C per decade.
[18] After finding evidence for warming trends we now have to assess their statistical significance: do the magnitudes of the observed trends already lie outside the expected range of natural climate variability? The above three significance tests reveal that 17 of the 109 stations are significant against an AR(1) null model (Figure 3a), 3 stations are significant against an ARFIMA null model (Figure 3b), and 8 stations are significant against a climate noise null hypothesis using phase scrambling surrogates (Figure 3c). All these trends are significant at the 97.5% confidence level. This shows that while the Eurasian Arctic region exhibits a widespread warming trend, only about 15% of the stations are significant against any of the three significance tests.
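The logic of such a test is easy to sketch: fit a noise model to the detrended record, generate many surrogate series, and count how often a purely stochastic trend is at least as large as the observed one. The following Python is a hedged, minimal stand-in for the paper's AR(1) test (the names, fitting choices, and surrogate count are mine, not Franzke's):

```python
import random
import statistics

def ols_slope(y):
    """Ordinary least-squares trend of a series against time 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = statistics.fmean(y)
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def detrended(y):
    """Residuals after removing the OLS line."""
    slope = ols_slope(y)
    ybar = statistics.fmean(y)
    xbar = (len(y) - 1) / 2.0
    return [v - (ybar + slope * (i - xbar)) for i, v in enumerate(y)]

def lag1_autocorr(y):
    ybar = statistics.fmean(y)
    num = sum((a - ybar) * (b - ybar) for a, b in zip(y, y[1:]))
    den = sum((a - ybar) ** 2 for a in y)
    return num / den

def ar1_surrogate(n, phi, sigma, rng):
    """One realisation of an AR(1) 'climate noise' null model
    (started from zero rather than a stationary draw; fine for a sketch)."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out

def trend_p_value(y, n_surrogates=500, seed=0):
    """Fraction of AR(1) surrogates whose purely stochastic trend is at
    least as large as the observed one."""
    rng = random.Random(seed)
    res = detrended(y)
    phi = lag1_autocorr(res)
    sigma = statistics.pstdev(res) * max(1.0 - phi * phi, 0.0) ** 0.5
    obs = abs(ols_slope(y))
    hits = sum(abs(ols_slope(ar1_surrogate(len(y), phi, sigma, rng))) >= obs
               for _ in range(n_surrogates))
    return hits / n_surrogates

rng = random.Random(42)
noise = ar1_surrogate(200, 0.5, 1.0, rng)               # pure climate noise
trended = [0.05 * i + v for i, v in enumerate(noise)]    # noise + real trend
print(trend_p_value(noise), trend_p_value(trended))
```

With the paper's other two null models only the surrogate generator changes: an ARFIMA simulator for the LRD test, or phase-scrambled copies of the observed series for the non-parametric test.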
Figure 3. Stations with a statistically significant trend against (a) AR(1), (b) ARFIMA, (c) phase scrambling null model and (d) stations with a significant trend: blue: weak evidence, green: moderate evidence and red: strong evidence.
Garrett says:
December 11, 2012 at 8:21 am
…The paper recognizes that “All stations show a warming trend”. Just pointing this out because I still see a lot of folks who deny the official line that warming, for whatever reason, is occurring….
————————————————————————————-
Sorry Garrett, but I think you’ll find that most people here at WUWT agree there has been a warming trend since the 1800s. What most people here also agree on is that there has been no statistically significant global warming for the last 17 years – a period that Phil Jones once admitted would prove the alarmists’ models were junk.
You’re right in one respect though – this paper seems to be relying on models, so I’m rather doubtful of what it has to say, unless someone else can convince me of the robustness of the models used.
(Did you just see what happened there Garrett? Being a true sceptic, I refused to accept everything that is put in front of me, but I’m also prepared to change my mind if somebody can prove me wrong. Scepticism’s great – you should try it some time)
“profound… ignorance” Our understanding concerning the factors that shape our climate would be much further advanced if the present generation of climate scientists had been willing to allow that AGW is not the only factor that shapes global climate, but such an admission is seen as treason in the ranks and could be professional suicide. Their efforts mostly have been directed at supporting the untenable AGW “axiom”, and so they have made no contribution toward advancing the understanding of climate processes. Their science is mostly a house of cards that collapses when poked; nonetheless, it serves as ideological grist to feed the AGW propaganda mill. Such is the result when ideologues hijack a branch of science.
Looks like that one station is near (or perhaps in) Reykjavik – Iceland.
Jim G says: December 11, 2012 at 8:50 am
Yup, it is hard to see how the idea of a baseline applies to climate. One could arbitrarily put one, but what good is that? Thus climate models.
Good news for canaries. Alaska was the “canary in the coal mine” just a few years back, but that one got away too.
They are running out of garages to hide the dragons in.
Garrett says:
Two points:
– The paper recognizes that “All stations show a warming trend”. Just pointing this out because I still see a lot of folks who deny the official line that warming, for whatever reason, is occurring.
Uh … the ‘all stations’ referred to is the 109 Eurasian arctic stations that are the subject of this paper. I challenge you to name anyone (let alone ‘lots of folks’) who has ever denied that warming is occurring at those 109 stations. For that matter, I challenge you to find a statement of ‘the official line’ that references the status of those 109 stations specifically, thus providing anyone the opportunity to have ever ‘denied’ it.
Typical warmist equivocation. The whole house of cards is based upon dishonest and fallacious bullshit arguments like that.
– the paper’s results depend on models. A lot of criticism I’ve heard from the climate change denial side (e.g. Monckton) is over scientists’ use of models. I’m glad to see so many people here favorable of results emanating from models.
Nah. The models are crap. And even the crappy models don’t show CAGW. CAGW is a religious/political belief system, not science.
rgbatduke said;
“When we can explain, in full non-equilibrium thermodynamic regalia, why the MWP and LIA happened and why they were as large or small as they were, then perhaps we can think about understanding the present. ”
I am currently trying to extend CET from 1660 back to 1000 AD and am currently at 1538. I tend to use contemporary observations, together with such practical info as crops, wine dates, glacier movements etc and combine them with scientific papers.
Things are muddied even further than you suggest, as the MWP was split by a substantial decades long cooling period, whilst the LIA was split by a substantial decades long warming period.
My guess would be that extreme climatic events such as the notably warm period we call the MWP, or the notably cold period we call the LIA, can be likened to sea flooding, inasmuch as a number of factors need to come together.
As regards sea flooding, for example, any river involved needs to be swollen after heavy rain, the sea needs to be at the peak few hours of an especially high (spring) tide, and there needs to be an area of low pressure positioned in a particular manner, combined with winds that will pile up the water and push it towards the weakest point.
So there are five or six factors involved, all or most of which need to happen at the same time.
Relate that to a warm or cold climatic period. The jet stream has to be in a certain position with winds blowing consistently from a certain direction, whilst the oceans have to be in a particular warm or cold phase. No doubt other factors (arctic ice levels) are also involved, but they are probably the main ones.
During the MWP we can see clear evidence of the jet stream positioning and settled weather over a long period with winds mostly from a warm direction. The LIA is complicated (sunspots?) but basically the important factors need to be in position in a negative manner. We have sufficient evidence to see that the settled warm (MWP) or cold (LIA) weather often breaks down, but then reasserts itself, so the period as a whole can be characterised as ‘warm’ or ‘cold’.
tonyb
RobertInAz says:
Looks like that one station is near (or perhaps in) Reykjavik – Iceland.
Probably sitting in the lava field of a recently active volcano with a difficult to pronounce name :).
On my screen Figure 2 shows purple and pink dots, colours which are not on the scale given. Most of the dots appear light blue, which indicates slight cooling. Can we get a better Figure 2? I went to the source, but of course it is paywalled.
Notice how strongly this result is impacted by the warm water entering the Eurasian Arctic from the North Atlantic (link is courtesy of the WUWT Sea Ice Reference Page).
http://www7320.nrlssc.navy.mil/hycomARC/navo/arcticsstnowcast.gif
This sure looks to me like warm(ish) water from the North Atlantic moving into the Arctic where it provides a small atmospheric warming signal as it radiates into space. If more heat currently radiates away than is replenished in the North Atlantic, then we are probably on the down leg of a negative (heat energy) feedback cycle. As the alarmists tell us, these ocean dependent cycles take decades to even centuries to work through.
Combine this observation with Bob Tisdale’s most excellent discussion of ENSO (http://bobtisdale.wordpress.com/) and one might come to the conclusion that it is a very natural cycle, with the heat energy accumulated in the Pacific slowly making its way to the North Atlantic where it can easily escape back into space. The current 16 years of surface temperature stability indicates that one of those decades-long cycles is probably rolling over.
I think the models are bad: they don’t have enough aerosol cooling to mask the CO2 warming. 🙂
According to meteorology, all of the greater change in temperature at the poles, or polar amplification, is due to heat transport. None is due to any other factor. The temperature gradient depends on the mean annual temperature of the earth. Any temperature change localized at the poles, such as from albedo changes, is distributed around the earth at that gradient.
JimG –
Good points, Jim.
The climate guys have jumped the gun by several decades, studying that “continuously changing system” for only a few years before coming up with conclusions that were initially made even while the climate was only just a handful of years out of the ‘new ice age’ meme. Much of the work done since has been Confirmation Bias. It is not this work that should have the attention of these drop-in warmists in these comments. They need to go back to 1985 and ask what was the basis for conclusions jumped to then, given the climate history at that time (the end of the 1945-1975 cooling trend).
I disagree – in the long term – about the cause and effects being unknown and possibly unknowable. NOW we can’t know them, but have some faith in present and future scientists. We are only 30-35 years or so into the age of computers (counting not from the glacial-speed ENIAC but the explosion of Apples and PCs) and about the same for modeling. Even though we do disdain models – in their current state – it is only by models that we will ever be able to know/understand the baselines and the system as a whole. Just because there is garbage now going in, crap understanding of the pieces of the puzzle, and inadequate code/math doesn’t mean someone won’t straighten out the mess some day.
Steve Garcia
rgb–just type “CO2”, as in one atom of carbon and two atoms of oxygen. The underscore in your compound representation isn’t necessary. 🙂
— (latex CO_2) replace () with $ signs — but I’m too lazy and without a previewer and edit capability it is too easy to screw up and lose a whole paragraph.
Thanks, Rocky, but I write in latex without even thinking about it, and in latex a subscript is underscore. Technically I should be writing
rgb
I think Garrett missed the point that the warming is not outside natural variation – and as a typical alarmist might do, solely zoomed in on the ‘it’s warming’ meme! Duh!
During the MWP we can see clear evidence of the jet stream positioning and settled weather over a long period with winds mostly from a warm direction. The LIA is complicated (sunspots?) but basically the important factors need to be in position in a negative manner. We have sufficient evidence to see that the settled warm (MWP) or cold (LIA) weather often breaks down, but then reasserts itself, so the period as a whole can be characterised as ‘warm’ or ‘cold’.
Sure, in other words there are fluctuations around some mean baseline behavior. But what causes that baseline behavior, and makes the LIA mostly cooling, or the MWP or Roman warm period mostly warming? Again, we have the problem of being able to reliably identify the baseline behavior as a function of non-CO_2 variables before we can even think about the CO_2 dependence/anomaly, and even after that we have to have a truly reliable carbon model over precisely the same kind of time frame, because there are fluctuations of CO_2 in the past thermal record that appear (to me, to the extent that they are accurately known from proxies) to utterly confound the Bern model for the carbon cycle.
I know that Richard has done work comparing carbon cycle models, as has Bart (in a less detailed manner), but both of them, I think, have only worked with contemporary data. At a glance, the Bern model fails to describe any of the CO_2 concentration data shown in a recent top post on the subject. In fact I think (without doing all of the actual arithmetic) that the data positively refutes the model, or establishes far more stringent limits on the model parameters than given by their current values. But I rather think the model cannot be salvaged and is refuted. If/when I finish teaching in a few days for a few weeks, I may try to revisit this issue quantitatively, as it is pretty easy to run the numbers with octave/matlab. Note also that the issue is no longer the relatively simple but arguable issue of whether CO_2 rises precede or follow temperature rises; it is that the integrated solutions to the coupled ODEs look like they cannot possibly describe the gross variations, given the joint distribution of temperature and CO_2 as best we know them by proxy.
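For readers unfamiliar with it, the Bern model being criticised here is usually summarised by an impulse-response function: the fraction of a CO_2 pulse still airborne after t years is a constant plus a sum of decaying exponentials. A small sketch using the coefficients of the Bern2.5CC fit quoted in IPCC AR4; treat the numbers as illustrative rather than authoritative:

```python
import math

# Bern2.5CC impulse-response coefficients as quoted in IPCC AR4
# (a tau of None marks the permanent fraction); illustrative values only.
TERMS = [(0.217, None), (0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]

def airborne_fraction(t):
    """Fraction of an instantaneous CO2 pulse remaining airborne
    t years after emission, per the impulse-response fit above."""
    total = 0.0
    for a, tau in TERMS:
        total += a if tau is None else a * math.exp(-t / tau)
    return total

print(airborne_fraction(0.0))    # the whole pulse is airborne at t = 0
print(airborne_fraction(100.0))  # roughly a third remains after a century
```

The point of contention in the comment above is whether coupled ODEs built on a response of this shape can reproduce the proxy-era joint behavior of temperature and CO_2, not the algebra of the response itself.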
Beyond that, I agree that decadal or better warming and cooling are likely multifactorial and not single variable independent events in the climate record. Century scale warming and cooling, however, is correspondingly more difficult to explain in terms of a confluence of multiple factors, as the probabilities go down substantially. Those seem rather to involve serious, long term basic causes that we have not yet identified, certainly not in a quantitatively useful way. To put it another way, if warming required the coincidence of three factors each with only two distinct values, we would only expect to be in warming phase 1/8th of the time. Increasing the number of categories or factors makes this worse. Yet we have millennial trends in temperature clearly visible across the Holocene, with century scale dips (and, no doubt, decadal scale antidips) and ditto peaks. Just looking at the data I’d suggest no fewer than five or six major drivers (more if you count different aspects of Milankovitch as being distinct), with highly nonlinear coupling and not at all equal influence, at least two or three of which are unknown or underestimated. One could probably do an analysis of the fractal dimensionality of the data and get a better idea of the dimensionality of the underlying space and its probable principal components, but the principal components in the current models are utterly irrelevant on the millennial or century scale; they are at best decadal.
And we don’t yet have good data on one single cycle of all the major known decadal oscillations. Bob Tisdale loves ENSO (for good reason) because of its strong correspondence with discrete jumps in baseline (e.g. SST) temperature. But ENSO itself probably is related to the PDO. The PDO is surely coupled to the AO and NAO. All three are no doubt influenced by the details of the THC. Perhaps (as you note) the solar cycle matters; it is certainly coincident, sometimes and in some ways that might depend on precisely the kind of multifactorial confluence you suggest with the temperature. But fitting a multivariate predictive model is an enormously difficult problem and requires lots of accurate data that spans the entire range of the permutation space of the variables to accomplish.
A factor that somehow eludes climate scientists. Only by assuming separability can one fit with lesser data. But what justifies an assumption of separability? With a separable model used to approximate a non-separable phenomenon, you simply get the best fit possible with a basis that spans the wrong linear vector space, one in which the solution does not lie. This happens all of the time in quantum physics — it is an excellent description of the virtues and problems with perturbation theory, for example — but somehow in the real world separability is always assumed because it is so damned difficult (and requires so damned much high quality data) to fit a truly nontrivially multivariate function by any means whatsoever.
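The "wrong linear vector space" point can be made concrete with a toy calculation (entirely my own construction, not from the thread): on a grid symmetric about zero, the non-separable target x·y is orthogonal to every additive basis function, so the best least-squares fit in the separable basis is identically zero and the residual is the full norm of the target.

```python
# Symmetric grid of (x, y) sample points covering [-1, 1] x [-1, 1]
xs = [i / 10 for i in range(-10, 11)]
pts = [(x, y) for x in xs for y in xs]

def inner(f, g):
    """Discrete inner product of two functions over the grid."""
    return sum(f(x, y) * g(x, y) for x, y in pts)

def target(x, y):
    # Non-separable: x*y cannot be written as g(x) + h(y)
    return x * y

# An additive ("separable") polynomial basis in x and y alone
separable_basis = [
    lambda x, y: 1.0,
    lambda x, y: x,
    lambda x, y: y,
    lambda x, y: x * x,
    lambda x, y: y * y,
]

# Every inner product below vanishes by symmetry, so the least-squares
# projection of x*y onto the span of this basis is identically zero:
# the fit captures none of the target, and the residual is its full norm.
projections = [inner(target, b) for b in separable_basis]
residual = inner(target, target) ** 0.5
print(projections, residual)
```

Adding the single cross term x·y to the basis drives the residual to zero, which is exactly the sense in which a separable basis "spans the wrong space" for a non-separable phenomenon.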
I only know of one good, general way to do it, in fact, and that is with a neural network. One can sometimes do better if you have some reason to think that you know something about the solution (and are right) but a NN is a generalized nonlinear function approximator and hence enormously powerful.
rgb
Only one thing about this drives you nuts? How can one have a baseline in a continuously changing system with a large number of intercorrelated variables for which cause and effect are unknown and possibly unknowable? Modeling the climate is a fools’ errand, being carried out by fools, who do not know, nor care one iota about science. It’s all about power and money, as are most political endeavors.
It’s not a fool’s errand, it’s only a fool’s result if one takes the result more seriously than the probable correctness of the solution warrants. It actually looks like a lot of fun, and is certainly profitable even if one can only make comparatively short run predictions.
In the long run the efforts from this “fool’s” activity are science. Science should be enormously tolerant of fools and iconoclasts in the short run even as it refuses to take anything too seriously until there is a good, long term, predictive agreement between theory and data. Which even if the GCMs were right we wouldn’t know for at least 20 to 50 more years.
I don’t actually think it is unknowable — I only think that the unmeasured past is unknowable. In the future, with modern instrumentation, we can take increasingly fine grained and exhaustive measurements, and at some point things are actually rather likely to end up knowable.
This isn’t a religious “things man was not meant to know” argument, it is just a “things we don’t know — yet — given the data and our best theories” argument. Egregious claims are made for the theories on the basis of one tiny sliver of accurate data plus another equally tiny sliver of increasingly inaccurate data preceded by data so coarse that even things like mean global temperature estimates are often within 2 sigma of present temperatures, so that one truly has to be open minded to say that the mean temperature estimates mean anything at all. I think we can make some decent inferences, but proxy derived results automatically come with huge cumulative error bars inherited from all of the processes that set their scale plus all of the unknown factors that might contribute.
rgbatduke says:
December 11, 2012 at 7:59 am
“This in no way disproves the hypothesis that CO_2 caused the anomalous strength of the ENSO fluctuation in 1998 that is almost single-handedly responsible for the step-function bump in global mean temperatures in both SSTs and LTTs”
I have been studying what I’m going to start calling Daily Temperature abnormality in the NCDC Summary of Days data. http://www.science20.com/virtual_worlds/blog/updated_temperature_charts-86742
Daily abnormality is the difference of how much the temperature goes up today, and how much the temp falls tonight.
So while there may be a temperature fluctuation in 1998, there has in general been no trend in Daily Temp Abnormality over the entire period of good data (after WWII), and this graph: http://www.science20.com/files/images/1950-2010%20D100_0.jpg shows the Abnormality for north of 23° latitude on a daily basis for 1950 to 2010; weather variability far exceeds any trend in the data.
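As defined above, the abnormality (daytime rise minus the following night's fall) algebraically reduces to the day-over-day change in the minimum temperature. A small sketch under that reading of the commenter's definition (the function name and data layout are hypothetical, not the NCDC's format):

```python
def daily_abnormality(tmax, tmin):
    """Rise today (tmax - tmin) minus the fall into tomorrow's minimum
    (tmax - next day's tmin). Note each term simplifies to
    tmin[i+1] - tmin[i]: the overnight minimum's day-over-day change."""
    out = []
    for i in range(len(tmax) - 1):
        rise = tmax[i] - tmin[i]
        fall = tmax[i] - tmin[i + 1]
        out.append(rise - fall)
    return out

# Two-day toy record: the night cools 1 degree less than the day warmed
print(daily_abnormality([20.0, 22.0], [10.0, 11.0]))  # -> [1.0]
```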
Garrett, at least read what the article says: “here I will ask whether the observed temperature trends in the Eurasian Arctic region are outside the expected range of stochastic trends generated with three different null (statistical) models of the natural climate background variability.” He is not using a global-climate-style model, but using statistics to estimate the natural, random (stochastic) background climate variability. A totally different, and valid, “model”. The results look pretty robust. None of the trends were outside the normal range of variability, except for 1 station in 109.
What most non-scientists, including many of the climate modelers, don’t understand is that a complicated system can have random behavior, and that random behavior can produce a long-lasting trend in a variable, until some other random change takes over and does something else.
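That point (random behavior producing long-lasting apparent trends) is easy to demonstrate: generate trend-free random walks and fit an ordinary least-squares line to each. A small illustrative sketch:

```python
import random

def random_walk(n, rng):
    """Cumulative sum of zero-mean Gaussian shocks: a process with
    no deterministic trend built in at all."""
    x, path = 0.0, []
    for _ in range(n):
        x += rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def ols_slope(y):
    """Ordinary least-squares trend of a series against time 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

rng = random.Random(1)
slopes = [abs(ols_slope(random_walk(100, rng))) for _ in range(200)]
big = sum(s > 0.05 for s in slopes)
print(big, "of 200 trend-free walks show an apparent trend above 0.05/step")
```

A large fraction of these purely random walks exhibit a sizeable apparent trend, which is exactly why the paper tests observed trends against stochastic null models before calling them significant.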
Garrett:
Hate to tell you this, but while you were at your last global-warming confab, they pulled the rug out from under your feet. The official line has been changed, because the latest fifteen-year temperature trend shows no warming since 1998. So here is an IQ test for you: Is the globe warming?
@Kev-in-Uk
“I think Garrett missed the point that the warming is not outside natural variation – and as a typical alarmist might do, solely zoomed in on the ‘it’s warming’ meme! Duh!”
Warming isn’t the issue. Catastrophic AND man-made warming is the extraordinary claim. The CLAIM is both. If it ain’t us causing it, all the hoopla is b.s. And if it ain’t catastrophic, then no alarm is necessary.
It becomes the Chicken Little Who Cried Wolf.
Which it was from the start.
Steve Garcia
My first comment was going to be this:
If 1 out of 109 canaries dies in a mine – who the hell empties out the men from the mine based on that?
Steve Garcia
I think enough people have commented on this, but I’ll just add that I’m happy to say that warming has occurred. Attribution of cause and determination of magnitude are the main areas of disagreement between people.
From what I can see, the author has used the word “model” in the statistical sense, and does not mean the much maligned computer models used to “project” climate behavior.
Two of the three “models” are AR (a statistical function) and ARFIMA (another statistical function). I don’t recognize the third model specified, but given the first two are statistical “models” I presume the third one is, too.
As such, the approach taken is not controversial.
Jim G says:
“it is a fool’s errand, carried out by fools…”
Overlooks the possibility that some do it solely for their bread, having a PhD in physics in a world where there is a glut of those. They are fortunate to find employment in their field, while others are in wheat fields, such as the physics PhD who works on an itinerant combine crew.
Is it me, or did I just hear the fat lady singing?