Icy Arctic Variations in Variability

Guest Post by Willis Eschenbach

A while back, I noticed an oddity about the Hadley Centre’s HadISST sea ice dataset for the Arctic. There’s a big change in variation from the pre- to the post-satellite era. Satellite measurements of ice areas began in 1979. Here is the full HadISST record, with the monthly variations removed.

Figure 1. Anomaly in the monthly sea ice coverage as reported by the HadISST, GISST, and Reynolds datasets. Monthly average variations from the overlap period (1981-1994) have been subtracted from each dataset. All data are from KNMI (see Monthly Observations).
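(For readers who want to try this themselves, here is a minimal sketch in Python/pandas of one way to do the subtraction described in the caption – it may not match the exact procedure used for Figure 1, and the file name and column names are placeholders, not the actual KNMI export format.)

import pandas as pd

# Hypothetical monthly NH sea-ice series; "date" and "ice_area" are placeholder column names
ice = pd.read_csv("hadisst_nh_ice_area.csv", parse_dates=["date"])
ice["month"] = ice["date"].dt.month

# Monthly climatology from the 1981-1994 overlap period
baseline = ice[(ice["date"].dt.year >= 1981) & (ice["date"].dt.year <= 1994)]
monthly_mean = baseline.groupby("month")["ice_area"].mean()

# Anomaly = observed value minus that calendar month's baseline mean
ice["anomaly"] = ice["ice_area"] - ice["month"].map(monthly_mean)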

There are a few points of note. First, the pre-1953 data is pretty useless; much of it obviously doesn’t change from year to year. Second, although the variation in the GISST dataset doesn’t change in 1979, the variation in the HadISST dataset changes pretty radically at that point. Third, there is a large difference between the variability of the Reynolds and GISST datasets during the period of their overlap.

I had filed this under unexplained curiosities and forgotten about it … until the recent publication of a paper called Observations reveal external driver for Arctic sea-ice retreat, by Notz and Marotzke, hereinafter N&M2012.

Why did their paper bring this issue to the fore?

Well, the problem is that the observations they use to establish their case are the difference in variability of the HadISST during period 1953-1979, compared to the HadISST variations since that time. They look at the early variations, and they use them as “a good estimate of internal variability”. I have problems with this assumption in general due to the short length of time (25 years), which is way too little data to establish “internal variability” even if the data were good … but it’s not good, it has problems.

To their credit, the authors recognized the problems in N&M2012, saying:

Second, from 1979 onwards the HadISST data set is primarily based on satellite observations. We find across the 1978/1979 boundary an unusually large increase in sea-ice extent in March and an unusually large decrease in sea-ice extent in September (Figures 1b and 1d). This indicates a possible inconsistency in the data set across this boundary.

Ya think? I love these guys, “possible inconsistency”. The use of this kind of weasel word, like “may” and “might” and “could” and “possible”, is Cain’s mark on the post-normal scientist. Let me remove the GISST and Reynolds datasets and plot just the modern period that they use, to see if you can spot their “possible inconsistency” between the 1953-1979 and post-1979 periods…

Figure 2. As in Figure 1, for HadISST only.

The inconsistency is clearly visible, with the variability of the pre- and post-1979 periods being very different.

As a result, what they are doing is comparing apples and oranges. They are assuming the 1953-1979 record is the “natural variability”, and then they are comparing that to the variability of the post-1979 period … I’m sorry, but you just can’t do that. You can’t compare one dataset with another when they are based on two totally different types of measurements, satellite and ground, especially when there is an obvious inconsistency between the two.
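(If you want to put a number on that difference yourself, one rough way is a variance-comparison test across the 1978/1979 boundary. The sketch below reuses the hypothetical anomaly series from the earlier sketch; Levene’s test is just one reasonable choice, and since monthly anomalies are autocorrelated the p-value is only indicative.)

import numpy as np
from scipy import stats

# Split the anomaly series at the 1978/1979 boundary
pre = ice.loc[(ice["date"].dt.year >= 1953) & (ice["date"].dt.year <= 1978), "anomaly"].dropna()
post = ice.loc[ice["date"].dt.year >= 1979, "anomaly"].dropna()

print("variance 1953-1978:", np.var(pre, ddof=1))
print("variance 1979-on  :", np.var(post, ddof=1))

# Levene's test for equal variances; robust to non-normality, but it still
# assumes independent samples, so treat the p-value as a rough guide only.
stat, p = stats.levene(pre, post)
print("Levene statistic:", stat, "p-value:", p)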

In addition, since the GISST dataset doesn’t contain the large change in variability seen in the HadISST dataset, it is at least a reasonable working hypothesis that there is some structural error in the HadISST dataset … but the authors just ignore that and move forward.

Finally, we have a problematic underlying assumption that involves something called “stationarity”. The stationarity assumption says that the various statistical measures (mean, standard deviation, variance) are “stationary”, meaning that they don’t change over time.
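(A quick visual check on that assumption is to look at the spread in a moving window. Again a sketch, reusing the hypothetical anomaly series from above; if the variance were stationary, this curve would be roughly flat.)

import matplotlib.pyplot as plt

# Rolling 60-month (5-year) standard deviation of the anomalies
rolling_sd = ice.set_index("date")["anomaly"].rolling(window=60, center=True).std()
rolling_sd.plot(title="Rolling 60-month std. dev. of NH sea-ice anomaly")
plt.show()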

They nod their heads to the stationarity problem, saying (emphasis mine):

For the long-term memory process, we estimate the Hurst coefficient H of the pre-satellite time series using detrended fluctuation analysis (DFA) [Peng et al., 1994]. Only a rough estimate of 0.8 < H < 0.9 is possible both because of the short length of the time series and because DFA shows non-stationarity even after removal of the seasonal cycle.
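(For readers who haven’t met DFA before, here is a rough sketch of how such a scaling exponent is estimated. This is a generic DFA-1 implementation for illustration only, not the authors’ code, and it glosses over the window-selection and deseasonalising details that matter for real data.)

import numpy as np

def dfa_exponent(x, scales=None):
    """Estimate the DFA-1 scaling exponent (roughly the Hurst coefficient H
    for a stationary series) by linear detrending in windows of varying size."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())              # integrated "profile" of the series
    if scales is None:
        scales = np.unique(np.logspace(np.log10(8), np.log10(len(x) // 4), 15).astype(int))
    flucts = []
    for s in scales:
        rms = []
        for i in range(len(profile) // s):         # non-overlapping windows of length s
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # Scaling exponent = slope of log F(s) versus log s
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

# Sanity check with white noise: the exponent should come out near 0.5
print(dfa_exponent(np.random.randn(2000)))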

Unfortunately, they don’t follow the problem of non-stationarity to its logical conclusion. Look, for example, at the variability in the satellite record in the period 1990-2000 versus the period 2000-2005. They are quite different. In their analysis, they claim that a difference in variability pre- to post-1979 establishes that human actions are the “external driver” … but they don’t deal with the differences pre- and post-2000, or with the fact that their own analysis shows that even the variability of the pre-1979 data is not stationary.

Finally, look at the large change in variability in the most recent part of the record. The authors don’t mention that … but the HadISST folks do.

03/DECEMBER/2010. The SSM/I satellite that was used to provide the data for the sea ice analysis in HadISST suffered a significant degradation in performance through January and February 2009. The problem affected HadISST fields from January 2009 and probably causes an underestimate of ice extent and concentration. It also affected sea surface temperatures in sea ice areas because the SSTs are estimated from the sea ice concentration (see Rayner et al. 2003). As of 3rd December 2010 we have reprocessed the data from January 2009 to the present using a different sea ice data source. This is an improvement on the previous situation, but users should still note that the switch of data source at the start of 2009 might introduce a discontinuity into the record. The reprocessed files are available from the main data page. The older version of the data set is archived here.

08/MARCH/2011. The switch of satellite source data at the start of 2009 introduced a discontinuity in the fields of sea ice in both the Arctic and Antarctic.

Curious … the degradation in the recent satellite data “probably causes an underestimate of ice extent and concentration,” and yet it is precisely that low recent ice concentration that they claim “reveals an external driver” …

In any case, when I put all of those problems together, the changes in variability in 1979, in 2000, and in 2009, plus the demonstrated non-stationarity pre-1979, plus the indirect evidence from the GISST and Reynolds datasets, plus the problems with the satellites affecting the critical recent period, the period they claim is statistically significant in their analysis … well, given all that I’d say that the N&M2012 method (comparing variability pre- and post-1979) is totally inappropriate given the available data. There are far too many changes and differences in variability, both internal to and between the datasets, to claim that the 1979 change in variability means anything at all … much less that it reveals an “external driver” for the changes in Arctic sea ice.

w.

110 Comments
Eric Adler
May 4, 2012 11:39 am

I don’t see the justification for trashing the NH sea ice extent data between 1953 and 1979. It seems to me that the data looks reasonable. Here is what the Canadian Cryospheric Network says on this subject:
http://www.socc.ca/cms/en/seaIce/pastSeaIce.aspx
“Prior to 1950, the reconstruction of the historical record for sea ice suffered from lack of data and incomplete coverage. Since then, however, most circumpolar countries have kept regular comprehensive sea ice extent and concentration charts for waters in their jurisdiction. In order to obtain a consistent homogeneous picture of sea ice, these various ice charts have been combined with satellite observations (starting from 1972) to produce a comprehensive over 50 year record for the northern hemisphere”
The data graphed on the page looks pretty consistent to me, contrary to Eschenbach’s impression. Somehow, I get the feeling that if the authors of the paper on sources of variability came to the conclusion that internal variation was the main driver of sea ice variation in both periods, Eschenbach wouldn’t be complaining about how awful the data looked to him.

Rob Dekker
May 5, 2012 12:33 am

Willis said :

the problem is that the observations they use to establish their case are the difference in variability of the HadISST during period 1953-1979, compared to the HadISST variations since that time.
As to where in the paper it says the difference in variability is relevant, it is the point of the entire paper.
What do you think they are talking about if not the difference in variability?
It’s all about variability, that’s what they are using to make their case.

If that is what they use to make their case, then it should not be so difficult for you to answer my question :
What IS the “difference in variability of the HadISST during period 1953-1979, compared to the HadISST variations since that time” according to the paper, and where is your statistical assessment of that difference, Willis ?
I hope that you understand that without an actual statistical assessment of the data at hand, your words are simply empty rhetoric without scientific substance.

Rob Dekker
May 5, 2012 12:45 am

In response to Eric Adler, who noted

I don’t see the justification for trashing the NH sea ice extent data between 1953 and 1979. It seems to me that the data looks reasonable.

Willis said

the data is not adequate or appropriate for the purposes to which the authors are putting it

Willis, can you please clarify the scientific reasoning you used to conclude that the data is not adequate or appropriate for the purposes to which the authors are putting it?

May 5, 2012 9:50 am

Well, I will tell you guys something that you have not heard before…
Earth has been cooling since 1994.
Average temps. in the atmosphere cooled down by about 0.2 degrees C since 1994.
It is not much and it is a global average.
Nevertheless,
it could pick up speed going down
http://www.letterdash.com/henryp/global-cooling-is-here

Pamela Gray
May 5, 2012 12:12 pm

Willis, you bring up an important point. When I was doing research on the frequency specificity of a toneburst used to measure the Auditory Brainstem Response, I wanted to know if the high frequency tone bursts that were created were capable of forcing the auditory pathway to recognize the center frequency of each tone. To do that we could not just pick human subjects randomly. We had to use young subjects whose auditory pathway was intact and whose noisy brains were capable of being quieted (mine was not one of those kinds of brains). We discovered that indeed, the tones were capable of causing the auditory pathway in the brainstem area, if it were intact, to respond to the center frequency of each tone in the standard measured way.
It became incumbent on yet others to see if this were the case in a randomly selected healthy population regardless of whether or not they had “busy” brains. And then someone else had to come along to see if this method could detect hearing loss happening in a population too sick to respond to normal audiometric tests.
You need to select the source of your raw data, depending on what you want to measure. If you want to measure apples, don’t start with oranges.

Bill Illis
May 5, 2012 3:49 pm

Here is one of the datasets mentioned by Dennis Ray Wingo.
30 years of satellite records produced through the Nimbus satellites published in Geophysical Research Letters.
http://www.atmos.umd.edu/~kostya/Pdf/Seaice.30yrs.GRL.pdf
I’m pretty sure these are their datasets here (monthly and daily extent, NH to 1972, SH to 1973). They won’t be exactly comparable to the current datasets, but they are not much different.
ftp://sidads.colorado.edu/pub/DATASETS/nsidc0192_seaice_trends_climo/esmr-smmr-ssmi-merged/

Bill Illis
May 5, 2012 4:25 pm

Followed a few more links from this and guess what. There are actual weekly gif images showing the ice extent back to 1972 produced by the US Navy Ice Centre.
May 1, 1972 here (looks a little higher than today). Note that Dark Blue is under 15% ice coverage, so it would not be counted in today’s numbers.
ftp://sidads.colorado.edu/pub/DATASETS/NOAA/G02172/gifs_weekly/nic_weekly_1972_05_01_tot.v0.gif
How about the Arctic sea ice minimum on Sept. 12, 1972? I don’t know what you think, but this is very low – not much different from recent minimums.
ftp://sidads.colorado.edu/pub/DATASETS/NOAA/G02172/gifs_weekly/nic_weekly_1972_09_12_tot.v0.gif
Put this in your bookmarks.
ftp://sidads.colorado.edu/pub/DATASETS/NOAA/G02172/
Others may be available going back to 1933, 1953, and (believe it or not) 1750 and 1553, here.
http://nsidc.org/data/g02176.html
http://nsidc.org/data/g02169.html

Rob Dekker
May 6, 2012 1:47 am

Willis said

HadCRUT3 1953-1978: var = 0.35, n=311, mean=0.87
GISST 1953-1978: 0.16
HadCRUT3 1979-2006: var = 0.14, n=324, mean=-0.10
GISST 1979-1994 (end of record): var = 0.25
HadCRUT3 2007-present: var = 0.37
The HadCRUT3 variance of the early period is more than twice that of the period 1979-2006, with a large and very visible jump at the boundary.

Thanks Willis. Thank you for showing that variability before and after 1979 is less than 0.35, which is, as I am sure you will acknowledge, minor compared to the difference in mean (of about 1.0).
No wonder that the t-test shows that there is no way that the reduction in the mean is caused by chance (internal variability).
Now, considering these variabilities you mention for the periods 1953-1978 and 1979-2006 as well as 2007-present, if there were no “external driver” at all over the entire period, what would be the probability that the start of the record (1953) would hover around a mean of +1.0, while by 1979 the mean would be around +0.6 and by the end of the record (say 2009-2011) the mean would be around -1.0? All just by chance?
If you don’t like the outcome of that probability calculation, then by all means, please present your own quantification of how this trend in Arctic sea ice extent over time can be brought within the boundaries of the variability that you have already presented.
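For concreteness, here is a back-of-the-envelope version of the mean comparison, using only the numbers quoted above (a rough sketch; autocorrelation in monthly anomalies means the effective sample sizes are much smaller than n, so this overstates the certainty):

import numpy as np

# Means, variances and sample sizes as quoted above (not recomputed from the data)
mean1, var1, n1 = 0.87, 0.35, 311    # 1953-1978
mean2, var2, n2 = -0.10, 0.14, 324   # 1979-2006

# Welch-style t statistic for the difference in means
t = (mean1 - mean2) / np.sqrt(var1 / n1 + var2 / n2)
print("t statistic:", t)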

Rob Dekker
May 6, 2012 2:19 am

By the way, Willis, your figure 1 is a bit confusing.
It seems that you are showing monthly variability for the pre-1953 period (a sine wave varying over a 2.5 million km^2 span), but for other periods (such as 1995-2005) the annual variability in your plot seems to be less than 1 million km^2, which is obviously unrealistic.
You mention that you obtained your data from the KNMI, but I’ve been unable to reproduce your figure 1 (nor 2) from the KNMI data pages. Could you clarify exactly how you obtained your figure 1 for us ? Thanks.
HADISST actually presents their sea ice data in a 2D gridded form, so it requires some work to put that in a graph.
A couple of years ago, Tamino presented HADISST sea ice extent graph for the September (summer minimum) and March (winter maximum) extent, which seem much clearer than your figure 1 :
http://tamino.files.wordpress.com/2010/10/nhem140.jpg
Now what is your argument again against this data set ?

Lars
May 6, 2012 2:27 am

The data from Nimbus ESMR can be used to construct a (more or less) homogeneous time series starting from 1972 (see Cavalieri et al. 2003, link above):
http://www.scilogs.de/wblogs/gallery/16/September_extent_1972_2011_3rd.png
http://www.iup.uni-bremen.de/iuppage/psa/documents/Technischer_Bericht_Milke_2009.pdf
By the way, the first passive microwave satellite sensor was on KOSMOS-243 launched in 1968. See slides 30-36 http://porsec2010.ntou.edu.tw/Tutorial/Mitnik_PORSEC_14Oct_10.pdf

Brian H
May 6, 2012 4:56 pm

ed caryl above refers to this post, which notes that the hand-waving dismissal of SO2 as a GHG because of short residence time does not apply in Arctic cold. In fact, none of the processes that strip it operate more than very slowly in winter. And winter is explicitly where the “warm anomalies” in the Arctic appear.
An interesting side observation is that Antarctic winter warming is concentrated around large research installations — the sole source of winter SO2 there!
A source of variance explicitly neither excluded nor falsified in the paper currently being dissected.

Rob Dekker
May 7, 2012 2:42 am

Willis, wrote :

Monthly average variations from the overlap period (1981-1994) have been subtracted from each dataset.

This just gets more confusing. What are the monthly average variations from the overlap period, Willis? Did you take the average variation in each month over the 1981-1994 period and subtract that number from each other year’s anomaly value for that month? Or did you take the average of all the monthly variations in the 1981-1994 period and subtract that single scalar number from every point in each dataset? Or did you take the monthly average variations BETWEEN the datasets and subtract THAT from each data point? Or did you take the average variation for each month and subtract it from the absolute ice extent numbers, to OBTAIN the anomaly plot? Or something else completely?
Even figure 2 (which shows only the HADISST dataset) does not seem to follow from the KNMI (or direct HADISST) data, and I suspect that is because of your poorly defined “Monthly average variations from the overlap period (1981-1994) have been subtracted”. It would really help if you could clarify EXACTLY how you obtained that figure 2.

Rob Dekker
May 7, 2012 11:47 pm

OK. I finally figured out how Willis created his figure 1 and 2 here.
If we take the HADISST data from KNMI, and then calculate how much that data differs from an ‘average’ seasonal cycle (calculated from 1981 – 1994) then this is what we obtain :
http://climexp.knmi.nl/ps2pdf.cgi?file=data/ihadisst1_ice_0-360E_60-90N_n_mean1219811994a.eps.gz
(Don’t let the suffix fool you; this is really a cgi-generated pdf file.)
This graph is not exactly the same as Willis’ figure 1 and 2 ‘variability’ plots, but it captures the main features (increasing variability before 1980 and after 1995 and rather small variability from 1980-1994) and many of the details (general shape and outliers, such as 2007).
Maybe Willis can explain why some of the details (such as the variability over the 1995-2005 range, which seems a bit higher than in his plot) still seem to mismatch.
So, now that we can largely reproduce Willis’ plots, the question is really, what are we looking at here ?
Well, for starters, the “variability” in Willis’ figures 1 and 2 is small during the 1980-1995 period because he calibrated the variability over that period! So it was defined by Willis to be small during that period.
Moreover, and this is where it gets better (or worse, depending on your opinion of what is a “fair” and “objective” scientific representation of data), figures 1 and 2 do NOT show only a plot of the (natural) variability that Notz and Marotzke were trying to determine; because Willis took the anomaly over the annual cycle, he captures not just natural variability, but also the long-term change in the extremes of the annual cycle!
Tamino already showed us :
http://tamino.files.wordpress.com/2010/10/nhem140.jpg
that the difference between winter maxima and summer minima increased slowly over time in the HADISST data set, and because of Willis’ choice of how to present the “variability” in Arctic sea ice, this diverging trend in the seasonal cycle is now the main cause of the increasing “variability” before 1980 and after 1995 in his figures 1 and 2.
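A toy illustration of that mechanism, with purely synthetic data (this is not a reanalysis of HadISST, just a demonstration that a drifting seasonal amplitude alone inflates the apparent “variability” outside the baseline window):

import numpy as np
import pandas as pd

months = pd.date_range("1953-01-01", "2011-12-01", freq="MS")
t = np.arange(len(months))

# Seasonal cycle whose amplitude slowly grows over time, plus a little noise;
# the "internal variability" (the noise term) does not change at all.
amplitude = 4.0 + 0.02 * t / 12.0
seasonal = amplitude * np.cos(2 * np.pi * months.month / 12)
series = pd.Series(seasonal + 0.3 * np.random.randn(len(months)), index=months)

# Anomalies against a monthly climatology computed only from 1981-1994
base = series["1981":"1994"]
clim = base.groupby(base.index.month).mean()
anom = series - clim.reindex(series.index.month).values

# The apparent "variability" is smallest inside the calibration window
print(anom[:"1978"].std(), anom["1981":"1994"].std(), anom["1995":].std())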
In other words, when Willis said :

The inconsistency is clearly visible, with the variability of the pre- and post-1979 periods being very different.

we now know that the “variability of the pre- and post-1979 periods are very different” mostly because Willis ‘calibrated’ the variability on the 1981-1994 time segment. Anything outside that period will show larger “variability” because of the diverging trend of sea ice maxima and minima over the seasonal cycle.
So by choosing the calibration period carefully (right after the 1979 boundary), Willis created this difference in variability pre- and post-1979 all by himself!
This puts the ad-hominem remarks he vented against the authors of the paper :

Ya think? I love these guys, “possible inconsistency”. The use of this kind of weasel word, like “may” and “might” and “could” and “possible”, is Cain’s mark on the post-normal scientist.

into a completely different perspective.
Not to mention the ad-hominem accusations that Willis was venting against the authors of this paper.

Rob Dekker
May 8, 2012 12:17 am

I am sorry. That last sentence should read as follows :
Not to mention the OTHER ad-hominem accusations that Willis was venting against the authors of this paper.
such as

As a result, what they are doing is comparing apples and oranges. They are assuming the 1953-1979 record is the “natural variability”, and then they are comparing that to the variability of the post-1979 period

These turn out to be some strange form of psychological ‘projecting’, since YOU were the one comparing apples and oranges.
Nice going Willis !
This is the second time I catch you misleading your audience with deceptive representations and cherry-picked data.
Wanna go for a third time ?

Rob G.
May 9, 2012 6:36 am

I sometimes read Willis’s posts – they are long and full of colorful descriptions (although some might call them ad-hominem attacks) about other real scientists and topics, and of arguments for why he is a scientist (that he has four publications, although no one has cited any of them as far as I can see, including the one in Nature). Since, as it appears to me, his mind is already made up and he is looking for justifications for his position, I usually do not believe what he says. But please go a little easy on him. I like to continue to read his posts now and then – they are really fun to read.

Rob G.
May 11, 2012 5:03 am

Willis Eschenbach says: “Nature magazine and other scientific journals have thought I was a scientist, they published my work, and I’ll take their word over that of some anonymous coward who attacks people from behind an alias.”
What I wrote, that no one ever cited your papers, is a fact; anyone can check it in the Web of Science. Whether I give my full name or not is immaterial; I thought we always say that what was said is more important than who said it? I did not say anything untrue – it is not an allegation at all (let alone a nasty one) meant to denigrate your work.