Interesting analysis of IARC-JAXA Arctic sea ice data

Analysis of the AMSR-E data on Arctic Ice

Guest post by Dr Tony Berry, 4th October 2009

AMSRE_Sea_Ice_Extent_100509
The latest value : 5,824,688 km2 (October 5, 2009)

The Arctic ice extent record prepared by IARC-JAXA demonstrates the cyclical annual trend of freezing and thawing in the Arctic. It is clear that the cycles vary from year to year, and it has been suggested that the data support the hypothesis that AGW is responsible for a long-term decline in Arctic ice, although recent data have shown a recovery in the Arctic ice extent.

The raw daily data provided by IARC-JAXA show the trend nicely, but they make it difficult to carry out comparative analysis and to examine long-term trends. I have therefore calculated the average monthly ice extent for each complete month from July 2002 and used this to prepare rolling averages. NOTE: there are gaps in the daily data of up to 11 days, especially during 2002 to 2004. In these cases the gaps have been filled by linear interpolation; I do not believe that this has affected the monthly averages in any significant way. The resulting data are as follows:-

For all graphs and tables, click them to obtain larger images.

TonyB_Image1
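The monthly-averaging and gap-filling step described above can be sketched in a few lines of Python. This is an illustration on synthetic data; the real series would come from the IARC-JAXA download, whose format is not reproduced here.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the daily IARC-JAXA extent series (km^2);
# the real values would come from the JAXA data download.
dates = pd.date_range("2002-07-01", "2002-08-31", freq="D")
expected = np.linspace(9.0e6, 6.0e6, len(dates))
extent = pd.Series(expected.copy(), index=dates)

# Simulate a gap of several days, then fill it by linear
# interpolation, as described in the post.
extent.iloc[10:18] = np.nan
filled = extent.interpolate(method="linear")

# Average over each complete calendar month.
monthly = filled.resample("MS").mean()
```

Because the synthetic series is linear, the interpolation restores the gap exactly; on real data the same call gives a straight-line bridge across each gap.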

These data contain a number of interesting observations. All the lowest ice-extent figures fall in three clusters: January-June 2006, November and December 2006, and July to September 2007. By contrast, all the highest figures bar one are found in 2002/2003. Looking at the monthly averages over the whole period, it is remarkable how little variation there is (Std. Dev. < 3%) apart from the period July to October. This is illustrated by the following graph:

TonyB_Image2

Using the above data to compute 12-month rolling averages, month by month, reveals some very interesting trends. Considering just the period July 2003 (the first 12-month average) to September 2007 shows the following trend:-

TonyB_Image3

It is apparent that there is a strong correlation between the average ice extent and time, as illustrated by the high correlation coefficient (0.9232). You might have concluded in September 2007 that this was indeed strong evidence that long-term warming was taking place, and you might also have been concerned that it appeared to be accelerating in the later months. This is just the conclusion that our AGW “friends” have reached, assuming that these data can be extrapolated. However, when you consider the later data you might change your mind, as shown by the next graph covering the period September 2007 to September 2009:-

TonyB_Image4

These data show an entirely different picture. While there is still a strong trend between the average ice extent and time (correlation coefficient 0.9185), it runs in precisely the opposite direction!
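The rolling average and the correlation-against-time statistic quoted above can be reproduced along these lines. The data here are a toy monthly series (the slope and noise level are illustrative assumptions, not the values in the table):

```python
import numpy as np
import pandas as pd

# Toy monthly series standing in for the table above: a gentle
# decline plus noise (both levels are illustrative assumptions).
rng = np.random.default_rng(0)
months = pd.date_range("2002-07-01", periods=60, freq="MS")
monthly = pd.Series(10.0 - 0.02 * np.arange(60) + rng.normal(0, 0.1, 60),
                    index=months)

# 12-month rolling mean: each point averages the preceding year,
# which removes the annual freeze-thaw cycle.
rolling12 = monthly.rolling(window=12).mean().dropna()

# Correlation coefficient of the rolling mean against time, the
# statistic quoted for the two sub-periods in the post.
t = np.arange(len(rolling12))
r = np.corrcoef(t, rolling12.values)[0, 1]
```

The 12-month window suppresses the seasonal cycle, which is why a modest underlying slope yields such a high correlation coefficient against time.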

These two graphs illustrate the folly of treating correlations as a proxy for understanding the underlying science, or of using them to make predictions of the future. Correlations of this sort are useful for understanding what has happened in the past, and might be used as a starting point for identifying the science, but they have little or no value for long-term projection. You can’t drive safely down the highway by looking in the rear-view mirror!

The trend over all the data is as follows:-

TonyB_Image5

The fitted polynomial is of no significance other than that it fits the data best, with a good correlation coefficient (0.8982). As one might expect, differentiating the curve does locate the period when the lowest recorded ice extent occurred: between the 16th and 18th of September 2007. This picture is also carried forward in a two-year rolling average:-

TonyB_Image6
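The minimum-by-differentiation step described above can be sketched as follows. The data are a synthetic parabola with a known minimum, not the actual fitted polynomial from the graph:

```python
import numpy as np

# Fit a quadratic to a toy "rolling average vs time" series and
# locate its minimum by differentiation, as described for the
# mid-September 2007 minimum. The data are synthetic: a parabola
# with its minimum placed at t = 4.2 years.
t = np.linspace(0.0, 6.25, 76)           # years since mid-2003
y = 0.5 * (t - 4.2) ** 2 + 8.0           # known minimum at t = 4.2

coeffs = np.polyfit(t, y, 2)             # quadratic coefficients
deriv = np.polyder(coeffs)               # derivative of the fit
t_min = np.roots(deriv)[0]               # zero of the derivative
```

Setting the derivative of the fitted curve to zero and solving gives the date of minimum extent directly, which is how a polynomial fit can "predict" the timing of the 2007 minimum after the fact.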

Again, I don’t believe that the polynomial itself is of any significance other than to illustrate trends. However, this analysis does highlight that there appears to be something unusual about the period January 2006 to October 2007, when all the lowest ice extents occurred. This has quite a large effect on the graphical data, which show much larger swings during this period. It is also interesting that the graphs seem to show short-term cycles, again particularly apparent over 2006-2007. This latter phenomenon might be an artifact of the analysis or of the data, but might be worth further investigation.


57 Comments
Dave Middleton
October 6, 2009 8:15 am

red432 (08:02:36) :
Wouldn’t it make more sense to fit the curves against harmonic functions of some sort? That might have some sort of plausible predictive value. If we use a line we get the oceans boiling away in a few years — if we use a higher order polynomial we will all freeze…

The data need to be transformed into the frequency domain (Fourier Transform) so that the low frequency components can be examined. The simple polynomial functions can be used as a very simplistic filter to reveal low frequency trends.
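A minimal sketch of that frequency-domain idea, on a synthetic series with a known annual cycle riding on a slow linear trend (not the actual JAXA data):

```python
import numpy as np

# An annual cycle riding on a slow linear trend. The FFT
# concentrates the cycle at 1 cycle/year, leaving the trend in the
# lowest-frequency bins. The series is synthetic.
days = np.arange(365 * 6)                           # six years, daily
years = days / 365.0
signal = 10.0 + 4.0 * np.cos(2 * np.pi * years) - 0.1 * years

spectrum = np.fft.rfft(signal - signal.mean())
freqs = np.fft.rfftfreq(len(signal), d=1 / 365.0)   # cycles per year

# The dominant nonzero-frequency peak sits at the annual cycle.
peak = freqs[np.argmax(np.abs(spectrum[1:])) + 1]
```

Once the annual peak is identified it can be masked out, leaving the low-frequency bins that carry the long-term trend the comment refers to.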

October 6, 2009 8:35 am

So where’s danalloupe and RRKampen for this thread? They have a perfect opportunity to debate with us without hijacking a very interesting discussion about something entirely different. danalloupe could show us the alternative dataset that he based his claims on and RRKampen could show us his messages to the Dutch fora where he tells them that it is volume that is important.
I was having a similar debate on reddit, and was looking for a chart that showed overall ice extent (both North and South). I couldn’t find anything despite a lot of Googling, so I thought I’d download the data from NSIDC and figure it out myself. They don’t make it easy, but with a little PHP pixie dust, I got a png image showing North, South and overall ice extent since 1979, when satellite records began.
The only thing that surprised me was that where I thought there would be very little difference between overall max extent and overall min extent (as the Arctic starts losing ice, the Antarctic would be gaining), there’s actually 8m km^2 difference between the two, mainly because the Antarctic max – min is almost double the Arctic’s max-min, probably because it is surrounded by open water.
Other than that, we’ve lost about 1.5m km^2 overall since 1979, not surprising given that the records start at the beginning of the most recent warm phase. It reached its lowest min extent in 2005, and its lowest max extent in 2007, and has been increasing since.

George E. Smith
October 6, 2009 9:03 am

“”” Dave Middleton (08:15:49) :
red432 (08:02:36) :
>>>
<<<
The data need to be transformed into the frequency domain (Fourier Transform) so that the low frequency components can be examined. The simple polynomial functions can be used as a very simplistic filter to reveal low frequency trends. """
Dave; one thing I always worry about when using Fourier Transforms, is whether the original data samples are adequate to yield a correct transform that is sufficiently accurate to reveal any subtle features.
I suppose if this data is gathered daily and is available from 1979, that one would get a reasonable transform.
It's not clear to me that the same analysis would be valid for say GISStemp anomalies, which to me look like 1/f noise, (but maybe valid data).
I've often wondered if the Big Bang was just the bottom end of the 1/f noise spectrum.

George E. Smith
October 6, 2009 9:25 am

“”” Pamela Gray (22:17:15) :
But I think it is possible that the past can predict the future. We just need to input enough data to allow the possibility of statistically discovering predictability within an acceptable significance level. Just like they do with the PDO statistical models. “””
Well Pamela perhaps you missed the ‘OOoops’ in that statement.
First of all you have to GATHER enough data; and therein lies the problem, in that the global climate sampling system doesn’t gather enough data by orders of magnitude; so one can’t even adequately represent the past; let alone predict the future.
Any finite data set, can be fitted to almost any accuracy to a formula with enough parameters in it; so such a formula will have spectacular statistical correlation to the data; but unless that formula describes actual physical processes that are the cause of the changes in the data, it will still have no predictive value.
For example, any one of the Tchebychev polynomials Tn(x) can be represented by the parametric equations x = cos(a); y = cos(n*a), exactly within the closed interval -1 <= x <= 1.
But if you try to extrapolate beyond that argument range, the polynomials diverge to +/- infinity, while the parametric form cannot be computed outside those limits.
Trying to fit measured data to some simple mathematical formula, is a well known method for trying to discover the underlying physical laws that relate variables; but it is only going to uncover laws that actually exist to be uncovered. It will not yield a theory of causal relationships, where none exists; and in the chaotic climate system; it is unlikely that there are any of significance to be uncovered.
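The identity in the comment, and its divergence outside [-1, 1], can be checked numerically. This small illustration is mine, not part of the original comment:

```python
import numpy as np

# Inside [-1, 1] the polynomial T_n(x) equals cos(n * arccos(x)),
# but the polynomial form diverges rapidly once x leaves that
# interval, exactly as the comment describes.
def cheb_T(n, x):
    """Evaluate T_n(x) from its polynomial coefficients."""
    return np.polynomial.chebyshev.chebval(x, [0] * n + [1])

n, x = 5, 0.3
poly_form = cheb_T(n, x)
trig_form = np.cos(n * np.arccos(x))    # valid only for |x| <= 1

outside = cheb_T(n, 3.0)                # extrapolation blows up
```

At x = 3 the fifth polynomial already evaluates to 3363, while the parametric form cannot be computed there at all (arccos is undefined), which is the extrapolation hazard the comment is warning about.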

Spector
October 6, 2009 9:30 am

Just for reference: I have found that the following periodic function appears to remove the average annual melt-freeze cycle from the AMSR-E Arctic sea-ice extent data:
x=2*pi()*(date-2005.180407)
A0=10.45548643
A1= 4.06924873
A2= -0.66555191
A3= 0.31945008
A4= -0.03687002
A5= 0.00260179
A6= 0.03819824
A7= -0.04021417
Y=A0+A1*COS(x)+A2*COS(2*x)+A3*COS(3*x)+ … +A7*COS(7*x)
The date is the decimal date in years and Y is ice extent in 10^6 sq km. These values were determined by the MS Excel 2007 Solver, set to minimize the sum of the squared differences between this function and the observed data. Internal gaps in the data had been filled by linear interpolation. The last date was 10/4/2009 (2009.755647).
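Spector's Solver fit can be reproduced as an ordinary least-squares problem. The sketch below uses synthetic data built from a known cosine series, so the recovered coefficients can be checked; it is not a re-fit of the AMSR-E series itself:

```python
import numpy as np

# Build a synthetic series from a known cosine expansion, using the
# phase convention from the comment above.
dates = 2002.5 + np.arange(1500) / 365.25       # decimal years, daily
x = 2 * np.pi * (dates - 2005.180407)           # phase, per the comment
y = 10.45 + 4.07 * np.cos(x) - 0.67 * np.cos(2 * x)

# Design matrix: one column per harmonic, COS(0*x) = 1 giving A0,
# up to COS(7*x) as in the comment.
A = np.column_stack([np.cos(k * x) for k in range(8)])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Subtracting the fitted series de-seasonalises the data.
residual = y - A @ coeffs
```

Minimising the sum of squared differences over the cosine coefficients is exactly what the Excel Solver does; written as a linear least-squares problem it has a closed-form solution and no iteration is needed.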

Gus
October 6, 2009 9:40 am

Kinda looks like – dare I say it – a hockey stick!

Tonyb2
October 6, 2009 10:04 am

George E. Smith
Does Dr Berry have an affiliation he can share with us; so we know where he’s from; no biggie, just curiosity.
George I’m 65 and have retired. I have a first Degree in Chemistry, a PhD in Biological Chemistry (UCL) and an MBA from Warwick University in the UK. I worked in the Pharmaceutical Industry for 30 years finally owning and managing my own company. I spent a significant amount of my time in planning, corporate planning and business development predicting markets and production requirements for pharmaceuticals. Hence my interest in forecasting etc. Old Habits die hard! Hence my interest in these numbers! Does this help?
Tony Berry

Don B
October 6, 2009 10:26 am

These Russians found a 60 year climate cycle in the productivity of Arctic fishing. Could 2007 have been the start of 30 years of growing ice?
http://alexeylyubushin.narod.ru/Climate_Changes_and_Fish_Productivity.pdf?

Tonyb2
October 6, 2009 11:13 am

George E. Smith (07:39:38) :
Anyhow Dr Berry, if you have some insights on what has been happening the last few weeks since the refreeze started; there’s at least one person here interested ; not that I want to distract from your longer term studies you report here.
I can offer nothing except speculation. As far as the long-term trend is concerned, the next point in the rolling average will come at the end of October, which will drop the October 2008 value and replace it with October 2009. Thus if the monthly average for October 2009 is higher, the trend will continue, I think! I would also point out, as others have noted, that there seems to be a short-term cycle superimposed on the long-term trend, which seems most apparent at the minimum values of both the 12-month and 24-month curves. The stalling you're referring to might be connected with this. As an example: if you remember that excellent animation of the Arctic ice on Watts Up, it shows a cyclical ice build-up along the Newfoundland coast in some years. If this is a process semi-independent of the annual cycle, then it might be involved in modulating the long-term trend. I think you perhaps need better mathematical skills than mine to sort out and process this signal!

Neo
October 6, 2009 11:13 am

It might be better to model a cyclic trend such as the Arctic ice extent as a modulated signal on top of a sine wave. Of course, setting the gain, phase angle and common mode value of the underlying sine wave might be a bit tricky.

Neo
October 6, 2009 11:17 am

One would expect to see solar activity affect the gain, but something like AGW should show up in the common mode (offset) value as it should be a relative constant over the entire 12 months.

Solomon Green
October 6, 2009 11:27 am

When I like a piece of work I always try to ascertain the credentials of the author. Unfortunately I could only find two Dr. Tony Berrys who fitted my search.
1) For the past five years, Dr. Tony Berry and his team have been working on a …ways in which we can reduce CO2 emissions and hence address climate change…
AND
2) Berry is at best a shadowy and elusive figure. But he is believed at some time to have been a fanwriter, and as such may have played his part in Martin Tudor’s hoaxes. Why, for example, was it a quantity of fanzines which was found during the desperate search for Bradford’s literary heritage? The Australian critic Peter Nicholls, in his Encyclopaedia of Science Fiction, hints as much: ‘Berry may have been a prolific pseudonymous writer: pseudonyms with which he has been associated include “Rhodri James”, “L Ron Hubbard”, “Jonah Jorm” and an as-yet-unidentified author who may have been called “Ashley”.’
The ‘Berry-Ashley theory’ has many vociferous supporters, but we must tread with care. Others point out that, some time before, the mysterious ‘Ashley Watkins’ had achieved some sort of hold over the publishing magnate, John Jarrold. The exact nature of their arrangement remains obscure, but explanations which feature one of them ‘dressing in women’s clothing’ are, as we shall see, very much wide of the mark. While Nicholls is a widely respected author, the cautious will note that his co-editor, John Clute, appears not to share his adherence (if such it is) to the Berry-Ashley theory:
Tony Berry has been fingered as a Yeltsinite double-agent, a paranoid recluse with a taste for exotic bubblegum, Hannibal the Cannibal and the true leader of the Conservative Party. He is none of these. Berry the metafiction — attractive in some ways though the concept might have been — will not so much as stand up to brief scrutiny by a myopic porcupine. Such a creature might, however, with profit look to the endless, bland, Arctic wasteland of the prose, and see in an instant the obvious truth. Tony Berry is, in the usual literary, if not the strictly literal, sense, a Canadian.
I hope that there is a third.

Austin
October 6, 2009 11:27 am

The sardine and anchovy biomass vs time for the last 2000 years graphs on page 27 of Lyubushin’s paper on fisheries is very interesting.
He looks at catch data from many locations – the Arctic, California, etc and looks at many species.
IMHO it looks like the Alaskan and Siberian salmon runs might be as good a proxy as the Polar Urals.
Interesting read.

LarryOldtimer
October 6, 2009 12:21 pm

Pamela Grey
Regarding an object of mass, if all of the forces acting on it, or that will act on it, are known, then its rate of acceleration and position in the future can be predicted.
It is difficult enough to determine all of the forces acting on an object of mass in the present. It is quite impossible to determine what forces will be acting on it in the future.
The precise prediction of the future is quite impossible.
While it is possible to do statistical calculations, all that can possibly be gained from this exercise is the probability of something happening in the future, and that is tentative at best; the probabilities are generally known better with more past data points.
The whole problem is, of course, probabilities only determine, at best, what would happen in the future, with not a clue as to when they will happen.
If there is any probability that something could occur in the future, then at some time in the future, given enough time, that something will happen.
On the other hand, there can be a large probability that something will happen in the future, and still that something may not happen for a long, long time indeed.
Without knowing, or the ability to know, the precise time something will happen in the future, the knowledge of the probability, however accurate, is of little, if any value in planning with the future in mind.
I have done a good deal of hydraulic design, all based on a given-frequency storm, mostly on the basis of either a 25-year or a 50-year storm frequency. The trouble is that the rainfall intensity-over-time curve for, say, a 50-year storm could be accurate (although I rather doubt it, given the relatively few data that went into developing the curves we engineers use). And while my design could be done well for the given-frequency storm, what I designed and saw constructed could easily be washed completely out the year after it was constructed, or even the same year, even if the last design-frequency storm had happened the year before construction.
What we civil engineers get from these statistically derived rainfall intensity-over-time curves is in reality a “basis for design”. Useful as protection from criticism when the hydraulic installation is washed away a year or two after its construction.
Statistically derived data is useless in attempting to predict the future on any temporal basis.

Tonyb2
October 6, 2009 12:59 pm

Solomon Green
Very amusing! There is a third: see the entry five before yours for my answer to George E. Smith.
Tony Berry

DeWitt Payne
October 6, 2009 1:31 pm

Re: #10 crosspatch,
Picking a convenient year and declaring a new trend is the height of cherry picking. Show me a well characterized statistical test that says the trend has changed at least at the 90% confidence limit and I will be more impressed. I would try some sort of control chart that’s designed for serially autocorrelated data like cusum or exponentially weighted moving average. I doubt there’s enough data yet, though.
FYI, I think you are probably correct, but you can’t prove it yet. It all depends on whether the AMO index continues to decline as expected.
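For readers curious what such a control chart looks like, here is a minimal EWMA sketch on synthetic data with a deliberate mean shift. The smoothing constant and 3-sigma limits are illustrative choices; real ice-extent data would need the limits widened to allow for the serial autocorrelation the comment mentions:

```python
import numpy as np

# EWMA control chart on synthetic unit-variance noise with a
# deliberate mean shift at sample 150.
rng = np.random.default_rng(1)
series = rng.normal(0.0, 1.0, 200)
series[150:] += 3.0                  # shift the mean at sample 150

lam = 0.2                            # EWMA smoothing constant
z = np.empty_like(series)
z[0] = series[0]
for i in range(1, len(series)):
    z[i] = lam * series[i] + (1 - lam) * z[i - 1]

# Asymptotic 3-sigma limits for the EWMA of unit-variance noise.
sigma_z = np.sqrt(lam / (2 - lam))
out_of_control = np.abs(z) > 3 * sigma_z
```

The EWMA damps the raw noise, so a sustained shift pushes the smoothed statistic past the control limits within a few samples of the change, which is the kind of objective trend-change signal the comment asks for.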

George E. Smith
October 6, 2009 2:10 pm

“”” Tonyb2 (10:04:23) :
George E. Smith
Does Dr Berry have an affiliation he can share with us; so we know where he’s from; no biggie, just curiosity.
George I’m 65 and have retired. I have a first Degree in Chemistry, a PhD in Biological Chemistry (UCL) and an MBA from Warwick University in the UK. I worked in the Pharmaceutical Industry for 30 years finally owning and managing my own company. I spent a significant amount of my time in planning, corporate planning and business development predicting markets and production requirements for pharmaceuticals. Hence my interest in forecasting etc. Old Habits die hard! Hence my interest in these numbers! Does this help?
Tony Berry “””
Yes Dr Berry, thanks so much for the short bio; as I said no nefarious motivation; just natural curiosity about Anthony’s community.
I’m looking at those two glitches in the DMI temperature graph and wondering when it will resume the downward plunge, and how that may affect the next few weeks of JAXA ice data. It would be nice if someone had some explanation for those glitches; they do seem to be rather common at this stage, so it would be nice to have some idea what causes those reversals.
As for me I’m almost a decade beyond you, but still full time employed stirring the Physics pot; currently mostly in special Imaging Optics; as in your Optical Mouse.

George E. Smith
October 6, 2009 2:20 pm

“”” LarryOldtimer (12:21:31) :
Pamela Grey
Regarding an object of mass, if all of the forces acting on it, or that will act on it, are known, then its rate of acceleration and position in the future can be predicted. “””
Well according to Heisenberg, it is inherently impossible; no matter what, to even determine the present dynamic state of anything; ie simultaneous knowledge of position and momentum. Any method of increasing the positioning precision, will result in increasing the uncertainty of the momentum; and verse vicea.
So absent exact knowledge of the present, the future is most certainly unpredictable (well, except in a probability sense); and as they say, on average nothing much will happen; it’s in the deviations from average wherein the interesting details lie.

Gary Pearse
October 6, 2009 2:46 pm

Well you can sure see the point where the ice was swept out in summer 2007. Move the curve up that down sloping line and you would have the trend up if this mechanical reduction in ice hadn’t occurred. The 2007 re-freeze after September was very strong (steep).

hotrod
October 6, 2009 2:56 pm

crosspatch (22:39:32) :
… There are at least three different trends visible since 1979. One flat, one steady down with a drastic down during the wind anomaly of 2007, and then a trend up in recovery.
Oh, and so far I expect 2010 to be near 2006 levels.

DeWitt Payne (13:31:04) :

FYI, I think you are probably correct, but you can’t prove it yet. It all depends on whether the AMO index continues to decline as expected.

One intuitive observation that comes to mind, is that when there is an Arctic ice dumping episode like in 2007 where lots of ice gets pushed out into the Atlantic there are really two things going on.
The Arctic ice extent/volume must be declining while that happens; many are assuming that is due to warming rather than to physical transport of the ice.
And not so obviously, the Atlantic must be cooling as all that ice load melts.
Could that sort of ice purging be a significant forcing that switches on the AMO shift from a positive warming trend to a cooling trend?
If you suddenly take all the ice out of your iced tea and dump it in your hot soup, the iced tea will be warmer than you expect and the soup will cool much faster than you expect.
You can’t have one without the other; all that latent heat capacity in the ice has got to suck lots of heat energy out of the Atlantic.
Like an electrical circuit, the Atlantic, if given a big enough shove in the negative direction, would overshoot its median temperature while it absorbs the negative input (melts the ice), and then slowly recover back toward its stable value.
It seems to me that the sudden introduction of all that ice into the N. Atlantic is a plausible cause for the transition from a positive AMO trend to a negative AMO trend.
Larry

J H Folsom
October 6, 2009 3:16 pm

I was just curious. What is the reasoning for using the mean instead of mode for ice extent or area when trying to determine what “normal” ice extent/area is?
With the recent Yamal stories, I’m wondering if there are massive outlier events affecting the average, up or down. What does the mean of the ice extents year over year actually mean?

Pamela Gray
October 6, 2009 7:16 pm

DMI tracings seem to be significantly affected by calculated adjustments made to compensate for the changes in melt pools for satellite derived data. Since they also add on-the-ice buoy temperature sensors (at least the ones that are still operating) to compensate for cloud cover and calculated satellite adjustments, the up and down direction of the temp trace looks like a floating bit of sea ice is carrying the sensors in and out of warmer areas.

Spector
October 6, 2009 9:28 pm

I have recently downloaded the 30-year NSDIC record in the txt files for each month and resorted this data by year and month. After developing a special cosine-series to remove the average annual melt-freeze cycles from this data, I see several minor recovery intervals: 1984-1987, 1990-1994, and 1996-1998.
I think we may have to wait several more years before we can say the previous long-term melting trend has been reversed.

Spector
October 6, 2009 9:31 pm

Minor correction: NSIDC not NSDIC.

Fred Lightfoot
October 7, 2009 1:47 am

Watching the Russian TV channel RT in English, they showed a documentary of about one hour: two trucks with trailers drove in July to the magnetic North Pole and returned. No water, but plenty of broken pack ice. The BBC was notably absent.