Arctic Ice Graphing Lesson: Increasing By 50,000 km² Per Year

By Steven Goddard

[see important addendum added to the end of this article ~ ctm]

[Note: The title and conclusion are wrong due to bias in the start/end points of the graph. The mistake was noted by Steven immediately after publication and is listed below as an addendum. I had never seen the article until after the correction was applied, due to the time difference in AU. My apologies to readers. I’ll leave it up (note the altered title) as an example of what not to do when graphing trends, to illustrate that trends are very often slaves to their endpoints. – Anthony]

JAXA’s Arctic ice measurement record just had its 8th birthday. They have been measuring Arctic sea ice extent since late June 2002.

http://www.ijis.iarc.uaf.edu/en/home/seaice_extent.htm

We normally see year-over-year ice graphs displayed in the format above, with each year overlaid on top of previous years. The graph below simply shows the standard representation of a time series, with the linest() trend.

As you can see, Arctic ice extent has been increasing by nearly 50,000 km² per year. Over the eight-year record, that is an increase in average ice extent of about the size of California. More proof that the Arctic is melting down – as we are constantly reminded. Spreadsheet is here.
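For anyone who wants to check this outside a spreadsheet, here is a minimal Python sketch of the same least-squares fit that linest() performs. It assumes you have saved the JAXA daily extent series to a local CSV; the file and column names are placeholders, not the actual JAXA format.

```python
# Sketch only: fits a straight line to daily extent, like linest().
# "jaxa_extent.csv" and its column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("jaxa_extent.csv", parse_dates=["date"])
days = (df["date"] - df["date"].iloc[0]).dt.days.to_numpy()
slope_per_day, intercept = np.polyfit(days, df["extent_km2"].to_numpy(), 1)
print(f"trend: {slope_per_day * 365.25:,.0f} km^2 per year")
```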

How do we explain this? There has been more ice during winter, paralleling the record winter snow in the Northern Hemisphere. Meanwhile, in the Southern Hemisphere, ice extent is at a record high for the date.

Size matters, but I’m guessing that Nobel Prize winner Al Gore didn’t share this information with his masseuse.

Addendum:

I realized after publication that this analysis is biased by the time of year at which the eighth anniversary occurred. While the linest() calculation uses eight complete cycles, it would produce different slopes depending on the date of the anniversary. For instance, had the anniversary occurred in March, the trend line would be less steep, and perhaps negative.

This is always a problem when graphing any cyclical trend, but the short length of the record (8 years) makes it more problematic than it would be in a 30-year record.
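The effect is easy to demonstrate with synthetic data. The sketch below (Python, no real ice data) fits a straight line to a pure annual sine wave with zero underlying trend, over exactly eight complete cycles; the fitted slope still changes size and sign depending on the phase at which the window starts.

```python
# Endpoint-bias demo: a linear fit to a trendless seasonal cycle.
import numpy as np

days = np.arange(8 * 365)  # eight complete "years" of daily values

# The phase offset plays the role of the anniversary date (June vs. March).
for offset in (0, 91, 182, 274):
    y = np.sin(2 * np.pi * (days + offset) / 365)  # pure cycle, no trend
    slope = np.polyfit(days, y, 1)[0] * 365        # fitted "trend" per year
    print(f"phase offset {offset:3d} days: trend = {slope:+.4f} per year")
```

Scaled by the seasonal amplitude of the real series (several million km²), spurious slopes of this size are roughly comparable to the 50,000 km² per year in the original title.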

174 Comments
Scott
July 3, 2010 1:25 pm

899 says:
July 3, 2010 at 12:48 pm

Taking that one step further, one might also surmise that the journals themselves inspire the matter. Do they not require a constant stream of submissions to their publishers in order to survive?
What would happen, say, if they faced a paucity of submissions such that they had nothing to print? Not that such would ever happen, you understand, but still …

While I suspect you are correct, I don’t want to go leaping to conclusions. 🙂 However, your comment did remind me of a bit of evidence supporting your conclusion. Having published on the development of atmospheric analysis instrumentation in the past, I was sent an automated e-mail by the journal Analytical Chemistry some months ago. The e-mail was a call for papers on “Atmospheric Analysis as Related to Climate Change”, and accepted papers will be published in a focused issue October 1, 2010.
So what’s wrong with that? Well, Analytical Chemistry is a journal that should focus on making novel and unbiased measurements. But if the focus of the measurement is only to measure climate change/global warming, it is inherently biased. Now I’m not one to shout that science should be 100% unbiased, because all science is biased. However, when developing measurement techniques, only looking for warming (or anything else in one direction) is just flat-out the wrong way to do things. And here the journal is encouraging and promoting this behavior! Now, if they had another issue/call for papers asking for “Atmospheric Analysis as Related to Lack of Climate Change”, then maybe it’d be balanced out. 😉
Just my two cents,
-Scott

Scott
July 3, 2010 1:30 pm

stevengoddard says:
July 3, 2010 at 12:35 pm
Sadly, Steve, you hit the nail on the head. As a student in a group with tons of contradictory work, I’d say it’s almost bipolar in nature. I’m currently looking over a manuscript in preparation from my group that is just flat-out awful and can’t be reproduced. I could hire a junior undergraduate engineer and they could do better in less than a month; someone already familiar with the topic would do better in less than a week.
However, firing graduate students looks bad for the PI, the group, and the school…so their crap gets published and the students are pushed out the door to help the appearance of things. Ditto for journal articles. I’m not saying it’s all like that, but there’s enough of it out there to cause serious problems.
-Scott

Les Johnson
July 3, 2010 2:02 pm

stevengoddard: your
Linear trends through cyclical data forms the entire basis of the IPCC’s raison d’être
I see that.
I generated a sine curve of 200 points, with 20 points from peak to peak. I added a random number between -1 and 1 to all points. At 10 “years”, the slope can be positive or negative, usually between -1 and 1, but as high as ±6/century.
The average “temperature” was usually between -0.01 and +0.01, but the slope was usually much greater.
Oddly, some results showed a negative average “temperature” over the entire record, but had high positive slopes (one of nearly +8/century).
A 5 “year” record gave greater variation, not surprisingly.
I went peak-to-peak, or trough-to-trough, to get rid of start/end bias.
On a fifty-year record, I could get up to ±0.5 deg/century.
Conclusion? A little bit of random variation on a cycle, and I can get significant amounts of either “warming” or “cooling”. Or increases or decreases in “ice”. The longer the data period, the less the variation.
This is without adding in a “random walk” or embedding other cycles.
Interesting.
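The experiment Les describes is easy to reproduce. A rough Python sketch of it is below; the noise and window placement are arbitrary, so exact numbers will differ run to run.

```python
# Reproduces the described setup: a sine with 20 points per cycle plus
# uniform noise in [-1, 1], then least-squares slopes over short windows.
import numpy as np

rng = np.random.default_rng()
n, period = 200, 20
t = np.arange(n)
series = np.sin(2 * np.pi * t / period) + rng.uniform(-1.0, 1.0, n)

for cycles in (5, 10):                  # window length in "years" (cycles)
    w = cycles * period
    slopes = [np.polyfit(t[:w], series[i:i + w], 1)[0] * period
              for i in range(n - w + 1)]
    print(f"{cycles}-'year' windows: slopes from "
          f"{min(slopes):+.3f} to {max(slopes):+.3f} per 'year'")
```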

Les Johnson
July 3, 2010 2:07 pm

Scott: Pretty well spot on.
I saw a request from Environment Canada asking for proposals to study the negative effects of global warming on wetlands.
Nothing like telling a researcher what conclusion he needs to find. It’s not much of a stretch to read between the lines and suggest that, with the conclusion already drawn, it will take the most extreme results supporting the pre-ordained conclusion to get the funding.

899
July 3, 2010 2:19 pm

Scott says:
July 3, 2010 at 1:25 pm
While I suspect you are correct, I don’t want to go leaping to conclusions. 🙂 However, your comment did remind me of a bit of evidence supporting your conclusion. Having published on the development of atmospheric analysis instrumentation in the past, I was sent an automated e-mail by the journal Analytical Chemistry some months ago. The e-mail was a call for papers on “Atmospheric Analysis as Related to Climate Change”, and accepted papers will be published in a focused issue October 1, 2010.
So what’s wrong with that? Well, Analytical Chemistry is a journal that should focus on making novel and unbiased measurements. But if the focus of the measurement is only to measure climate change/global warming, it is inherently biased. Now I’m not one to shout that science should be 100% unbiased, because all science is biased. However, when developing measurement techniques, only looking for warming (or anything else in one direction) is just flat-out the wrong way to do things. And here the journal is encouraging and promoting this behavior! Now, if they had another issue/call for papers asking for “Atmospheric Analysis as Related to Lack of Climate Change”, then maybe it’d be balanced out. 😉

That’s rather interesting.
My take on science has been that it’s essentially: Show and tell, report on all of the facts, and let the rest come to a conclusion, unless the one reporting on the matter may decidedly make an indisputable assertion regarding something, given all which is known.
A sort of ‘I report, you decide’ kind of thing.
With so-called ‘CAGW,’ I get the very disturbing impression that only ~some~ of the data are being revealed, in order that a false conclusion will be the only conclusion, and ‘ClimateGate’ was a sure revelation of just that.
The journal you mention above was likely engaging in the ‘garnering of interest,’ much as the mass media TEE VEE piques the interest of those partaking of it, with the usual ‘blah-blah-blah, film at eleven’ bait.
Anything to keep the advertisers happy …

July 3, 2010 2:49 pm

Two hours to publish, one hour to correct a false claim, on the same day another 1,200 km² piece broke off the ice barrier blocking the Northwest Passage.
http://ice-map.appspot.com/?map=Arc&sat=ter&lvl=8&lat=74.594791&lon=-106.947399&yir=2010&day=183
All while staring at 20 km² close to Barrow. I wish I could have a look at the list of priorities. What will be the headline if 2010 sets a record? Something like: “Pythagoras was wrong”?

July 3, 2010 3:12 pm

Bob Tisdale (July 2, 4:52) made an important point that got lost: you can measure anomalies right off (albeit within a tiny time frame) to get rid of the sine-like curve, which is just a distraction if you want to see trends clearly. And his chart shows a decline overall – albeit over a time frame too short to be of real significance.
I love Sarah Chan’s “definitions”; they bear repeating:

SCIENCE
If you’re not making mistakes, you’re doing it wrong.
If you’re not correcting your mistakes, you’re doing it really wrong.
If you’re not accepting that you’ve made mistakes, you’re not doing it at all.

July 3, 2010 3:41 pm

noiv
It is extremely unusual for ice to break up in July. One in a million probability.

SteveS
July 3, 2010 6:23 pm

@stevengoddard
https://spreadsheets.google.com/pub?key=0AnKz9p_7fMvBdEdDTjFHWWtaSTB6NXBQQjVuNVQzWHc&hl=en&single=true&gid=0&output=html
Why is the fifth column of your spreadsheet now labelled Area? Shouldn’t it be Extent? Sea ice area and extent are two different things.
Also, don’t you think it was just a little sneaky changing it without responding to my original post?

Ed
July 3, 2010 8:07 pm

Time for Steve and Anthony to leave the science to experts and stop spreading misinformation. Face it, you were completely and entirely incorrect in your claims earlier this year about Arctic sea ice. Further manipulation of the data is not going to prove you any more correct.
[Reply: What superb logic! You should be Obama’s science Czar in place of Holdren. Yes, when mistakes are detected, shut down everything! I’m sure you argued for the closure of NASA after the Challenger shuttle tragedy in 1986 (it took Richard Feynman to point out their mistake; they did not admit it themselves before that) and when they lost the Mars probe due to a simple error in metric-English conversion. Yes, let’s shut down everything when a simple mistake is discovered and leave it up to experts like NASA GISS, who needed Steve McIntyre to point out their data errors for them. Uh, twice. If you’d like to type up a letter arguing for the closure of NASA due to errors, I’m sure Anthony would be happy to print it here. ~mod]

Les Johnson
July 3, 2010 9:04 pm

Ed: your
Face it, you were completely and entirely incorrect in your claims earlier this year about Arctic sea ice.
Really? So you know what the ice extent will be in mid-September? Do you want to make a side bet on the mid-September extent vs. 2007?

Larry Hulden
July 4, 2010 12:00 am

Steven Goddard!
According to the graph in:
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg
the global sea ice area looks to me to have a fairly horizontal trend until about 2003, when there is a decline until 2007, but after that an upward trend.
Is this a correct conclusion?

July 4, 2010 6:13 am

stevengoddard says (responding to Les Johnson):
July 3, 2010 at 11:36 am
Linear trends through cyclical data forms the entire basis of the IPCC’s raison d’être

This gets to the essence of the whole debate—doesn’t it? And it’s simple enough that even the ‘journalists’ of the mainstream media (who used to be called reporters, back when digging out facts was the aim, not regurgitating propaganda) can understand it.
This statement should be amplified with a few easy-to-understand examples, and circulated to all the media and the politicians. Maybe a few will start to scratch their heads and say, “Hey, maybe we’ve been hoodwinked!”
/Mr Lynn

July 4, 2010 6:48 am

Les Johnson (July 3, 2010 at 10:00 am) says:
5. On spotting errors in your work. At least for me, I can better spot errors after publishing or printing than beforehand on the monitor. On important papers, I routinely print a copy and proof that. I have no idea why the difference. . .

This is an important rule of thumb for traditional editors and publishers: print it out, and have a few fresh pairs of eyes go over the manuscript. The World Wide Web unfortunately encourages the slapdash approach of throwing the words down on the screen and clicking a button which instantly broadcasts the article to the world.
It is admirable that Anthony and the moderators encourage quick correction of errors; still, it is arguable that a 24-hour delay in ‘publishing’ lead posts, allowing time for one or two colleagues to read them, and for the author to polish things up a bit, would only improve this excellent site. It might even help avoid the more mundane spelling and grammatical errors that crop up too frequently. “Never hurry and never worry,” as Charlotte said (E. B. White, Charlotte’s Web).
/Mr Lynn
[Reply: You have no idea how many spelling/grammar errors are corrected every day. But that number is a small fraction of the total, as you can see. Sisyphus would understand. ~dbs, mod.]

July 4, 2010 8:29 am

It’s been a few days since the last prediction post, and we are reaching the point where summer ice extent can be predicted, i.e. at 8e6 km².
Looking at the slope of the curve, it is a fairly typical, average slope. I think we can expect the extent to go down to just above 5e6 km², like it did in 2009.

JSmith
July 4, 2010 10:56 am

To clear up any timeline issues: he sent me the email with the correction long before he could have seen a single comment, as none had yet been approved. ~ ctm
Timestamps show when you made comments, not when they were approved. Steve had emailed me before any comments were approved, before he could see any of them. I approved the backlog of comments first, then edited the original post when I sat down at the desk. His email was sent at 2:19 pm Pacific. ~ ctm
Could you publish the email, please?

tonyb
Editor
July 4, 2010 2:00 pm

Mr Lynn said:
‘The World Wide Web unfortunately encourages the slapdash approach of throwing the words down on the screen and clicking a button which instantly broadcasts the article to the world.’
Wise words. I find it very difficult to see mistakes on a monitor that I can spot instantly in print. As regards slapdash, I try to make it a rule with anything important that, after composing an email, I leave it alone for 12 hours, THEN see if it is fit to send.
tonyb

Scott
July 4, 2010 2:51 pm

899 says:
July 3, 2010 at 2:19 pm

With so-called ‘CAGW,’ I get the very disturbing impression that only ~some~ of the data are being revealed, in order that a false conclusion will be the only conclusion, and ‘ClimateGate’ was a sure revelation of just that.

Some people can argue/believe this, and that’s a fine conclusion by me. What I’d argue, however, isn’t that this is being done intentionally by many scientists (though fairly definitely by a few), but that it happens via a natural biasing of the way “science” is currently done in first-world countries. As pointed out by Willis Eschenbach in a comment on this site several months ago, if you have ~12 studies looking for warming induced by humans, there’s roughly a 50% chance that one will say “yes” at the 95% confidence level even when no warming is present (a “spurious” measurement). From those numbers, I argue that if you have 1000 studies going (anyone know what an actual estimate might be?), then you’ll have roughly 50 saying that man causes global warming even if he doesn’t! Combine that with factors like biased homogenization, biased authors, deletion of “outlier” data points in one direction, the urban heat island effect, and natural cycles (PDO etc.), and you can see how easily a large proportion of the science can be duped. And once the funding starts depending on this, the bias grows immensely!
-Scott
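The arithmetic behind Scott’s numbers is a one-liner, under the (strong) simplifying assumption of independent studies and a 5% false-positive rate:

```python
p = 0.05                   # false-positive rate at the 95% confidence level
print(1 - (1 - p) ** 12)   # ~0.46: chance at least 1 of 12 null studies "finds" warming
print(1000 * p)            # 50.0: expected spurious positives among 1000 null studies
```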

Skeptic Student
July 4, 2010 3:05 pm

Forgive me for skipping the second half of the comments and potentially repeating what’s already been said.
I see people saying that it’s wrong to fit a linear trend to cyclical data. Wouldn’t a better lesson be to pay attention to R^2 and error terms overall? Picking up the data right now and using the whole set instead of whole cycles, and measuring June 1, 2002 as day 1 (I added the day numbers before removing the days lacking data), I get a trend of 63.19t + 10380683, but R^2 = 0.0003, which is a bit of a red flag. The standard error of my slope, conveniently provided by my software and probably any software, is 65.92. I don’t know what peer-reviewed literature uses, but I believe we used 95% confidence intervals in my physics lab last year.
I also chopped the data to include only June 21 to June 21, but I only got a positive trend of 75.98, standard error 66.43. I see an explicit figure of 142.58 from Les Johnson, which reproduces the 50,000 km² a year increase initially stated, so I assume this is the number other people are getting as well. By replacing the -9999 data with a guesstimate average of 7,500,000 I can roughly reproduce this number. But I’m assuming, from the lack of sudden jumps in the graph of the article, that some sort of interpolation was actually used to get numbers for the missing days? I just left these days out of my analysis completely. Does anyone know exactly where our methods differ?
As a final addendum, without too much thought put into it, Sept 22 to Sept 22 data gives R^2 = 0.008, a slope of -366.77 and associated standard error of 80.15; Mar 22 to Mar 22 data gives R^2 = 0.0007, a slope of -126.63, and standard error of 101.02. I am not a statistician, or even a statistics student.
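For anyone trying to reconcile these numbers, scipy will report the slope, its standard error, and R² in one call. A sketch, again assuming the JAXA series has been saved to a local CSV with hypothetical file and column names:

```python
# Sketch: trend, standard error, and R^2 for a day-numbered extent series.
import pandas as pd
from scipy.stats import linregress

df = pd.read_csv("jaxa_extent.csv", parse_dates=["date"]).dropna()
day = (df["date"] - pd.Timestamp("2002-06-01")).dt.days
fit = linregress(day, df["extent_km2"])
print(f"slope = {fit.slope:.2f} +/- {fit.stderr:.2f} km^2/day, "
      f"R^2 = {fit.rvalue ** 2:.4f}")
```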

Skeptic Student
July 4, 2010 3:48 pm

guesstimate average of 7,500,000
Haha, I don’t know why I thought this was an appropriate average. I clearly didn’t scroll through a whole year before guessing. But that just makes me more curious why my calculation is different.

Skeptic Student
July 4, 2010 3:58 pm

Okay, I think I should just think more. Looking at the linked spreadsheet instead of making guesses, I can see that missing days just have the data copied from the previous day.
Yeah.

RomanM
July 4, 2010 4:11 pm

Skeptic student

Wouldn’t a better lesson be to pay attention to R^2 and error terms overall?

I get a trend of 63.19t + 10380683, but R^2 = 0.0003, which is a bit of a red flag.

It is also a red herring, and wrong. What you are not taking into account is that most of what your program is ascribing to random variation is, in fact, seasonal variation, so the R^2 is meaningless.
Many people have made suggestions in this thread as to what is necessary to properly fit a trend to this data (use anomalies, use only complete cycles, go from highs to highs or lows to lows, use a sine or cosine fit, etc.), but each of these approaches can have its own problems.
In a post a while back on fitting trends to temperature anomalies, I pointed out the pitfalls and a solution (an analysis of covariance approach) which will also work on any cyclic data that has equally spaced measurements within each cycle. There is no requirement to have full cycles or to start and/or stop at any particular point in the cycle – the answer is the same.
In this particular case, there is a decreasing trend in the data of approximately 67,000 km² per year over the JAXA data time period. If the year 2007 is omitted, this reduces to a decreasing rate of about 52,000 km² per year.
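A sketch of the analysis-of-covariance idea in Python (this is not RomanM’s actual code, and the file and column names are hypothetical): fit one intercept per day of year plus a single shared linear trend, so the trend estimate no longer depends on where in the cycle the record starts or stops.

```python
# ANCOVA-style fit: day-of-year intercepts + one common linear trend.
import numpy as np
import pandas as pd

df = pd.read_csv("jaxa_extent.csv", parse_dates=["date"]).dropna()
years = (df["date"] - df["date"].iloc[0]).dt.days.to_numpy() / 365.25
doy = df["date"].dt.dayofyear.to_numpy()

# Design matrix: one dummy column per day of year, plus the trend column.
X = np.column_stack([(doy == d).astype(float) for d in np.unique(doy)] + [years])
coef, *_ = np.linalg.lstsq(X, df["extent_km2"].to_numpy(), rcond=None)
print(f"common trend: {coef[-1]:,.0f} km^2 per year")
```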

July 6, 2010 6:26 am


I just found the cold spot. It’s April 29, 2002-2010. Trend: +73,307 km²/year. This is not a trough-to-peak trend; it’s the trend of the same date throughout 2002-2010. That even beats your 50,000 km²/year.
Now comes the BIG early summer melt. Yesterday was July 5, 2010. Trend: -132,977 km²/year.
Sept. 24 is Sea Ice Minimum 2007 day. Trend: -185,195 km²/year.
Oct. 12 is the all-time negative anomaly day of 2007. Trend: -207,796 km²/year.
2005 was the warmest year over the Arctic according to NOAA, and emc.ncep.noaa sst June 2007 and emc.ncep.noaa sst July 2007 looked much worse than emc.ncep.noaa sst June 2010. It looks like a lot of the 3-year ice grown since 2007 will survive. It takes a lot to reverse such a huge Arctic machine, with the volume of ice still at an all-time low. The cooling Pacific Ocean will very likely do the trick. Time for Arctic cooling, as happened twice in the 20th century. I don’t think AOGCMs can hindcast that.

wayne
July 6, 2010 9:22 pm

Having become so enlightened after de-cycling this sine-form sea ice extent data, I’m going to take a jab at predicting the minimum. I think Steve made a very early call at 6 million, but I am going to better that with 6,120,000 km² for the minimum, using IARC data at http://www.ijis.iarc.uaf.edu/en/home/seaice_extent.htm. This data just seems to scream that this is near the number, though it sure doesn’t look like that right now from the chart displayed at that site.
If you’ve never removed the seasonal sine curve from that data, you ought to try it; it’s a vastly better view of what’s going on up north.
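One simple way to do that de-cycling, sketched in Python (a single sinusoid is a crude model of the real seasonal shape, and the file and column names are placeholders, but it shows the idea):

```python
# Fit mean + trend + annual sine/cosine pair (the pair absorbs the phase),
# then subtract only the seasonal part to see what's left.
import numpy as np
import pandas as pd

df = pd.read_csv("jaxa_extent.csv", parse_dates=["date"]).dropna()
t = (df["date"] - df["date"].iloc[0]).dt.days.to_numpy().astype(float)
w = 2 * np.pi / 365.25

X = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
coef, *_ = np.linalg.lstsq(X, df["extent_km2"].to_numpy(), rcond=None)
deseasonalized = df["extent_km2"].to_numpy() - X[:, 2:] @ coef[2:]
print(deseasonalized[:5])  # trend-plus-residual series, seasonal cycle removed
```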
