In the prior thread I raised the question of why there was a large downward jump in sea ice extent on the graph presented on NSIDC's Arctic Sea Ice News page. The image below was the reason; dozens of people called my attention to it in emails and comments overnight, because in the space of a weekend, a million-plus square kilometers of Arctic sea ice went missing. Note the blue line.
Click for larger image
When I checked NSIDC's web site this morning, about 8:30 AM PST (9:30 AM MST in Boulder, where NSIDC is located), the image was still up. Half an hour later it was still there. I checked all around the NSIDC web site for any notice, including the links they provide for data issues.
Learn about update delays, which occasionally occur in near-real-time data. Read about the data.
Finding nothing, and knowing that it was now 10 AM in Boulder, which should have been plenty of time to post some sort of notice, I decided to write a quick post about it, which was published at 9:10 AM PST (10:10 AM MST), and then drove to work.
The corrected image (with the million square kilometers of sea ice restored) appeared on the NSIDC web site just shy of 3 hours later, about noon PST or 1 PM MST.
Click for larger image
About the same time this comment was posted on WUWT by NSIDC’s chief research scientist, Dr. Walt Meier:
Anthony,
We’re looking into it. For the moment, we’ve removed the data from the timeseries plot.
You need to remember that this is near real-time data and there can be data dropouts and bad data due to satellite issues. While the processing is automatic, the QC is partly manual. Thus errors do happen from time to time and one shouldn’t draw any dramatic conclusions from recent data.
I’m not sure why you think things like this are worth blogging about. Data is not perfect, especially near real-time data. That’s not news.
Walt Meier
Research Scientist
NSIDC
ps – FYI, the JAXA data is from a different sensor, so it is not consistent with our data, but it provides a good independent check. If the JAXA data does not show a dramatic change while the NSIDC data does (or vice versa), then it’s likely an issue of missing data or bad data.
First let me say that I have quite a bit of respect for Dr. Meier. He has previously been quite accessible and gracious in providing answers, and even a guest post here. But I was a bit puzzled by his statement, "I'm not sure why you think things like this are worth blogging about…. That's not news."
First, let us consider a recent event. The BBC ran a really badly researched video report just a couple of days ago in which the reporter obviously didn't know the difference between positive and negative feedbacks in the climate. I wrote about it. The video is now gone. Now I ask this question: if nobody had spoken up about these things, would the video still be there misinforming everyone? Probably.
The point I'm making here is that in my experience, most reporters know so little about science that they usually can't tell the difference between real and erroneous science. Most reporters don't have that background. I say this from experience: having worked in TV news for 25 years, I was always the "go-to guy" for questions about science and engineering that the reporters couldn't figure out. And it wasn't just at my station that this happened; a meteorologist friend of mine reported the same thing at his station in the San Francisco Bay Area. I vividly remember one week when he was on vacation and I saw a news report about a plane that crashed just minutes after doing a low-level run over the airfield as part of a show. The reporter had videotaped the plane's run, and then used that video to proudly demonstrate, "as you can see, just minutes before the crash, the propellers on the plane were turning very slowly."
The reporter didn't understand how a video camera scanning at 30 frames per second can create a beat frequency that gives the impression of slowly turning propellers that were actually running at about 3,000 RPM, and there was nobody there to tell her otherwise. She made an honest mistake, but her training didn't even raise a question in her mind.
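For readers who want to check the arithmetic, here is a rough sketch of that aliasing effect in Python. The 30 fps frame rate and the two-blade propeller are illustrative assumptions on my part, and real footage also involves shutter effects this ignores:

```python
import math

def apparent_rpm(true_rpm, blades=2, fps=30.0):
    """Apparent rotation rate of a propeller as sampled by a video camera.

    A propeller with N identical blades looks the same every 1/N turn,
    so the camera effectively sees a signal at (true_rpm / 60) * N
    cycles per second. Sampling at `fps` frames per second folds that
    frequency into the range [-fps/2, +fps/2] (classic aliasing).
    Returns the aliased rate converted back to RPM; a negative value
    means the prop appears to spin backwards.
    """
    cycles_per_sec = true_rpm / 60.0 * blades
    alias = math.fmod(cycles_per_sec, fps)   # fold into one sample period
    if alias > fps / 2.0:
        alias -= fps                          # map to nearest apparent rate
    elif alias < -fps / 2.0:
        alias += fps
    return alias * 60.0 / blades

# A two-blade prop at 3000 RPM passes 100 identical positions per second;
# at 30 fps that aliases to 10 cycles/s, an apparent 300 RPM, which looks
# like a slow crawl compared with the true speed.
print(apparent_rpm(3000))   # 300.0
# At 2700 RPM the blade-passing rate is an exact multiple of 30 fps,
# so the prop can appear completely stopped.
print(apparent_rpm(2700))   # 0.0
```

The same folding is why wagon wheels appear to roll backwards in old films.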
So when I see something obviously wrong, such as a dramatic drop in sea ice on a graph presented for public consumption, I think about a reporter (print, web, or video, take your pick) somewhere in the world who may be assigned to do a story about sea ice today and does an Internet search, landing on NSIDC's web site and then concluding in the story, "and as you can see in this graph, Arctic sea ice has gone through a dramatic drop just in the last few days, losing over a million square kilometers."
Thinking about Walt's statement, "That's not news": if the NSIDC graph had been picked up by a major media outlet today, would it be news then?
I understand about automation, about data dropouts, and about processing errors. I run 50 servers myself and produce all sorts of automated graphics output, some of which you can see in the right sidebar. These are used by TV stations, cable channels, and radio/newspaper outlets in the USA for web and on-air. While those graphics are there on WUWT for my readers, I also have an ulterior motive in quality control: I can keep an eye on the output while I'm blogging. When data is presented for public consumption, in a venue where 24-hour news is the norm, you can't simply let computers post things without regular quality-control checking. The more eyes the better.
At the very least, a note next to NSIDC's "Learn about update delays" link, explaining how glitches in satellite data or processing might generate an erroneous result, might be in order. Also worth considering: adding a date/time stamp to the image so it can be properly referenced in the context of time. This is standard operating procedure in many places; why not at NSIDC?
NSIDC and other organizations need to realize that the interest in what they produce has been huge as of late. In NSIDC’s case, they have been promoted from relative obscurity to front page news by the recent unfortunate statements of an NSIDC employee, Dr. Mark Serreze, to the media, that have received wide coverage.
As commenter “just want truth” wrote in the previous thread on NSIDC:
Last year Mark Serreze, of the NSIDC (you may know him), said North Pole ice could be gone in the summer of 2008. He said then “The set-up for this summer is disturbing”. This, of course, was broadcast in all news outlets around the world. Everyone on both sides of the global warming debate was watching Arctic ice totals last summer to see what would really happen. You may have noticed hits on the NSIDC web site were high last summer.
Now Mark Serreze is saying North Pole ice is in a “death spiral”.
You can be certain that Arctic ice data will be scrutinized because of Al Gore and Mark Serreze. A line has been drawn by both. Both have placed it clearly on the radar screen. This is why NSIDC data is worth blogging about–especially since Mark Serreze is employed at the NSIDC.
Mark Serreze 2008 North Pole ice free :
http://abcnews.go.com/Technology/Story?id=4728737&page=1
and
http://www.youtube.com/watch?v=S6e3e4VzwJI
Mark Serreze North Pole ice in “death spiral” :
http://www.youtube.com/watch?v=HW9lX8evwIw
and
http://www.nypost.com/seven/08282008/news/worldnews/arctic_ice_in_death_spiral_126443.htm
Given the sort of attention that has been heaped on NSIDC, I think blogging about errors that have gone unnoticed and uncorrected by 10AM on a Monday morning isn’t an unreasonable thing to do.
I also think that reining in loose cannons that can do some terrible damage in the media is a good way to maintain scientific credibility for an organization, especially when predictions like “ice free north pole” don’t come true.
I have no quarrel with Dr. Meier; as I've said, he's been the utmost professional in my dealings with him. But I do have a quarrel with an organization that allows such claims to be broadcast, all the while producing a data source that is now regularly scrutinized by the public and the media for the slightest changes. It's a slippery slope.


Oops. “Their” means “NSIDC’s.”
@ pft (13:23:47)
I have already addressed some of the points you raise in a previous comment; you might have missed that one.
You are correct in this, however NSIDC is not guilty of this sin. What they make available is preliminary data, the final, research grade data does undergo more scrutiny. The current information they have put up is a good read.
http://www.nsidc.org/arcticseaicenews/
I especially agree with their assessment:
The quality control of data used in climate science seems to be a concern.
I'm not sure whether it is so much a problem of quality control as of exaggerated expectations about quality. Look at how each photo of a grade 5 weather station is greeted with cheers and ridicule. On the other hand, how much attention is given to the preliminary results of John V.'s opentemp, using rural grade 1 and 2 stations? The minute he published his findings on ClimateAudit, someone commented that we must not draw conclusions from the quality of US sites regarding the quality of non-US sites. We are now creeping towards the 75% threshold Anthony has set as a starting point for looking into an analysis. How long, do you think, will the quality control take to come up with a temperature reconstruction once the 75% threshold is reached?
It appears that the problem with the sensor has been going on for at least two months. I believe that many have felt something was not quite right with the sea ice growth, considering the very cold temperatures this winter. The huge drop was more like the straw that broke the camel's back. Was it worth blogging about? Yes, and this new development is probably worth at least another blog post and a big writeup in the MSM. Why didn't the scientists at NSIDC have the same feeling that so many non-scientists did, that something wasn't right with the sea ice graph? Shouldn't this have been caught much earlier, considering the known divergence with other sensors? Was it known that these old sensors consistently gave slightly lower readings as they aged? Could this thirteen-year-old sensor also have given problems that weren't caught over the last several years? Does NSIDC approve of the way that CT has been using their data? Does Dr. Meier even realize that the Comparison Product at CT is not using the sea ice masks on the older images? Doesn't Dr. Chapman realize that NSIDC has directed all users of the data to update their products with the newest sea ice masks? Do the problems at NSIDC still appear to be because it was a volunteer effort? Are they working on the daily updates during lunch breaks and on Saturdays? At least it does not appear that we are getting some type of excuse, and I hope we can see improvements soon.
Mike Bryant
PS What about the “string of pearls” at CT and the many, many odd things that have been reported and left to stand?
Re: Frederick Michael (15:14:19) :
“Their response is totally professional.”
I mostly agree with you on that. It must have been embarrassing for them to publish their error and they should be commended for it.
One of my favorite Richard Feynman quotes, which I posted a day or two ago on another site (actually a paraphrase, because after I heard it I couldn't find it in his popular literature, and am reciting it from memory), was: "When a scientist discovers that he has made an error in his work, he should not admit it, he should proclaim it!"
The question now is how they will correct this data in their final analysis. I still believe their rationale for continuing to use the flawed SSM/I data is a bit of a stretch.
Criminy. Given today's admission of systematic and growing failure over a two-month period, one hopes that NSIDC now has a better appreciation of the value of being publicly prodded to look a situation over.
And from their explanation, I’m a little unclear. Are they talking about an additional 500k square kilometers *in addition* to the correction they originally made?
Frederick Michael (15:14:19) :
“Still, I favor their method of posting raw data in real-time. If some nut chooses to use it inappropriately, that’s their fault. You can’t prevent lying by eliminating facts. Hiding the raw data is far too common and wrong.”
They don't publish "raw" data. This is an automated process by which raw data that you or I couldn't tell from cottage cheese goes into a program algorithm, is processed, and is uploaded to the website. And the processed data we see is what will be archived; it is only "preliminary" in the sense that NSIDC claims to do "rigorous QC" once or twice a year on the data, apparently to "catch" any errors that may have occurred, in addition to "adjusting" data due to several variables such as land changes. I doubt any changes made are minor, but IMO the catching of errors due to things such as sensor failure, if not detectable automatically, should be performed on a regular basis instead of waiting to "fix" error-ridden data up to a year old. And indeed we see this as being the case now. Yet if it hadn't been brought to their attention, would it have been left, and if it had, how could it possibly have been quality controlled six months or so from now? Like I said before, just a quick eyeball on the update of a couple of days ago, with the line going straight down, should have been a huge red flag *before* it was uploaded to the website.
Apparently some people don’t understand what automated means. If set up properly, the automated process would generate the graphic and upload it as well. If it is automated, no one looks at it then uploads it.
aurbo (15:59:04) :
Re: Frederick Michael (15:14:19) :
“Their response is totally professional.”
I mostly agree with you on that. It must have been embarrassing for them to publish their error and they should be commended for it.
The question now is how they will correct this data in their final analysis. I still believe their rationale for continuing to use the flawed SSM/I data is a bit of a stretch.
As I pointed out before, CT also uses the SSM/I data for their comparisons and archiving, for exactly the same reason of continuity, but the image they show on their front page of the Arctic is the high-res AMSR-E.
They have also posted about the problems with the imager:
“February 17, 2009 – The SSMI sensor seems to be acting up and dropping data swaths from time to time in recent days. Missing swaths will appear on these images as a missing data in the southern latitudes. If this persists for more than a few weeks, we will start to fill in these missing data swaths with the ice concentration from the previous day. Note – these missing swaths do not affect the timeseries or any other plots on the Cryosphere Today as they are comprised of moving averages of at least three days.”
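The three-day moving average CT describes is easy to reproduce. A minimal sketch in Python; the window length matches their stated minimum, but the sample values are illustrative, not real extent data:

```python
def moving_average(values, window=3):
    """Trailing moving average: each output point averages the current
    value with the (window - 1) values before it, so a single bad or
    missing-swath day is diluted rather than producing a sharp spike."""
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

# A one-day dropout (13.0 in a run of ~14.3 million km^2) barely dents
# the averaged series, which is why CT's timeseries plots are unaffected.
daily = [14.2, 14.3, 14.3, 13.0, 14.4, 14.4]
print(moving_average(daily))
```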
jeez (16:40:14) :
Apparently some people don’t understand what automated means. If set up properly, the automated process would generate the graphic and upload it as well. If it is automated, no one looks at it then uploads it.
Especially over the long holiday weekend!
jeez (16:40:14) :
“Apparently some people don’t understand what automated means. If set up properly, the automated process would generate the graphic and upload it as well. If it is automated, no one looks at it then uploads it.”
If it is true that the process is completely automatic, then shame and embarrassment on them. They do apparently look at the data after it is uploaded (see below), and they are aware of potential problems such as "sensor drift" that can cause significant errors. Why not take a quick peek before upload? It's only once (in the latest update) in three days.
From NSIDC website yesterday:
“Also of note is that from January 15 to 26, ice extent saw essentially no increase; an unusual wind pattern appears to have been the cause.”
Glenn, what part of near real time, preliminary, or subject to revision, don’t you understand?
I agree that it's OK to blog about this error, and at the same time I feel for Dr. Meier, who has been nothing but forthcoming. But everyone keeps saying "someone should have noticed before they uploaded it," and I'm calling attention to the point that it is quite likely NO ONE LOOKS AT IT FIRST; that is why it is called "AUTOMATED". If you don't want real-time, preliminary, or subject-to-revision data, be prepared for:
1. A lot more work/man hours/cost
or
2. Much less frequent updates.
I don’t agree with the suggestion made earlier that Anthony should have emailed NSIDC and waited for a response.
The situation then would be this : a possible error has been detected, the group (NSIDC) responsible for the data has been notified, but no-one else knows that this is the situation until NSIDC replies to Anthony or posts new information.
Anthony clearly needs to email NSIDC and blog the situation at the same time. Also, NSIDC should then immediately post the fact that there is a data query, not wait until they have completed investigations/fixes.
These actions are not expensive, and keep everyone as fully informed as possible.
I am perfectly happy to accept that NSIDC's data is provisional when first posted, and I am perfectly happy to understand that sometimes an error may get posted. In order to remain perfectly happy, however, I need to be confident that if a possible error is detected, no matter by whom, all parties will deal with it quickly and openly.
—–
I do agree that there has been more than a hint of paranoia and NSIDC-bashing here, which is regrettable.
Message to everyone here : paranoia is unjustifiable if there’s no-one out to get you.
Hmmm.
Maybe Anthony has something to do with this, maybe not. But Cryosphere Today posted a message on its site saying that something is wrong with the equipment. A sensor could be failing. Nice to see someone had their coffee this morning and was alert. Did Anthony cause that? Maybe.
jeez (17:35:35) :
“Glenn, what part of near real time, preliminary, or subject to revision, don’t you understand?
I agree that it's OK to blog about this error, and at the same time I feel for Dr. Meier, who has been nothing but forthcoming. But everyone keeps saying "someone should have noticed before they uploaded it," and I'm calling attention to the point that it is quite likely NO ONE LOOKS AT IT FIRST; that is why it is called "AUTOMATED". If you don't want real-time, preliminary, or subject-to-revision data, be prepared for:
1. A lot more work/man hours/cost
or
2. Much less frequent updates.”
Jeez, what part of “You need to remember that this is near real-time data and there can be data dropouts and bad data due to satellite issues. While the processing is automatic, the QC is partly manual” do you not understand? That was a quote from Walt Meier in the other thread. Just because an overall process is labelled “automatic” doesn’t mean everything is completely automatic from start to finish.
What I don't want is unreliable data. All data is subject to revision and no data is "perfect," but that doesn't mean all data should be regarded as unreliable.
If no one looks at it, they should, as I have already explained. And no, it wouldn’t take much time at all, or delay updates. Likely less time would be involved than the time it took for you to respond, and that could have been done most anywhere in the world on a laptop.
Regardless of the obvious need for some regular oversight, I'm wondering why there isn't some method of regular monitoring to determine the functional status of the sensor, and how it is that they determined this error is the result of a faulty sensor now. We can read that it "worsened until it became noticeable in the sea ice product." How was it determined that something was "noticeably" wrong? Someone manually looking at the graph and saying, "Oh, that much ice couldn't disappear in three days"? If that is the case, why not have regular oversight? Got any real argument against it, other than the lame one that it would cost too much and we wouldn't get NRT data?
A bit of insight into how NSIDC regards their NRT data:
http://nsidc.org/data/news.html
“NSIDC is working to correct the issue and provide reliable NRT sea ice data. In the meantime, F15 data since 1 January 2009 should not be used.”
http://nsidc.org/data/g02135.html
“The product is intended to help researchers illustrate sea ice conditions, and to inform users with general questions about recent ice concentration and extent.”
http://nsidc.org/data/nsidc-0080.html
"The following example shows how to cite the use of this data set in a publication."
http://nsidc.org/data/nsidc-0081.html
“Use of near real-time data, particularly to extend older data, should be clearly stated in all publications, presentations, or other applications.”
"The daily and monthly images that we show in Arctic Sea Ice News & Analysis are near-real-time data. Near-real-time data do not receive the rigorous quality control that final sea ice products enjoy, but it allows us to monitor ice conditions as they develop."
http://nsidc.org/arcticseaicenews/faq.html#quality_control
“Several possible sources of error can affect near-real-time images. Areas near land may show some ice coverage where there isn’t any because a land filter has not yet been applied and the sensor has a coarse resolution. Sometimes, the data we receive have geolocation errors, which could affect where ice appears. We correct these problems in the final sea ice products, which replace the near-real-time data in about six months to a year.”
http://nsidc.org/arcticseaicenews/faq.html#quality_control
“Despite its areas of inaccuracy, near-real-time data are still useful for assessing changes in sea ice coverage, particularly when averaged over an entire month. The monthly average image is more accurate than the daily images because weather anomalies and other errors are less likely to affect it. Because of the limitations of near-real-time data, they should be used with caution when seeking to extend a sea ice time series, and should not be used for operational purposes such as navigation.”
2008 data has been archived. Are we to wait till the end of 2009 for this automatic process to be QCed before it ceases to be "preliminary" data that no one at NSIDC looks at?
“February 18, 2009
Satellite sensor errors cause data outage”
This is the headline at NSIDC along with a somewhat detailed explanation for the sensor drift affecting data reported from early January. The error has caused an under-reporting of sea ice extent of 500,000 sq km according to NSIDC.
The outstanding question with regard to "automated" vs. manual operations is: can the data be readily filtered for large deltas? Typically, software monitors for significant changes and flags them for further study. In this case, as expressed by the public, people are interested in accurate data. Better to suspend publication until large anomalies are sorted out, as is the case right now at NSIDC. Data suspended.
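As a sketch of the kind of automated delta filter being suggested here, in Python; the 0.3 million km² per day threshold is purely illustrative, not anything NSIDC actually uses:

```python
def flag_large_deltas(extent, max_daily_change=0.3):
    """Return indices where daily sea ice extent (million km^2) jumps by
    more than `max_daily_change` from the previous day. Flagged points
    would be held back for manual review instead of being auto-published."""
    return [i for i in range(1, len(extent))
            if abs(extent[i] - extent[i - 1]) > max_daily_change]

# The weekend in question: roughly a million-km^2 drop in one step,
# which a check like this would have caught before upload.
series = [14.1, 14.15, 14.2, 13.1]
print(flag_large_deltas(series))  # [3]
```

A check this cheap could run in the same automated pipeline that generates the graphic, without delaying near-real-time updates.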
Sincere thanks to the NSIDC staff for a fast response to queries about causes and prevention of future errors. And to Anthony for his and others’ eagle eye at the outset.
A faulty sensor not discovered for about a month. It just gnaws at me that NSIDC only discovered the error after Anthony's post, when it "became noticeable in the sea ice product," and it is now claimed to be traced to a malfunction starting in early January. It seems there would be a way to keep an eye on this, since they admit awareness of the possibility for this to happen. Would "calibration" be a test of sensor function? If so, certainly that would be known on the ground. In that event it could and should be relayed to users of the data, such as NSIDC.
http://nsidc.org/data/docs/daac/ssmi_instrument.gd.html
“All data, commands, timing and telemetry signals, and power pass through the BAPTA on slip ring connectors to the rotating assembly.”
“The mirror reflects cold sky radiation into the feed, thus serving, along with the hot reference absorber, as calibration references for the SSM/I”
“Frequency of Calibration
Once every 1.9 seconds.”
So as I speculated here
John H (11:29:19) :
It may be that the erratic plotting of ice since mid-January has been a result of faulty data after all.
Now, when the corrected plotting resembles the other sources, making 08-09 ice much closer to the 79-06 average, will the mainstream media report the return to normal sea ice?
Mary Hinge: “…they prefer the ‘tabloid’ route of trying to discredit the data and pick holes in any little data issue.”
A tabloid tactic indeed, essentially a gleeful ‘gotcha’ followed quickly by an attempt to assuage conscience through faux concern about the integrity of the data and outrage about incompetent bureaucrats etc.
“It is a self defeating approach as without the hard science behind it, it will eventually go the way of other conspiracy theories such as Roswell, grassy knolls etc.”
The conspiracy (or hoax and fraud) mind-set creates a situation where errors or even artifacts of measurement are readily viewed as evidence of sinister motive, a perspective that is clearly the result of an ideological position.
NSIDC has posted about the real problem – what I saw and blogged about was the result of a catastrophic sensor failure on the satellite platform. See the main WUWT page for details.
Leave your comments on “tabloid” and “conspiracy theory” behind if commenting on that thread because they will be snipped.
Yes.
You might care to note that NSIDC wants to continue using its existing measurement methods in order to provide a consistent record. That is to be applauded; changes in the methods used can lead to difficulties.
You will also note that NSIDC suggests that this current technical failure to observe an increase in the Arctic ice in no way invalidates its view that the current Arctic Ice retreat is due to a warming of the earth as observed by satellite data from 1979.
As I pointed out in a previous post, we have excellent records of the extent of the southern ice in the Atlantic going back to the American war, when HM packets had to sail out of Halifax by the northern route, so that the Hydrographer's Office received regular reports throughout the year. That lasted until the 1840s when, with the rise of steam navigation, the service ended; it was replaced by the Royal Navy's Arctic patrol, which continued with the same measurement techniques until it ceased at the end of the nineteenth century.
From then on the record is more difficult to decipher; far more observations were made, but by different navies, private ships, and aircraft, which did not use the same measurement techniques.
Nevertheless we can be sure from this record that, excluding the current event, the Arctic ice has retreated abruptly four times in the last two hundred and fifty years and that these periods of retreat last for about ten to fifteen years before the ice advances again.
We do not know why or how. Some people might try to use statistical analysis to show that this is a cycle. Perhaps it is, perhaps not. Others may speculate on winds, ocean currents, or suchlike, including no doubt CO2. Nothing wrong with such speculation; it might even lead to a better understanding of what is going on.
I welcome such explanations but beg leave to doubt their validity.
Oh and when anything interesting happens in the Arctic do please wake me up and tell me about it: there really might be a pot of gold at the end of the rainbow by the North Pole after all.
Kindest Regards
One thing to consider: the "Arctic Sea Ice News" may be the most visible part of the work of Dr. Meier and his colleagues, but it is decidedly not the most important part. They are not paid to produce daily nice-looking images that stand up to the highest quality standards. NSIDC allows you to look over their shoulder at preliminary data, courtesy of voluntary efforts; honor it for what it is. Their real work lies elsewhere.
If you are interested in long-term trends, it is a good idea not to waste your time looking at the newest image every day and thinking about it. Wait a month, look at the data in one go, and you are more likely to observe the important patterns.
@ Glenn (20:03:53)
Think of it as a slow puncture in a tire. You won’t notice it right away, but at some later time you will, as symptoms become recognizable.
Thank you for linking to the description of the sensor. The drift in the sensor could be e.g. a drift in temperature of the “hot reference”, which is used for calibration. The experts will have to figure it out, as they have more information on how exactly everything is built and other data. Should I happen to be correct with my example, it would be just pure luck on my part.
I was pleased, in reading this article, to see no noticeable attempt to plant doubt in readers' minds; Anthony Watts was raising a sincerely held concern over accuracy in the presentation of information. I also commend NSIDC for doing the daily work of gathering meaningful data to improve our modeling and projections.
Mr. Watts raised a slightly subtle point about refining the presentation of data so as to raise the level of accuracy of all kinds of discussion, including in the news media. Trained in science myself, I do feel that the chart shown here should include a statement (or a link to a full statement) indicating that these were near-real-time data and that occasional equipment or data “noise” could lead to outliers (aberrant data points) in this publicly-visible data. In fact, a short article about why there is such noise would also be useful in raising public understanding.
As I see it, anticipating and documenting surprising details that may not mislead other scientists but could confuse reporters or the non-professional public is an active way to improve one's services, and is preferable to merely defending what is clearly a good data-gathering service in this important matter of climate change. The blogger, Watts, for his part showed restraint and respect while suggesting that NSIDC should pay more attention to the public perception of their data, and should be as clear as reasonably possible.
I looked at the NSIDC page, and they do currently have a generous amount of information about sensor drift and the nature of real-time data. I am not sure whether or how much of that was posted prior to the Watts blog entry. It does support the point of keeping discussion civil – it seems that Walt Meier and Anthony Watts have been having a cordial relationship. I would not call the NSIDC graph a true error, as they pointed out that this was raw data and the data goes through additional checks before being used in articles or being archived.
In other words, this dialogue should not be sensationalized, and any animosity between the principals should be minimal. This civility is one aspect of discussion that seems to have been lost to a large extent, for reasons of commercial attention; that loss is truly harmful to informed, societally-beneficial discussion of important issues.