Icy Arctic Variations in Variability

Guest Post by Willis Eschenbach

A while back, I noticed an oddity about the Hadley Centre’s HadISST sea ice dataset for the Arctic. There’s a big change in variation from the pre- to the post-satellite era. Satellite measurements of ice areas began in 1979. Here is the full HadISST record, with the monthly variations removed.

Figure 1. Anomaly in the monthly sea ice coverage as reported by the HadISST, GISST, and Reynolds datasets. Monthly average variations from the overlap period (1981-1994) have been subtracted from each dataset. All data are from KNMI (see Monthly Observations).
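For those who want to follow along at home, "removing the monthly variations" just means subtracting each calendar month's average over the 1981-1994 overlap period. Here's a minimal sketch; the series itself is synthetic stand-in data, not the actual KNMI download:

```python
import numpy as np
import pandas as pd

# Synthetic monthly ice-area series standing in for a KNMI download
rng = np.random.default_rng(0)
idx = pd.date_range("1870-01", "2011-12", freq="MS")
ice = pd.Series(10 + 4 * np.cos(2 * np.pi * (idx.month - 3) / 12)
                + 0.1 * rng.standard_normal(len(idx)), index=idx)

# Climatology from the 1981-1994 overlap period: one mean per calendar month
overlap = ice["1981":"1994"]
clim = overlap.groupby(overlap.index.month).mean()

# Anomaly: subtract the climatological mean for each month of the year
anom = ice - clim.reindex(ice.index.month).to_numpy()
```

By construction the anomalies average to zero over the baseline period, which is what lets datasets with different absolute levels be plotted on one set of axes.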

There are a few points of note. First, the pre-1953 data is pretty useless; much of it obviously does not change from year to year. Second, although the variation in the GISST dataset doesn’t change in 1979, the variation in the HadISST dataset changes pretty radically at that point. Third, there is a large difference between the variability of the Reynolds and GISST datasets during the period of their overlap.

I had filed this under unexplained curiosities and forgotten about it … until the recent publication of a paper called Observations reveal external driver for Arctic sea-ice retreat, by Notz and Marotzke, hereinafter N&M2012.

Why did their paper bring this issue to the fore?

Well, the problem is that the observations they use to establish their case are the difference in variability of the HadISST during the period 1953-1979, compared to the HadISST variations since that time. They look at the early variations, and they use them as “a good estimate of internal variability”. I have problems with this assumption in general due to the short length of time (25 years), which is far too little data to establish “internal variability” even if the data were good … but it’s not good, it has problems.

To their credit, the authors recognized the problems in N&M2012, saying:

Second, from 1979 onwards the HadISST data set is primarily based on satellite observations. We find across the 1978/1979 boundary an unusually large increase in sea-ice extent in March and an unusually large decrease in sea-ice extent in September (Figures 1b and 1d). This indicates a possible inconsistency in the data set across this boundary.

Ya think? I love these guys, “possible inconsistency”. The use of this kind of weasel word, like “may” and “might” and “could” and “possible”, is Cain’s mark on the post-normal scientist. Let me remove the GISST and Reynolds datasets and plot just the modern period that they use, to see if you can spot their “possible inconsistency” between the 1953-1979 and post-1979 periods…

Figure 2. As in Figure 1, for HadISST only.

The inconsistency is clearly visible, with the variability of the pre- and post-1979 periods being very different.

As a result, what they are doing is comparing apples and oranges. They are assuming the 1953-1979 record is the “natural variability”, and then they are comparing that to the variability of the post-1979 period … I’m sorry, but you just can’t do that. You can’t compare one dataset with another when they are based on two totally different types of measurements, satellite and ground, especially when there is an obvious inconsistency between the two.
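To see how easily a change of instrument alone can masquerade as a change in “internal variability”, here’s a toy simulation. The numbers are entirely made up by me (this is not the HadISST data): one underlying process with constant variability, observed first through a smoothed chart-based record and then through a noisier satellite-style record:

```python
import numpy as np

rng = np.random.default_rng(42)

# One underlying "climate" signal with constant internal variability ...
n_pre, n_post = 26 * 12, 32 * 12
signal = rng.normal(0.0, 0.5, n_pre + n_post)

# ... observed through two different instruments: charts and ship reports
# before 1979 (heavily smoothed, so variations are damped), and satellites
# afterwards (extra observation noise added).
pre_obs = signal[:n_pre] * 0.4
post_obs = signal[n_pre:] + rng.normal(0.0, 0.3, n_post)

ratio = post_obs.std(ddof=1) / pre_obs.std(ddof=1)
print(f"apparent variability ratio: {ratio:.2f}")  # well above 1
```

Nothing about the underlying process changed at the splice point; the entire jump in apparent variability comes from the change in how it was measured.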

In addition, since the GISST dataset doesn’t contain the large change in variability seen in the HadISST dataset, it is at least a reasonable working assumption that there is some structural error in the HadISST dataset … but the authors simply ignore that and move forward.

Finally, we have a problematic underlying assumption that involves something called “stationarity”. The stationarity assumption says that the various statistical measures (average, standard deviation, variance) are “stationary”, meaning that they don’t change over time.
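A quick way to eyeball that assumption is a rolling standard deviation, which should stay roughly flat if the variance really is stationary. A sketch with made-up data whose variance doubles halfway through:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic anomaly series whose variance doubles halfway through
x = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 2.0, 300)])

# 60-month rolling standard deviation: flat only if the variance is stationary
win = 60
roll_sd = np.array([x[i:i + win].std(ddof=1) for i in range(len(x) - win + 1)])
print(roll_sd[0], roll_sd[-1])  # the change in spread shows up clearly
```

Run that on a real anomaly series and any step change in variability, from the climate or from the instruments, jumps right out.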

They nod their heads to the stationarity problem, saying (emphasis mine):

For the long-term memory process, we estimate the Hurst coefficient H of the pre-satellite time series using detrended fluctuation analysis (DFA) [Peng et al., 1994]. Only a rough estimate of 0.8 < H < 0.9 is possible both because of the short length of the time series and because DFA shows non-stationarity even after removal of the seasonal cycle.
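For the curious, first-order DFA is simple enough to sketch. This is my own simplified toy implementation, not the authors’ code: integrate the mean-removed series, detrend it in boxes of increasing size, and take the scaling exponent from the log-log slope of the fluctuations. For white noise the exponent should come out near 0.5:

```python
import numpy as np

def dfa_exponent(x, box_sizes):
    """First-order detrended fluctuation analysis (after Peng et al., 1994)."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    fluct = []
    for n in box_sizes:
        n_boxes = len(y) // n
        f2 = 0.0
        for b in range(n_boxes):
            seg = y[b * n:(b + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)   # local linear trend in this box
            f2 += np.mean((seg - np.polyval(coef, t)) ** 2)
        fluct.append(np.sqrt(f2 / n_boxes))
    # Scaling exponent: slope of log F(n) against log n
    return np.polyfit(np.log(box_sizes), np.log(fluct), 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
alpha = dfa_exponent(white, [16, 32, 64, 128, 256])
print(f"DFA exponent: {alpha:.2f}")  # near 0.5 for white noise
```

Their 0.8 < H < 0.9, by contrast, implies strong long-term persistence, and note their own caveat: even this crude exponent is poorly constrained on a record that short.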

Unfortunately, they don’t follow the problem of non-stationarity to its logical conclusion. Look, for example, at the variability in the satellite record in the period 1990-2000 versus the period 2000-2005. They are quite different. In their analysis, they claim that a difference in variability pre- to post-1979 establishes that human actions are the “external driver” … but they don’t deal with the differences pre- and post-2000, or with the fact that their own analysis shows that even the variability of the pre-1979 data is not stationary.

Finally, look at the large change in variability in the most recent part of the record. The authors don’t mention that … but the HadISST folks do.

03/DECEMBER/2010. The SSM/I satellite that was used to provide the data for the sea ice analysis in HadISST suffered a significant degradation in performance through January and February 2009. The problem affected HadISST fields from January 2009 and probably causes an underestimate of ice extent and concentration. It also affected sea surface temperatures in sea ice areas because the SSTs are estimated from the sea ice concentration (see Rayner et al. 2003). As of 3rd December 2010 we have reprocessed the data from January 2009 to the present using a different sea ice data source. This is an improvement on the previous situation, but users should still note that the switch of data source at the start of 2009 might introduce a discontinuity into the record. The reprocessed files are available from the main data page. The older version of the data set is archived here.

08/MARCH/2011. The switch of satellite source data at the start of 2009 introduced a discontinuity in the fields of sea ice in both the Arctic and Antarctic.

Curious … the degradation in the recent satellite data “probably causes an underestimate of ice extent and concentration,” and yet it is precisely that low recent ice concentration that they claim “reveals an external driver” …

In any case, when I put all of those problems together, the changes in variability in 1979, in 2000, and in 2009, plus the demonstrated non-stationarity pre-1979, plus the indirect evidence from the GISST and Reynolds datasets, plus the problems with the satellites affecting the critical recent period, the period they claim is statistically significant in their analysis … well, given all that I’d say that the N&M2012 method (comparing variability pre- and post-1979) is totally inappropriate given the available data. There are far too many changes and differences in variability, both internal to and between the datasets, to claim that the 1979 change in variability means anything at all … much less that it reveals an “external driver” for the changes in Arctic sea ice.

w.

Policy Guy

Obviously another example of groupthink analysis by an unsupervised graduate student. Forward!

William McClenney

“Ya think? I love these guys, “possible inconsistency”. The use of this kind of weasel word, like “may” and “might” and “could” and “possible”, is Cain’s mark on the post-normal scientist.”
Struth mate!

Dr. Deanster

Let me tell you an issue I’d like to see someone look into … I find it funny that the variability of the arctic ice between 1997 and 2007 seems awfully constant and compressed when compared to 1979-1997 and 2007 to present. Though it does make a nice tight trend downwards .. the variability just seems very uncharacteristic when compared to the rest of the record.
Kinda makes me think that the data has been cooked. Now .. that couldn’t be .. I mean the CAGW crowd would never cook the data … right??

edcaryl

There could be a man-made external forcing. Check NoTricksZone tomorrow.
http://notrickszone.com/

tango

I hope someone tells the snow crab fishing boats, which are iced in and cannot go after them, that the ice has melted

cartoonasaur

But if they don’t torture the data, then how is that little rascal Mankind to confess it’s sins? Poor widdle earthy worthy got all broke-d cuz the bad mans got all pwoffity woffitty, uh huh…

SteveSadlov

You’ve all heard of the 350 org, I hereby announce the 500 org.

“Curious … the degradation in the recent satellite data ‘probably causes an underestimate of ice extent and concentration,’ and yet it is precisely that low recent ice concentration that they claim ‘reveals an external driver’ …”
In the vernacular and as detailed by the musical group Tag Team, “Whomp, there it is.”
Admittedly, though, N&M2012 asserts that their purported negative, linear trend is across the entire sample time period – on the presumption the pre-satellite and satellite data reported similar magnitudes attributed to internal variability. However, Mr. Eschenbach presented a strong argument that the collection methodologies associated with the two data sets render subsequent comparisons largely suspect.
Which narrative would you regard as more accurate – (a) the paragraph you read without your glasses in which you confidently reported seven out of every ten words (due to the blur factor) or (b) the paragraph you read with your glasses in which you confidently reported ten out of every ten words (due to the clarity factor)? Now, add a gouge to the glasses (i.e., a recent and significant degradation in performance) after the dog knocked you over with one last sentence to read and even this accuracy decreases, as well.
Common sense, albeit subjective, suggests that any comparison between the two data sets must default to the use of a broad brush rather than a narrow, more precise one (Monte-Carlo simulations notwithstanding due to the lies, damned lies, and statistics factor).

jorgekafkazar

“The use of this kind of weasel word, like “may” and “might” and “could” and “possible”, is Cain’s mark on the post-normal scientist.”
Not really. Having written my share of scientific reports, I can say that those words are part of the normal lexicon. It is the absence of those words, not their presence, that identifies pseudoscience. The disconnect lies in skipping from a scientific paper directly to policy, conveniently omitting the qualifiers to achieve the desired result.
But I agree; the paper is severely flawed, as you state. Notz & Marotzke note a discontinuity, assume that it represents human effects, base their analytical statistics on short-term data before the discontinuity, then cite the short-term post-discontinuity data as proof of human effects. This is nothing more than a fanciful way of begging the question. Worse, they attribute the presence of a sudden discontinuity to human CO2 emissions, though the latter are gradual and the former is abrupt. This is post-doc, post hoc thinking, proof that they’ve assumed an understanding of the Arctic ice system that they don’t really possess.
They also state “Arctic sea ice is currently retreating rapidly…” though as of this date the NORSEX Arctic Ice area is within a hairbreadth of the 1979-2006 average. http://arctic-roos.org/observations/satellite-data/sea-ice/observation_images/ssmi1_ice_area.png

Philip Bradley

Too much grant money, chasing too few opportunities for real climate science.
The 2007-onwards increase in variability does seem to be a real phenomenon, resulting from increased ice ‘melt’ and increased ice formation on an annual basis.

Willis Eschenbach

jorgekafkazar says:
May 3, 2012 at 7:50 pm

“The use of this kind of weasel word, like “may” and “might” and “could” and “possible”, is Cain’s mark on the post-normal scientist.”

Not really. Having written my share of scientific reports, I can say that those words are part of the normal lexicon. It is the absence of those words, not their presence, that identifies pseudoscience. The disconnect lies in skipping from a scientific paper directly to policy, conveniently omitting the qualifiers to achieve the desired result.

I disagree. A scientific statement would be something like “the value of the variable is 3.6 ± 1.2 units (one standard deviation)”.
The corresponding pseudo-scientific statement is “the value of the variable may be greater than 6”.
Both statements could be true. The value could in fact be 6.5. But only one of the statements is science.
I see this all the time, in forms like Hansen’s infamous claim that the increase in CO2 could lead to a 20 metre increase in sea level by the end of the century. Well, yes, it could, and I could win the lottery tomorrow, but that’s not science.
Science is generally distinguished, in fact, by the use of error estimates rather than weasel words like “may occur” and “could lead to”.
However, Jorge, I do agree with you that “those words are part of the normal lexicon”.
That’s the problem …
w.

The human drivers in this case are the “scientists” playing with numbers. Crunch ’em this way, then crunch ’em that, until the only “logical” explanation is fumes from their Cadillacs.
They call me Baby Driver
And once upon a pair of wheels
Hit the road and I'm gone ah
What's my number
I wonder how your engine feels.

Willis is quite correct here in both his analysis and in the lexicon of science speak. My take on it is that it is simply another run at making the data fit the theory.

Manfred

Hi Willis,
the pre-1953 data is not pretty useless.
The artificial step change in the early 50s stands out so obviously, that its use is clear: cheat.
Another artificial step change may be around 1973, perhaps inserted with an instrument change, but as always in the direction to support alarm.

peter

This is brilliant — it dismisses the statistical analysis used in a peer-reviewed paper in a prestigious journal without actually doing any statistics! Only a man without the intellectual blinders of an advanced degree in science, without experience in scientific research, or without any kind of background in mathematics more advanced than algebra could so effortlessly dismiss the scientific work of a couple of egghead PhD research geeks. We should all reread D&K (2000) and ponder admiringly on Mr. Eschenbach’s achievements in science and statistics.

Len

Willis:
Nice work again. And your conclusion that the changes in the time series are the results of changes in measurement techniques and changes in instrument performance, not CO2 forcing or any other global warming mechanism, appears logical and supported by the data. The authors’ rush to man-caused change is simply not evident in the data. Again, nice work, and thank you for all your hard work on our behalf.

Philip Bradley

This is brilliant — it dismisses the statistical analysis used in a peer-reviewed paper in a prestigious journal without actually doing any statistics!
People don’t realize how much poor or bad science gets published. A prominent scientist I know very well, who evaluates published studies for the purpose of formulating public policy tells me that most of the published studies in his field are worthless. Specifically, the conclusions they draw aren’t warranted by the data they present.
Which, in a nutshell, is Willis’s critique above.

Willis Eschenbach

peter says:
May 3, 2012 at 9:25 pm

This is brilliant — it dismisses the statistical analysis used in a peer-reviewed paper in a prestigious journal without actually doing any statistics! Only a man without the intellectual blinders of an advanced degree in science, without experience in scientific research, or without any kind of background in mathematics more advanced than algebra could so effortlessly dismiss the scientific work of a couple of egghead PhD research geeks. We should all reread D&K (2000) and ponder admiringly on Mr. Eschenbach’s achievements in science and statistics.

Ah, yes, the argument from authority. Please notice, folks, that it appears he cannot find any errors in my logic or my facts or my claims. As a result, he is reduced to saying but, but, but the authors have PhDs! It’s peer reviewed! The journal is prestigious! All overlaid with a thick coat of satire.
My achievements in science and statistics are a matter of public record, both in the peer reviewed journals and here on this blog. I have four peer-reviewed publications in the scientific journals, including a peer-reviewed “Communications Arising” in Nature magazine. They had no problem with my qualifications or achievements. Regarding my math-fu, it’s strong, and there is always more to learn. I got credit for a year of college calculus when I was still in high school, and never looked back. I wrote my first computer program in 1963, and have continued programming to this day. And yes, I do make stupid math errors sometimes, and when I do, it’s very public. The Argus-eyed intarwebs assure that I get called on it and admit it and continue to learn.
But that’s not the point, peter. It’s not about me … and it’s not about the authors or the journals either. It’s about the science, the ideas. Nothing else matters. Doesn’t matter who said it, doesn’t matter if they wrote it on the bathroom wall. All that matters is, is it true? Is it consistent? Does it hold up under attack?
Notice, folks, that peter cannot find any fault in the actual science and statistical concepts and arguments that I have presented here.
peter, my statistics are reasonably good, but the underlying problem with their analysis is not statistical. Or rather, the problem is statistical but it lies in the area of the logic of statistics, rather than what you call the doing of statistics. That’s why I’m not “doing any statistics”, even though I can do them. That’s not where the problem lies.
The problem is that the authors are comparing two separate, non-stationary datasets taken by different groups of people at different times using different measuring tools. Here’s the problem:
You can’t draw any conclusions from comparing the variations of two such datasets.
It’s a fool’s errand. There is no reason to assume that the mean, variance, or distribution of the two are related at all. They may be related, but if they are not, that does not “reveal an external driver”; that’s nonsense.
Finally, appeals to authority such as you have made above, even if they are as clever and sarcastic as yours, don’t get any traction here. At WUWT we do science. If you have objections to my science, to my logic, to my data, to my mathematics, to my statistics, then QUOTE MY WORDS THAT YOU OBJECT TO and explain my error, which I know to be manifold.
Anything else, and you’re just spinning your wheels …
w.

P. Solar

>>
This is brilliant — it dismisses the statistical analysis used in a peer-reviewed paper in a prestigious journal without actually doing any statistics!
>>
Brilliant is perhaps an over-statement but it is a very good article by Willis.
He does not dismiss the statistical analysis itself; he is quite clearly showing why any such analysis is invalid before it is even calculated, because the data are incomparable AND are explicitly said to be so by the authors who publish the data.
Not only is this paper flawed science, it is knowingly flawed and hence dishonest science.
If GRL were as “prestigious” as you suggest, they would not publish such obviously false rubbish.

Arctic ice extent depends on North Atlantic SST
http://i51.tinypic.com/oqlpxi.jpg
so the pre-satellite steady ice extent is nonsense. There is a study which observes Barents sea ice following the AMO in the same wave pattern.
One hint – NW passage was open in 1942-44 and then only in 2007. Both on the top of the warm SST peaks.

Mike Edwards

peter says:
May 3, 2012 at 9:25 pm

This is brilliant — it dismisses the statistical analysis used in a peer-reviewed paper in a prestigious journal without actually doing any statistics!

On the contrary, Peter. What this case shows is the worthlessness of peer review – how was it that none of the peer reviewers picked up these relatively straightforward problems?
You’d expect the peer reviewers to be knowledgeable about the subject matter of the paper, right? In which case, you’d also expect them to be aware of the available datasets, right? If so, shouldn’t they have been asking the quite basic questions that Willis has asked here? And at least requiring the authors to address those questions properly?
This kind of paper simply brings the whole system into disrepute.
This makes the case for the abandonment of peer review for something more open, where anyone who is interested can make open comments on a draft paper for all to see and where the authors’ response is also done openly. The true spirit of open enquiry.

Andrew

RE
Willis Eschenbach says:
May 3, 2012 at 11:00 pm
peter says:
May 3, 2012 at 9:25 pm
———-
Willis pins the issue brilliantly as usual. The big problem for the peter-warmists of the world is that they have so few real-world data they can hope to conjure into a form that even remotely looks like it supports the failed CAGW hypothesis that they feel absolutely compelled to defend this type of Post Normal Science (PNS) to the last man as it were – because if they don’t they know the entire edifice of “evidence” will disappear in a flash of light and puff of smoke.
The Shakun et. al. paper delusion falls into the ‘same sh*t different shovel’ category in its shameless cherry picking of proxies, as does the entire contorted, twisted and battered land-based temp record, the entire catalogue of IPCC spin, the various attempts by Santer and friends to cover for the missing ‘hotspot’ using wind shear measurements in preference to radiosonde thermometer readings, the Mann-schtick trick flick etc.etc.
peter, are you not in any way embarrassed by your ‘play the man’ attacks? Do you really have so little shame and insight? And you cultists wonder why people generally have become so suspicious of your entire story-line.

First of all, the referenced paper is completely wrong about the “beginning” of the satellite era. There is an almost continuous record of Arctic and Antarctic ice from satellites, stretching back to the early 1960s.
My team has personally helped the NSIDC with Nimbus I, II, and III HRIR data. Beyond that, it has recently been discovered that there is a film copy of the Nimbus AVCS (visible light) images.
We need to put this 1979 boundary crap to bed.

The Infidel

Makes me wonder if we have almost exhausted the limits of truly extraordinary scientific discoveries. So they are having to make stuff up to keep winning prizes and getting money off the gullible. Most branches of science seem to be at the same place: few to no new discoveries, just the occasional improvement, occasional backward steps.

Old England

The part which intrigues me is the statement from the ‘HadISST folks’ that ‘SSTs are estimated from the sea ice concentration’.
By concentration do they mean thickness or extent or a combination of both? Does this take account of wind-driven ice piled thicker or driven more closely together, changing the extent? If the temperature of the sea is 0.5 or 1 deg C colder or warmer than the overlying ice, that wouldn’t seem to figure in their calculations, so how do they correct for that?
Is it simply more of the guesstimation that seems to be a theme of the Post Normal Science philosophy that too many climate scientists seem to be adherents of?

Rob Dekker

Willis said Well, the problem is that the observations they use to establish their case are the difference in variability of the HadISST during period 1953-1979, compared to the HadISST variations since that time.
What IS the “difference in variability of the HadISST during period 1953-1979, compared to the HadISST variations since that time”, as reported by the paper, Willis ? Hint : answer is on page 2 of the paper.
And while we are at it, where in the paper did you see that this difference is relevant “to establish their case” ?
Peter, This is brilliant – it dismisses the statistical analysis used in a peer-reviewed paper in a prestigious journal without actually doing any statistics!
It’s not just that Willis does not use any statistics at all. The bigger problem is that Willis seems to be completely clueless about the data analysis used as a basis for the conclusions in this paper. It’s almost as if he deliberately avoids the statistics and data analysis, and instead pretends to reveal issues in the various climate records, even though these have already been addressed thoroughly in the paper.

Rob Dekker

Dennis Ray Wingo, would you care to share some of that pre-1979 data with us ?

Kasuha

1979-2008 is used as anomaly mean there – which means anomalies are artificially minimized for this interval. Anything outside that interval is going to have increasing anomalies unless the system is stationary.
The pre-satellite era is characterized by far lower certainty of results. Someone here recently stated that it’s not science if there are no error bars. I’m missing them in these graphs.
And regarding the recent change, I wouldn’t blame the satellite too much. The change did not start in 2009; it started in mid-2007, and by 2009 it had already been going on for two years. It would certainly be interesting to find a satellite problem causing it, as it was rather abrupt, but I don’t see any mentioned here. Quite likely the 2009 satellite measurement error is an order of magnitude smaller than the real change which occurred.

mfo

Very neat. The paper should have been called Observations reveal discontinuities for Arctic sea-ice.

BioBob

What a surprise! More garbage in yields more garbage out. Who would have thunk it?
The problem with any kind of measurement of reality is ALWAYS estimating how close to reality one is approaching. All experimental and field scientists worth their salt understand that statistically validated data is required in order to yield estimates that at least approach reality.
Only climate scientists and charlatans seem to be able to blow smoke up everyone else’s butts about the probative statistical validity of a sample size of ONE.
A sample size of ONE. That’s the takeaway from all this….. One thermometer (forget that the types change over time), one satellite (forget about the fact that serial satellite comparisons show their internal / external inconsistencies), no replicates, no random sampling, no field validation of remote sensing data nor proper calculation of stats, nor estimation of the size of instrument error, etc.
Just pitiful.

It is important not to confuse visible-light satellite imagery with passive microwave retrievals. Passive microwave retrievals started in 1979, but visual images from satellite started earlier.

Willis Eschenbach

Rob Dekker says:
May 4, 2012 at 1:37 am (Edit)

Willis said

Well, the problem is that the observations they use to establish their case are the difference in variability of the HadISST during period 1953-1979, compared to the HadISST variations since that time.

What IS the “difference in variability of the HadISST during period 1953-1979, compared to the HadISST variations since that time”, as reported by the paper, Willis ? Hint : answer is on page 2 of the paper.
And while we are at it, where in the paper did you see that this difference is relevant “to establish their case” ?

Hint: the difference in variability is meaningless.
As to where in the paper it says the difference in variability is relevant, it is the point of the entire paper. See the introduction:

We find that the available observations are sufficient to virtually exclude internal variability and self-acceleration as an explanation for the observed long-term trend, clustering, and magnitude of recent sea-ice minima.

What do you think they are talking about if not the difference in variability?
Or see e.g. Section 3, called “Internal Variability”. What they are doing is showing that the variability of the later period exceeds the “internal variability” exhibited by the earlier pre-satellite period. You sure you read the paper? It’s all about variability, that’s what they are using to make their case.
Rob, let me try explaining the underlying problem again. It’s as though they estimated a sample of people’s weights by looking at the size of their pants, and then they estimated the weights of another sample of people by looking at the size of their shirts. Then they say that because one sample exceeds the “internal variance” of the other, it shows that there is an “external driver” making them different.
You still don’t seem to grasp the nettle—you can’t simply grab two datasets from different ways of measuring something, splice the two datasets together, and then claim that the difference between the two is meaningful.

Peter,

This is brilliant – it dismisses the statistical analysis used in a peer-reviewed paper in a prestigious journal without actually doing any statistics!

It’s not just that Willis does not use any statistics at all. The bigger problem is that Willis seems to be completely clueless about the data analysis used as a basis for the conclusions in this paper. It’s almost as if he deliberately avoids the statistics and data analysis, and instead pretends to reveal issues in the various climate records, even though these have already been addressed thoroughly in the paper.

No, the issues haven’t been “addressed thoroughly” in the paper. They have been mentioned in the paper. Where, for example, have they “addressed thoroughly” the issue of non-stationarity? Yes, they mention it, but that’s all they do.
And some issues are not even mentioned, like the satellite problems in the recent data.
You have the same problem that Peter had. You are happy to accuse me of a variety of sins, but you say nothing about what I might have done wrong.
Rob, I ask of you what I asked of peter. If you disagree with something I’ve said, stop prancing around asking Socratic questions and going hint hint. If you disagree, QUOTE MY WORDS and explain exactly why they are wrong.
Finally, you seem to think it’s a crime that I haven’t gone into the statistical nuts and bolts. There was no need to, since the underlying premise was flawed. But if you like statistical questions, how about this one.
They assume that, despite the fact that they find the pre-1979 records to be non-stationary, they can estimate “internal variability” of the system from that short 25-year record.
Given what we know about the general long-term variability of the climate, and the known existence of ~ 60-year cycles in the Arctic temperatures, and the fact that they themselves say the pre-1979 dataset is non-stationary … what are the error limits on their calculation of “internal variability”?
And given those problems and the shortness of the record, can the errors on their “internal variability” even be calculated?
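To make the point concrete, here’s a toy sketch of my own (arbitrary numbers, not the actual ice data): estimate the “internal variability” of a persistent process from short windows, and watch how much the answer depends on which window you happened to get.

```python
import numpy as np

rng = np.random.default_rng(0)

# Persistent AR(1) noise standing in for long-memory internal variability
n, phi = 200 * 12, 0.95
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Estimate "internal variability" from disjoint 26-year (312-month) windows
win = 26 * 12
sds = np.array([x[i:i + win].std(ddof=1) for i in range(0, n - win, win)])
print(sds.min(), sds.max())  # the estimates differ between windows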
I’d be interested in your answers to those questions … be sure to use lots of statistics, they seem to impress some people …
w.

Willis Eschenbach

Dennis Ray Wingo says:
May 4, 2012 at 12:34 am

First of all, the referenced paper is completely wrong about the “beginning” of the satellite era. There is an almost continuous record of Arctic and Antarctic ice from satellites,stretching back to the early 1960s’.
My team has personally helped the NSIDC with Nimbus I, II, and III HRIR data. Beyond that, it has recently been discovered that there is a film copy of the Nimbus AVCS (visible light) images.
We need to put this 1979 boundary crap to bed.

Thanks, Dennis. If you have citations to that earlier data it would be great.
However, when you say “the referenced paper is completely wrong” about the satellite era, you mistake their meaning. They are talking about “satellite era” in reference to the datasets that they used. The HadISST data is put together like this:

The data used for the Northern Hemisphere fields were as follows: (1) 1871–1900: a calendar-monthly climatology of adjusted mid-monthly Walsh data (see section A1.2) for 1901–1930. (2) 1901 to October 1978: mid-monthly adjusted Walsh data. However, fields for the period 1940–1952 were set to the calendar monthly 1940–1952 climatology, as the Walsh data set appears to be a sequence of two different climatologies during that period (Figure 1). (3) November 1978 – 1996: monthly median bias-adjusted GSFC data. Fields for the SSM/I data-void of December 1987 and January 1988 [Cavalieri et al., 1999] were filled by linear temporal interpolation of anomalies for the previous and following months, and adding the result to the 1978–1996 climatology of the bias-adjusted GSFC data. (4) 1997 onward: monthly median bias-adjusted NCEP data.

SOURCE: Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century (large PDF from Hadley Centre)
Note that it is non-satellite data up to November of 1978, and satellite data thereafter. That is why they can refer to the “satellite era” starting in 1979.
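As an aside, the gap-fill described in that quote (interpolate the anomalies across the data void, then add the climatology back) is simple enough to sketch. This is a toy illustration with invented numbers, not the Hadley Centre’s actual code:

```python
import numpy as np

def fill_void(series, clim, void_idx):
    """Fill a gap of one or more months by linearly interpolating
    anomalies (series minus calendar-month climatology) between the
    bracketing months, then adding the climatology back.
    Hypothetical helper for illustration only."""
    anoms = series - clim
    before, after = void_idx[0] - 1, void_idx[-1] + 1
    for k, i in enumerate(void_idx, start=1):
        frac = k / (len(void_idx) + 1)
        anoms[i] = anoms[before] * (1 - frac) + anoms[after] * frac
    return anoms + clim

# toy example: 6 months of ice extent (10^6 km^2), months 2-3 missing
clim = np.array([14.0, 15.0, 15.5, 15.0, 14.0, 12.5])
obs = np.array([14.2, 15.4, np.nan, np.nan, 14.1, 12.4])
filled = fill_void(obs.copy(), clim, [2, 3])
```

The filled months inherit the seasonal cycle from the climatology, with the anomaly smoothly blended between the neighbors — reasonable for a two-month hole, but worth remembering that those months are not observations.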
In a more general sense, I think that “satellite era” in reference to 1979 means the start of continuous observations by satellite. As far as I know, the earlier Nimbus satellites, although quite useful, didn’t provide a continuous record.
Curiously, the authors don’t even make use of the HadISST satellite data. Instead, they are using the NSIDC data after 1978, viz:

For this latter period, we use satellite observations collected in the NSIDC Sea Ice Index [Fetterer et al., 2002, 2010] (“satellite NSIDC record” in Figure 1). We use the NSIDC record rather than the HadISST record from 1979 onwards because the NSIDC record provides a more consistent interpretation of the satellite period [Meier et al., 2007].

For those insisting that they need statistics, I note that the authors have not provided any statistics or any sensitivity analysis justifying this choice of one of the two satellite datasets, or demonstrating its effect on the final result.
w.

richard verney

William McClenney says:
May 3, 2012 at 6:53 pm
///////////////////////////////
I agree with William on this particular criticism (which does not detract from the thrust of your article as a whole).
One of the main problems in this so-called ‘science’ is the certainty with which data, trends, results, projections, predictions and proxies are claimed. Every claim and every interpretation in this so-called ‘science’ should be couched in terms of uncertainty. I therefore consider it wrong to criticise the authors of a paper when they are indicating uncertainty.
That said, I think your article raises some interesting issues and suggests flaws in the methodology used in the N & M paper. Unfortunately, no great surprise to see that data is being improperly handled.

Greg Holmes

Willis, when you write them I can follow them, pure genius, you should be President.

ParmaJohn

As with everything else AGW the paper described here hits another nail on the head. Obviously sea ice is diminishing due to mankind’s activities: in this case they have uncovered the nefarious effect of a man-made satellite on the entire Arctic’s stability.
Schrödinger and his cat would be proud, if we could only see them.
On the other hand, maybe this is just one more for Maurizio Morabito’s list of fortuitously timed observations. We switched to satellite observations in exactly the same month that natural variability was overrun by man-made forcing! What luck!

Partington et al 2003: http://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1058&context=usdeptcommercepub
“Both chart data and passive microwave data show a negative trend in integrated arctic-wide concentration over the period 1979-1994. The difference between the passive microwave and chart trends is statistically significant only in the summer, where it is about 2 percent per decade steeper in passive microwave data.”
“Differences between the NIC ice chart sea ice record and the passive microwave sea ice record are highly significant despite the fact that the NIC charts are semi-dependent on the passive microwave data, and it is worth noting these differences.”
There is a significant divergence between non-satellite measures of Arctic sea ice (including those derived from visible-light satellite imagery) and the satellite passive microwave measures. There are also different trends between satellites with decaying orbits (DMSP) and those with a controlled orbit (AQUA).
Could Instrumentation Drift Account for Arctic Sea Ice Decline? http://www.scribd.com/doc/89395853/Could-Instrumentation-Drift-Account-for-Arctic-Sea-Ice-Decline

Before anyone jumps to conclusions, GISST stands for Global sea-Ice and Sea Surface Temperature. It is a product of the Hadley Centre, not GISS.

Willis: HadISST and GISST are not sea ice records in themselves. They collect other sources of ice information. If you want to do your analysis of sea ice with primary ice data, you want NSIDC, not HadISST or GISST.

a reader

Re Nimbus I:
National Geographic, Feb. 1965, pp. 189–193. “Incredible photograph shows Earth from pole to pole.” Nimbus I was only up for 26 days but made 27,000 pictures. Both the North and South poles are shown in the picture on pp. 189–190. The images were created by heat scanning, so they show the scene through heat contrast. The photo of Europe on pp. 191–193 is quite good, as there aren’t as many clouds, which show up as “cool.”

Henry Clark

Although I wouldn’t trust any of it not to be more subtly *adjusted*, what they did for the pre-1950 part of the graph is amazingly blatant. I’m saving this graph as an example. Exactly flat stretches between peaks and troughs, unchanging over decades, never exist in the real world; anyone who knows anything about weather and climate knows they are never exactly constant. (If they want to pretend there is zero data, although that isn’t really true, then they have no justification for depicting that time period within their graph at all.)
The HadISST and GISST claims contrast utterly with
http://earthobservatory.nasa.gov/Features/ArcticIce/Images/arctic_temp_trends_rt.gif (temperatures as warm or warmer at the 1930s peak than in the late 20th century), as well as with the sea ice maps discussed in the recent http://wattsupwiththat.com/2012/05/02/cache-of-historical-arctic-sea-ice-maps-discovered/ article.
In one way, this makes sense. Someone like me, or almost any other skeptic, would never even consider trying to enter the field and get employment at institutions like that in the post-Mann era, because we know we would never fit in without being penalized, whereas the kind of ideologues who do not mind dishonesty in the environmentalist cause have increasingly gravitated towards them. Almost anything on climate published in the late 1990s and beyond, especially in the past several years, has to be double-checked for likely intentional skewing (not always, but far too often). In contrast, one can relatively trust older scientific publications, up to around the 1970s at least. At this point, I think the world essentially needs the uncertain but possible scenario of solar cycle 24 declining after its peak into a Grand Minimum, with a severe temperature drop over most of the next dozen or so years (aside from short fluctuations on the scale of one to several years from ocean variation and other effects like El Niños versus La Niñas), to destroy the spread of corruption.

Peter

Let’s say I weigh myself every day using a primitive balance-and-counterweight scale, and dutifully record the values I measure with this contraption. Then one day in 1979, I upgrade to a new, accurate digital scale and continue my weight record with the new, more accurate values. One day I look back at my weight record and find I’ve been gradually but surely gaining weight. I would like to examine the data to find out whether the reason I’ve been gaining weight is that I’ve been eating, but it occurs to me that I should carefully examine the older weight records first, because I know that data is less accurate. But Willis says, “you just can’t do that.” “You can’t compare one dataset with another when they are based on two totally different types of measurements,” Willis tells me. I try to explain to Willis that I am going to analyze the older data carefully to ensure it can be compared against the newer data. But to Willis, that doesn’t matter. He looks at my chart of data and says, “The inconsistency is clearly visible, with the variability of the pre- and post-1979 periods being very different.”
Well, there we have it. I guess I haven’t been gaining weight! Thanks Willis!

Jim G

Peter says:
May 4, 2012 at 7:29 am
There is a difference between your absolute weight gain and the variability of your weight record from one measurement system to another. It is the accuracy of the measurement of this variability that is the point of the discussion and the basis for the claims in the subject article.

No Peter, you don’t understand what “different types of measurements” means. The proper analogy would be that you “measured” your weight until 1978 by using a measuring tape and recording the circumference of your thigh, and then from 1979 on, you switched to a digital scale to weigh yourself. Since your thigh circumference did not increase with your weight gain (because it was all going into your belly), you conclude with near certainty from your measurements that you started gaining weight in 1979 and it must be due to AGW.

Don’t mind me. I’m trying to stop the emails about follow-up comments I keep getting from this thread. I’ve tried a number of other things and none have worked.
[Reply: Go to ‘settings’ on the page you received. Turn off emails. Worked for me. ~dbs, mod.]

dh7fb

“Dennis Ray Wingo says:
May 4, 2012 at 12:34 am
First of all, the referenced paper is completely wrong about the “beginning” of the satellite era. There is an almost continuous record of Arctic and Antarctic ice from satellites,stretching back to the early 1960s’.
My team has personally helped the NSIDC with Nimbus I, II, and III HRIR data. Beyond that, it has recently been discovered that there is a film copy of the Nimbus AVCS (visible light) images.
We need to put this 1979 boundary crap to bed.”
I checked the interannual std. dev. in the HadISST data (downloadable from here: http://climexp.knmi.nl/get_index.cgi ) and calculated the year-to-year differences. Before 1953 the data are not useful for calculating the variability (the 1945–1953 differences are zero!); this point is reflected correctly in the paper (Introduction, point 6). Still, from 1953 to 1965 the differences are much smaller than in the years after 1964, so the 1953–1965 data are somewhat suspicious as well.
From 1965 on, the data seem to be usable for calculating the variability. This is consistent with your statement. So the paper should have used only data from 1965 on.
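For anyone who wants to reproduce the check, the screening takes only a few lines. The series below is a toy stand-in mimicking the flat 1945–1953 stretch; substitute the real annual values downloaded from climexp:

```python
import numpy as np

def flag_frozen_years(years, extent, tol=1e-6):
    """Return the years whose value is identical to the previous
    year's, i.e. where the 'data' is just a repeated climatology."""
    diffs = np.abs(np.diff(extent))
    return [int(y) for y, d in zip(years[1:], diffs) if d < tol]

# toy series mimicking the artifact: flat 1945-1953, varying afterwards
years = np.arange(1944, 1958)
extent = np.array([12.1, 12.0, 12.0, 12.0, 12.0, 12.0, 12.0,
                   12.0, 12.0, 12.0, 11.9, 12.2, 11.8, 12.0])
print(flag_frozen_years(years, extent))
```

Any run of flagged years is a stretch where the dataset cannot, by construction, contain real year-to-year variability.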
dh7fb

Larry

…cause we had satellites and computers to measure that sea ice before 1953. That data is a waste of time.

Check NASA SP-489 for the years 1973-76.
Nimbus 1, 2, 3, and 4 have records, as do other satellites. We have done the work just for the HRIR data for these birds. The AVCS images are being scanned even as we speak. While no one satellite has a continuous record, if you paste all of them together the record goes back quite far.
There are papers on this from the AGU. I don’t want to steal anyone’s thunder from the NSIDC as they have done a major service in this area.

An addendum is that the HRIR images are worth far more than just the ice. We have been able to track mid-1960s hurricanes, and the first calculation of the global heat balance was done using Nimbus II HRIR images. In my opinion, a very good scientific study comparing the configuration of the atmosphere in the colder conditions of the mid-1960s with the ’70s, ’80s, and through today would provide major insights into atmospheric circulation and rainfall patterns, and actually allow some really good predictions of weather based on global temperatures.

Jim G

Larry says:
May 4, 2012 at 10:44 am
“…cause we had satellites and computers to measure that sea ice before 1953. That data is a waste of time.”
Didn’t know Sputnik I measured sea ice; and it was the first to orbit Earth, in 1957. Or was it the monkey on the Jupiter launch in 1959? What satellite was up in 1953? Not all “space launches” reached orbital velocity, which is required to be considered a “satellite”.