Steig's Antarctic Heartburn

[Cartoon: a flaming-hot Antarctic penguin]

Art courtesy Dave Stephens

Foreword by Anthony Watts: This article, written by the two Jeffs (Jeff C and Jeff Id), is one of the more technically complex essays ever presented on WUWT. It has been several days in the making. One of my goals with WUWT is to make sometimes difficult-to-understand science accessible to a wider audience. In this case the statistical analysis is rather difficult for the layman to comprehend, but I asked for (and got) an essay explained in terms I think many can grasp and understand. That being said, it is a long article, and you may have to read it more than once to fully grasp what has been presented here. Steve McIntyre of Climate Audit laid much of the groundwork for this essay, and from his work as well as the essay below, it is becoming clearer that Steig et al. (see “Warming of the Antarctic ice-sheet surface since the 1957 International Geophysical Year”, Nature, Jan 22, 2009) isn’t holding up well to rigorous tests. Unfortunately, Dr. Steig’s office has so far deferred several requests to provide the complete data sets needed to replicate and test his paper; he has left on a trip to Antarctica, and the remaining data is not “expected” to be available until his return.

To help lay readers understand the terminology used, here is a mini-glossary in advance:

RegEM – Regularized Expectation Maximization

PCA – Principal Components Analysis

PC – Principal Components

AWS – Automatic Weather Stations

One of the more difficult concepts is RegEM, an algorithm developed by Tapio Schneider in 2001. It is a form of expectation maximization (EM), a common and well-understood method for infilling missing data. As we’ve previously noted on WUWT, many of the weather stations used in the Steig et al. study had issues with being buried by snow, causing significant data gaps in the Antarctic record; in some cases, buried stations have been accidentally lost or confused with others at different latitudes and longitudes. Then of course there is the problem of coming up with trends for the entire Antarctic continent when most of the weather station data comes from the periphery and the peninsula, with very little data from the interior.

Expectation maximization is a method which uses a normal distribution to compute the most probable fit for a missing piece of data. Regularization is required when so much data is missing that the EM method will not solve on its own. That makes it a statistically dangerous technique to use, and as Kevin Trenberth, climate analysis chief at the National Center for Atmospheric Research, said in an e-mail: “It is hard to make data where none exist.” (Source: MSNBC article) It is also worth noting that one of the co-authors of Steig et al., Dr. Michael Mann, dabbles quite a bit in RegEM in this preparatory paper to Mann et al. 2008, “Return of the Hockey Stick”.
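To make the idea concrete, here is a minimal sketch (in Python/numpy) of EM-style infilling with a ridge-type regularization term. This is a conceptual toy only, not Schneider’s actual RegEM code, and the inputs (a time-by-station matrix with NaN gaps, the ridge parameter, the iteration count) are illustrative assumptions.

```python
import numpy as np

def em_infill(X, ridge=1e-3, n_iter=50):
    """Toy EM-style infilling (a conceptual sketch, NOT Schneider's RegEM).

    X is a time-by-station matrix with NaN marking missing readings
    (assumes every column has at least some real data). Each pass
    re-estimates the gaps in every gappy column from a ridge regression
    of that column on all the other columns."""
    X = X.copy()
    miss = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[miss] = np.take(col_mean, np.where(miss)[1])  # crude start: column means
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        Xc = X - mu
        C = (Xc.T @ Xc) / (len(X) - 1)              # covariance estimate
        for j in range(X.shape[1]):
            rows = miss[:, j]
            if not rows.any():
                continue
            k = np.arange(X.shape[1]) != j          # all the other columns
            # The ridge term is the 'regularization': without it the solve
            # fails (or blows up) when too much data is missing.
            A = C[np.ix_(k, k)] + ridge * np.eye(k.sum())
            b = np.linalg.solve(A, C[k, j])
            X[rows, j] = mu[j] + Xc[np.ix_(rows, k)] @ b
    return X
```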

For those that prefer to print and read, I’ve made a PDF file of this article available here.

Introduction

This article is an attempt to describe some of the early results from the Antarctic reconstruction recently published on the cover of Nature, which reported a warming trend in the Antarctic since 1956. Actual surface temperatures in the Antarctic are hard to come by, with only about 30 stations prior to 1980, recorded through tedious and difficult efforts by scientists in the region. In the 1980s more stations were added, including automatic weather stations (AWS) which sit in remote areas and report temperature readings automatically. Unfortunately, due to the harsh conditions in the region, many of these stations have gaps in their records or very short reporting histories (a few years in some cases). Very few stations are located in the interior of the Antarctic, leaving the trend for the central portion of the continent relatively unknown. The locations of the stations are shown on the map below.

[Figure 1: Map of Antarctic temperature station locations]

In addition to the stations, there is satellite data from an infrared surface temperature instrument which records the actual emission from the surface of the ice/ground in the Antarctic. This is different from the microwave absorption measurements behind the UAH/RSS data, which measure temperatures through a thickness of the atmosphere. The satellite dataset didn’t start until 1982.

Steig 09 is an attempt to reconstruct continent-wide temperatures using a combination of measurements from the surface stations shown above and the post-1982 satellite data. The complex math behind the paper is an attempt to ‘paste’ the 30-odd pre-1982 real surface station measurements onto 5509 individual gridcells from the satellite data. An engineer or vision system designer could use several straightforward methods which would ensure reasonable distribution of the trends across the grid, based on a huge variety of area weighting algorithms; the accuracy of any of these methods would depend on the amount of data available. These well-understood methods were ignored in Steig09 in favor of RegEM.

The use of Principal Component Analysis in the reconstruction

Steig 09 presents the satellite reconstruction as the trend, and provides an AWS reconstruction as verification of the satellite data rather than as a separate stand-alone result, presumably due to the sparseness of the actual data. The RegEM algorithm was used for infilling the missing data: everything pre-1982 for the satellites, and gaps in all years for the very sparse AWS data. While Dr. Steig has provided the reconstructions to the public, he has declined to provide any of the satellite, station or AWS temperature measurements used as inputs to the RegEM algorithm. Since the station and AWS measurements were available through other sources, this paper focuses on the AWS reconstruction.

Without getting into the details of PCA, the algorithm uses covariance to assign weighting to patterns in the data and takes no input whatsoever for actual station location. In other words, the algorithm has no knowledge of the distance between stations and must infill missing data based solely on the correlation with other data sets. This means there is a possibility that, with improper or incomplete checks, a trend from the peninsula on the west coast could be applied all the way to the east. The only control is the correlation of one temperature measurement to another.

If you were an engineer concerned with the quality of your result, you would recognize the possibility of accidental mismatch and do a reasonable amount of checking to ensure that the stations were properly assigned after infilling. Steig et al. described no attempts to check for this basic potential problem with RegEM analysis. This paper will describe a simple method we used to determine that the AWS reconstruction is rife with spurious correlations (i.e., correlations which appear real but aren’t), attributable to the methods used by Dr. Steig. These spurious correlations can take a localized climatic pattern and “smear” it over a large region that lacks adequate data of its own.

Now is where it becomes a little tricky. RegEM uses a reduced-information dataset to infill the missing values. The dataset is reduced by principal component analysis (PCA), which replaces each trend with a similar-looking one that is then used for the covariance analysis. Think of it like a data compression algorithm for a picture: it uses less computer memory than the actual image, but higher compression levels produce a fuzzier result.

[Figure 2: The same image at increasing levels of compression]

While the second image is still recognizable, the amount of data used to represent it is reduced considerably. This works fine for pictures at reasonable compression levels, but the data from some pixels has blended into others. Steig 09 uses 3 trends to represent all of the data in the Antarctic. In its full complexity, using 3 PCs is analogous to representing not just a picture but a movie of the Antarctic with three color ‘trends’, where the color of each pixel changes according to different weights of the same red, green and blue color trends (the PCs). With enough PCs the movie could be replicated perfectly, with no loss.
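The compression analogy is easy to demonstrate with a few lines of numpy. This is just an illustration of rank-k truncation on a random matrix, not the actual satellite data; the matrix size and the k values are arbitrary assumptions.

```python
import numpy as np

def truncate(M, k):
    """Keep only the k leading 'trends' (singular vectors) of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

rng = np.random.default_rng(0)
img = rng.random((100, 100))   # stand-in for an image (or a data matrix)
for k in (3, 7, 50):
    err = np.linalg.norm(img - truncate(img, k)) / np.linalg.norm(img)
    print(f"k={k:2d}  relative error {err:.3f}")   # small k = fuzzier picture
```

Here’s an important quote from the paper.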

“We therefore used the RegEM algorithm with a cut-off parameter K=3. A disadvantage of excluding higher-order terms (k>3) is that this fails to fully capture the variance in the Antarctic Peninsula region.  We accept this tradeoff because the Peninsula is already the best-observed region of the Antarctic.”

http://www.climateaudit.org/wp-content/uploads/2009/02/regpar9.gif

Above: a graph from Steve McIntyre of Climate Audit, where he demonstrates how “K=3 was in fact a fortuitous choice, as this proved to yield the maximum AWS trend, something that will, I’m sure, astonish most CA readers.”

K=3 means only 3 trends were used; the ‘lack of captured variance’ is an acknowledgement and acceptance of the fuzziness of the image. It is easy to imagine how difficult it would be to represent a complex movie of Antarctic temperatures from 1957 to 2006 with any sharpness using the same 3 color trends reweighted for every pixel. In the satellite version of the Antarctic movie, the three trends look like this.

[Figure 3: The three principal component time series used in the satellite reconstruction]

Note that the sudden step in the 3rd trend would cause a jump in the ‘temperature’ of the entire movie. This represents the temperature change between the pre-1982 recreated data and the post-1982 real data in the satellite reconstruction. It is a strong yet overlooked hint that something may not be right with the result.

In the case of the AWS reconstruction, we have only 63 AWS stations to make up the movie screen, and the trends of 42 surface stations are used to infill the missing data. If the data from one surface station is copied to the wrong AWS stations, the average will overweight some trends and underweight others. So the question becomes: is the compression level too high?

The problems that arise when using too few principal components

Fortunately, we’re here to help in this matter. Steve McIntyre again provided the answer with a simple plot of the actual surface station data correlation versus distance. This plot compares the similarity (correlation) of each temperature station’s record with those of the 41 other manual surface stations, against the distance between them. A correlation of 1 means the data from one station exactly matches the other. Because the A -> B correlation isn’t a perfect match for B -> A, there are 42*42 separate points in the graph. This first scatter plot is from measured temperature data, prior to any infilling of missing measurements. Station-to-station distance is shown on the X axis; the correlation coefficient is shown on the Y axis.

[Figure 4: Correlation vs. distance for measured surface station data]

Since the plot above represents the only real data we have extending back to 1957, it demonstrates the expected ‘natural’ spatial relationship that any properly controlled RegEM analysis should reproduce. The correlation drops with distance, which we would expect, because temperatures from stations thousands of kilometers apart should be less related than those from stations next to each other. (Note that a few stations show a positive correlation beyond 6000 km. These are entirely from non-continental northern islands inexplicably used by Steig in the reconstruction. No continental stations exhibit positive correlations at these distances.) If RegEM works, the RegEM imputed (infilled) data’s correlation vs. distance should show a very similar pattern to the real data. Here’s a graph of the AWS reconstruction with infilled temperature values.

[Figure 5: Correlation vs. distance for the Steig 3 PC AWS reconstruction]

Compare the AWS plot above with the previous plot from actual measured temperatures. The infilled AWS reconstruction has no clearly evident pattern of decay over distance. In fact, many station pairs show a correlation close to 1 at distances of 3000 km! The measured station data is our best indicator of true Antarctic trends, and it shows no sign that these long-distance correlations occur. Common sense should also make one suspicious of them: they are comparable to data indicating that Los Angeles and Chicago have closely correlated climates.
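For readers who want to poke at this themselves, here is a sketch of how a correlation-versus-distance scatter can be built. It assumes a complete time-by-station array plus station coordinates; the names are ours, and the version behind the plots above works on gappy records, which takes more care.

```python
import numpy as np

def corr_vs_distance(temps, lat_deg, lon_deg):
    """Pairwise station correlations against great-circle separation.

    temps: time-by-station array (complete, for simplicity);
    lat_deg, lon_deg: station coordinates in degrees."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = temps.shape[1]
    R = np.corrcoef(temps, rowvar=False)      # n x n correlation matrix
    out = []
    for i in range(n):
        for j in range(n):
            # great-circle distance on a 6371 km sphere
            cosang = (np.sin(lat[i]) * np.sin(lat[j]) +
                      np.cos(lat[i]) * np.cos(lat[j]) * np.cos(lon[i] - lon[j]))
            d = 6371.0 * np.arccos(np.clip(cosang, -1.0, 1.0))
            out.append((d, R[i, j]))
    return np.array(out)                      # columns: distance (km), r
```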

It was mentioned earlier that the use of 3 PCs is analogous to the loss of detail that occurs in data compression. Since the AWS input data is available, it is possible to regenerate the AWS reconstruction using a higher number of PCs. It stood to reason that spurious correlations could be reduced by retaining the spatial detail lost in the 3 PC reconstruction. Using RegEM, we generated a new AWS reconstruction from the same input data but with 7 PCs. The distance correlations are shown in the plot below.

[Figure 6: Correlation vs. distance for the 7 PC AWS reconstruction]

Note the dramatic improvement over the previous plot. The correlation decay with distance so clearly seen in the measured station temperature data has returned. While the cone of the RegEM data is slightly wider than that of the ‘real’ surface station data, the counterintuitive long-distance correlations seen in the Steig reconstruction have completely disappeared. It seems clear that limiting the reconstruction to 3 PCs produced numerous spurious correlations when infilling missing station data.
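For the curious, the knob being turned here can be sketched generically as an iterative infill through a rank-k approximation, shown below under the same caveats as before: this is not the actual RegEM code used in Steig09, and aws_matrix is a hypothetical input.

```python
import numpy as np

def pc_infill(X, k=3, n_iter=100):
    """Iterative infilling through a rank-k (k principal components)
    approximation. A generic sketch of the truncation idea only; the
    RegEM code actually used in Steig09 differs in its details."""
    X = X.copy()
    miss = np.isnan(X)
    X[miss] = np.nanmean(X)                  # crude start: grand mean
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        low_rank = U[:, :k] * s[:k] @ Vt[:k, :] + mu
        X[miss] = low_rank[miss]             # only the gaps get replaced
    return X

# recon3 = pc_infill(aws_matrix, k=3)   # Steig-style truncation
# recon7 = pc_infill(aws_matrix, k=7)   # retains more spatial detail
```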

Using only 3 principal components distorts temperature trends

If Antarctica had uniform temperature trends across the continent, the spurious correlations might not have a large impact on the overall reconstruction. Individual sites might have some errors, but the overall trend would be reasonably close. However, Antarctica is anything but uniform. The spurious correlations can allow unique climatic trends from a localized region to be spread over a larger area, particularly if that area lacks detailed climate records of its own. It is our conclusion that this is exactly what is happening in the Steig AWS reconstruction.

Consider the case of the Antarctic Peninsula:

  • The peninsula is geographically isolated from the rest of the continent
  • The peninsula is less than 5% of the total continental land mass
  • The peninsula is known to be warming at a rate much higher than anywhere else in Antarctica
  • The peninsula is bordered by a vast area known as West Antarctica that has extremely limited temperature records of its own
  • 15 of the 42 temperature surface stations (35%) used in the reconstruction are located on the peninsula

If the Steig AWS reconstruction were properly correlating the peninsula stations’ temperature measurements to the AWS sites, you would expect to see the highest rates of warming at the peninsula extremes. This is the pattern seen in the measured station data. The plot below shows the temperature trends for the reconstructed AWS sites for the period 1980 to 2006. This time frame was selected because it is the period in which AWS data exists; prior to 1980, 100% of the AWS reconstructed data is artificial (i.e., infilled by RegEM).

[Figure 7: 1980-2006 temperature trends, Steig 3 PC AWS reconstruction]

Note how warming extends beyond the peninsula extremes down toward West Antarctica and the South Pole.  Also note the relatively moderate cooling in the vicinity of the Ross Ice Shelf (bottom of the plot).  The warming once thought to be limited to the peninsula appears to have spread.  This “smearing” of the peninsula warming has also moderated the cooling of the Ross Ice Shelf AWS measurements.  These are both artifacts of limiting the reconstruction to 3 PCs.

Now compare the above plot to the new AWS reconstruction using 7 PCs.

[Figure 8: 1980-2006 temperature trends, 7 PC AWS reconstruction]

The difference is striking. The peninsula has become warmer, and the warming is largely limited to its confines. West Antarctica and the Ross Ice Shelf area have become noticeably cooler. This agrees with the commonly-held belief, prior to Steig’s paper, that the peninsula is warming while the rest of Antarctica is not.

Temperature trends using more traditional methods

In deriving a continental trend for Antarctic warming, Steig used a simple average of the 63 reconstructed AWS time series. As can be seen in the plots above, the AWS stations are heavily weighted toward the peninsula and the Ross Ice Shelf area. Steig’s simple average is shown below. The linear trend for 1957 through 2006 is +0.14 deg C/decade. It is worth noting that if the time frame is limited to 1980 to 2006 (the period of actual AWS measurements), the trend changes to cooling, at -0.06 deg C/decade.

[Figure 9: Steig’s simple average of the 63 reconstructed AWS series]
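For reference, once a reconstruction exists, the continental mean and its decadal trend take only a few lines to compute. A minimal sketch, where recon (a years-by-station array) and yrs are hypothetical stand-ins for the reconstruction output:

```python
import numpy as np

def decadal_trend(years, series):
    """Least-squares slope, converted to deg C per decade."""
    return 10.0 * np.polyfit(years, series, 1)[0]

# cont_mean = recon.mean(axis=1)     # Steig-style simple average of 63 series
# print(decadal_trend(yrs, cont_mean))                    # full 1957-2006 span
# aws_era = yrs >= 1980
# print(decadal_trend(yrs[aws_era], cont_mean[aws_era]))  # AWS era only
```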

We used a gridding methodology to weight the AWS reconstructions in proportion to the area each represents. Under Steig’s method, 3 stations on the peninsula covering 5% of the continent’s area would have the same weighting as three interior stations spread over 30% of the continent’s area. The gridding method we used is comparable to that utilized in other temperature constructions such as James Hansen’s GISTEMP. The gridcell map used for the weighted 7 PC reconstruction is shown here.

[Figure 10: Gridcell map used for the weighted 7 PC reconstruction]

Cells with a single letter contain one or more AWS temperature stations. If more than one AWS falls within a gridcell, the results were averaged and assigned to that cell. Cells with multiple letters had no AWS within them, but had three or more contiguous cells containing AWS stations; imputed temperature time series were assigned to these cells based on the average of the neighboring cells. Temperature trends were calculated both with and without the imputed cells. The reconstruction trend using 7 PCs and a weighted station average follows.

[Figure 11: Weighted 7 PC reconstruction, continental average]

The trend has decreased to +0.08 deg C/decade. Although it is not readily apparent in this plot, from 1980 to 2006 the temperature profile has a pronounced negative trend.
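A minimal sketch of the gridcell weighting idea follows, with hypothetical inputs: series is a time-by-station array, cell_of_station maps each station to a gridcell id, and cell_area gives each cell’s area. Our actual gridding (including the imputed cells) involved more bookkeeping than this.

```python
import numpy as np

def grid_weighted_mean(series, cell_of_station, cell_area):
    """Average stations into their gridcells, then area-weight the cells."""
    by_cell = {}
    for s, c in enumerate(cell_of_station):
        by_cell.setdefault(c, []).append(series[:, s])
    cell_ids = sorted(by_cell)
    # one mean series per occupied cell, stacked time-by-cell
    cell_means = np.stack([np.mean(by_cell[c], axis=0) for c in cell_ids], axis=1)
    w = np.array([cell_area[c] for c in cell_ids], dtype=float)
    return cell_means @ (w / w.sum())    # area-weighted continental series
```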

Temporal smearing problems caused by too few PCs?

The temperature trends using the various reconstruction methods are shown in the table below. We have broken the trends down into three time periods: 1957 to 2006, 1957 to 1979, and 1980 to 2006. The time frames are not arbitrarily chosen; they mark an important distinction in the AWS reconstructions. There is no AWS data prior to 1980, so in the 1957 to 1979 time frame every single temperature point is a product of the RegEM algorithm. In the 1980 to 2006 time frame, AWS data exists (albeit quite spotty at times) and RegEM leaves the existing data intact while infilling the missing values.

We highlight this distinction because limiting the reconstruction to 3 PCs has an additional pernicious effect beyond the spatial smearing of the peninsula warming. In the table below, note the balance between the trends of the 1957 to 1979 era and those of the 1980 to 2006 era. In Steig’s 3 PC reconstruction, moderate warming prior to 1980 is balanced against slight cooling after 1980. In the new 7 PC reconstruction, the early era shows dramatic warming and the later era strong cooling. We believe the 7 PC reconstruction more accurately reflects the true trends, for the reasons stated earlier in this paper. However, the mechanism for this temporal smearing of trends is not fully understood and is under investigation. It does appear clear that limiting the selection to three principal components causes warming that is largely confined to the pre-1980 time frame to appear more continuous and evenly distributed over the entire temperature record.

Reconstruction               | 1957-2006 trend    | 1957-1979 trend (pre-AWS) | 1980-2006 trend (AWS era)
-----------------------------|--------------------|---------------------------|--------------------------
Steig 3 PC                   | +0.14 deg C/decade | +0.17 deg C/decade        | -0.06 deg C/decade
New 7 PC                     | +0.11 deg C/decade | +0.25 deg C/decade        | -0.20 deg C/decade
New 7 PC weighted            | +0.09 deg C/decade | +0.22 deg C/decade        | -0.20 deg C/decade
New 7 PC wgtd, imputed cells | +0.08 deg C/decade | +0.22 deg C/decade        | -0.21 deg C/decade

Conclusion

The AWS reconstruction from which this incredibly long post was built was used in Steig09 only as verification of the satellite data; the statistics used for that verification are another subject entirely. Where Steig09 falls short in the verification is that RegEM, in effect, applied inappropriate area weighting to individual temperature stations. The trends from the AWS reconstruction have clearly blended into distant stations, creating an artificially high warming result. The RegEM methodology also appears to have blended warming that occurred decades ago into more recent years, presenting a misleading picture of continuous warming. It should also be noted that every attempt we made to restore detail to the reconstruction or to weight the station data resulted in reduced warming and increased cooling in recent years. None of these methods produced more warming than that shown by Steig.

We don’t yet have the satellite data (Steig has not provided it), so the argument will be:

“Silly Jeffs, you haven’t shown anything; the AWS wasn’t the conclusion, it was the confirmation.”

To that we reply with an interesting distance correlation graph of the satellite reconstruction (also from only 3 PCs).  The conclusion has the exact same problem as the confirmation.  Stay tuned.

[Figure: Correlation vs. distance for the satellite reconstruction (also from only 3 PCs)]

(Graph originally calculated by Steve McIntyre)


Comments
P Folkens
March 1, 2009 10:42 am

1) Providing PDFs of important contributions like this one raises the value of WUWT to a new level. Thank you!
2) This work should be submitted to Nature under “correspondences” or as a full-tilt rebuttal (if not done already). If it is rejected, a scathing rebuke of Nature is warranted.
3) President Obama has thrown down the gauntlet over the new budget, which includes the carbon cap and trade scheme. Perhaps a summary of “The Jeffs” article, linked to the complete article, needs to be sent to every member of Congress, emphasizing that a more balanced scientific review is warranted before imposing a $600 billion+ new tax on the country in the midst of a major economic recession.

Terry
March 1, 2009 10:47 am

Steig’s Mona Lisa appears to have grown a moustache… 🙂 Very nice work, Jeffs, thanks for summing up several weeks of effort into a succinct, easily understandable analysis.

March 1, 2009 11:03 am

One more thought.
With such rich quality of material appearing as Jeff & Jeff (to be known as JJ 09 in future??) I dream again of the wiki we climate skeptics owe ourselves to put together, to write gold-standard pieces like this one, that we all know and refer to, that deconstruct the Hockey Stick and all the rest, and are comprehensible like this is, to get the word out about the real science. Now couldn’t it be fun too, if we can do it together?

Stephen Brown
March 1, 2009 11:10 am

[snip – noted, thank you, but off topic; Arctic Trying to keep the discussion centered on the Antarctic]

Rocket Man
March 1, 2009 11:21 am

Thank you Jeff, Jeff and Anthony for spending your own time to do this analysis. It is too bad you don’t get paid by “Big Oil” (or by anybody for that matter) to do it, unlike Steig and the Team who get paid by “Big Government” to do their work.
In my opinion, what this analysis shows is the futility of trying to measure global temperatures (or even continent wide temperatures) with ground based measurements. GISS Temp has a lot of the same problems. Sure, you can use the data to do an analysis, but the results will be strongly dependent on what methodology you use. And without knowing the “actual” temperature trends, which of course is what you are trying to find, it can never be known if the methodology used gives you an accurate picture of what is actually happening.
What Steig and company should have done is to release a paper showing a representative number of different methodologies (not all of them, because that would be an infinite number) and their results, and then present an argument as to why the one they chose is the best one.
Of course the best way is to quit using ground based measurements entirely as primary sources of temperature data and use exclusively satellite data. With all the money spent trying to interpret ground based data, we could launch a couple more satellites and get high quality, wide coverage data of the entire planet.

March 1, 2009 11:34 am

The difference being if we get it wrong the product fails and people get fired.
Thank you, Jeff C. I appreciate your candor. I like extrapolatory algorithms, too, in their proper place.
I wish it were not the case, but good people, friends of mine, have lost their jobs because of GW alarmism.
In my own field, forestry, vast tracts of heritage forests are being incinerated (millions of acres per year) with massive deleterious effects to vegetation, habitat, watersheds, airsheds, public health and safety, homes, lives, etc. Those catastrophes are blamed on phony global warming, a paltry excuse for failing to do the active stewardship required.
We face a runaway government heck-bent on imposing cap-and-stifle carbon taxes, also justified by scientifically defective GW alarmism.
These wholly preventable real world tragedies and injustices bug me no end. I appreciate your efforts to debunk the bunkum. I wish we could do more to stop the actual active and future tragedies incited by the GW claptrap.

timbrom
March 1, 2009 11:35 am

Has anyone calculated the energy required to melt all the ice in Antarctica and then compared that with the available energy transferable to the continent? At a rough guess I’d hazard that it would take a couple of years, at least.

Just want truth...
March 1, 2009 12:14 pm

Anthony
“REPLY:Barring their acceptance of publishing a rebuttal, perhaps we should consider a full page ad in Nature. I think we could garner enough financial support from readers to make that happen. – Anthony”
I’m in.

March 1, 2009 12:15 pm

Count me in too.

Aron
March 1, 2009 12:15 pm

There have been people looking for the cause of the peninsula’s warming. Some have suggested volcanic activity. There are other suggestions too.
What I have not heard yet is what about the winds that blow over from South America. Could wind be bringing some accumulated heat from the many urban heat islands in South America? If so, then the warming is not caused by climate change but by atmospheric temperature contamination from another continent.

J. Peden
March 1, 2009 12:16 pm

Many thanks, again, Jeffs, and your post wasn’t really that long.
If I place more thermometers on my one acre, do I now own more acres?/sarc
As already noted, it should be interesting to see what Nature does. This is perhaps Nature’s moment of truth. Why don’t they just admit that their peer review is not an audit or a guarantor of truth – or something?

Aron
March 1, 2009 12:25 pm

OK, I’ve looked at the direction the Westerlies (winds that flow from west to east) usually take from South America, and they do pass directly over the Antarctic Peninsula. We need more attention paid to this because it seems that the temperature monitors are simply being contaminated by warmer winds from South America.

Policyguy
March 1, 2009 12:30 pm

Would someone please parse this acknowledgment that there is a disadvantage to using K>3?
“We therefore used the RegEM algorithm with a cut-off parameter K=3. A disadvantage of excluding higher-order terms (k>3) is that this fails to fully capture the variance in the Antarctic Peninsula region. We accept this tradeoff because the Peninsula is already the best-observed region of the Antarctic.”
This appears to be doublespeak. It seems to me to say that the author is sacrificing greater accuracy on the peninsula in order to see greater clarity in the rest of the continent. But if it is true that we know more about the peninsula, why shouldn’t that be used to verify information elsewhere?

HasItBeen4YearsYet?
March 1, 2009 12:54 pm

Rocket Man (11:21:30) :
I would think we would still need some ground based stations for calibration purposes, just to keep the satellites honest.

DAV
March 1, 2009 12:54 pm

Wyatt A (08:21:13) : Can anyone point me to a good URL to get a better understanding of Principle Component Analysis?
The problem with most PCA discussions is that they are fairly thick unless you are good at seeing mathematical relationships. I personally think in images. Anthony’s second link has a good illustration at Fig 2a. (http://www.snl.salk.edu/~shlens/pub/notes/pca.pdf). Here’s my mental image:
A multivariate vector is a multidimensional vector where each variable defines an axis. If the variables are uncorrelated, they will be orthogonal with each other. For a number of reasons that I won’t go into, anything else is often undesirable. The goal of PCA is to change the data coordinates to an orthogonal set.
It does this by placing axes centered on the average variance. For it to work properly, the variance has to be normalized. If you look at Fig. 2a, the original variables (Xa, Ya) are highly correlated. The largest variance extends along the diagonal between the two and the smallest is 90 degrees from that. So two new axes (variables) are generated. It is customary to label them in order of descending contribution. In Fig 2a, PC1 is the longer line and PC2 is the shorter. In the Wikipedia article Anthony linked under Derivation of PCA using the covariance method, this corresponds to having a covariance matrix with the only non-zero values on the diagonal.
To use the PCA on the data, one rotates and translates the data to the PC coordinate system.
Note that the new coordinates may or may not have any physical interpretation. It is often hoped that PCA will separate the individual contribution (signal) of each variable to the observed data but that can only be proven outside of the PCA. Likewise, the PCA may or may not have predictive power or usefulness in obtaining a prediction.
HTH

a jones
March 1, 2009 12:59 pm

This is an excellent piece of work which highlights the dangers of the modern fashion for using statistical reconstructions which are open to interpretation: not least because the method used can be chosen to produce a desired result.
It doesn’t only happen in climate studies either.
Now this may be OT, if so snip, but what I find of particular interest, and not known before this analysis, is that there was a warming trend in the Antarctic followed by a cooling trend which seems to be the inverse of that in the Arctic.
This may be coincidence and mean nothing at all.
But the Arctic was cooling until 1979 which is why that date is used for ice extent data, because the ice was then at its maximum. Actual satellite data goes back to 1974 when the ice extent was rather less.
Similarly, we know that the Antarctic sea ice extent and season (the period for which the Antarctic sea ice persists in seasonally open water) have increased since the late 1970’s; the season is about four weeks longer today than back then.
The speculation that there is an oscillation of temperature between the two poles with one warming whilst the other cools and vice versa is old: and essentially based on sea ice records.
Yet here we have a new source of data to match the temperature data in the Arctic: and it shows just such an inverse relationship.
It also goes to show how important it is to use statistical techniques to reveal what is actually happening rather than to support some preconceived idea.
Because I will take a small bet that neither of the Jeffs, ably assisted by Steve, knew that the outcome of their analysis might reveal either this rather interesting fact, or that the said fact might possibly help to confirm that there may indeed be an inverse temperature relationship between the poles.
Fascinating.
Kindest Regards

Norm in the Hawkesbury
March 1, 2009 1:20 pm

Excuse a poorly educated old man who can only understand by reading a lot, relying on the resultant osmosis of knowledge and intuition.
It looks to me like there are two separate climatic areas in the Antarctic: the peninsula and the rest.
Could we do an extrapolation of each area individually, note the variance and then work out the cause?
I am led to believe the peninsula has a warmer earth crust below it than the majority of the continent. Would that not be like the inverse of including Alaska in the US mainland figures? They are not really alike.
Also, given that the peninsula protrudes into the ocean, wouldn’t it be affected by the prevailing weather patterns from the ocean?

Jeff Alberts
March 1, 2009 1:21 pm

I’m so proud to be a Jeff! 😉 Too bad I’m not nearly as smart as these guys. 🙁
I still maintain that infilling ANY temperature data cannot be rationalized. Unless the working and non-working sensors are within a couple hundred meters of each other, they will tend to have different weather.

thefordprefect
March 1, 2009 1:23 pm

No infill. No reconstruction. Just the data:
1971 2000 temperature trends from British Antarctic Survey
http://www.nerc-bas.ac.uk/public/icd/gjma/reader.temp.pdf
1951 to 2006 temperature trends
http://www.nerc-bas.ac.uk/public/icd/gjma/trends2006.col.pdf
1951 version shows most stations with increase in temperature
1971 version shows warming from 180 to 15 deg E (clockwise)

Wyatt A
March 1, 2009 1:27 pm

Anthony and DAV,
Thanks for the links and discussions!
This the most awesomely-awesomest of websites. Most deserved of the “Best Science Blog” award.
Jeff-n-Jeff,
Great work!
I was wondering though, rather than correlate station distance to temperature maybe we should look at latitude? Stations separated by miles, but at the same latitude, might have a strong correlation.
Thanks again,

BarryW
March 1, 2009 1:35 pm

So if the rule of thumb for climate vs weather is 30 yrs and even Steig’s analysis shows a cooling since 1979, then the antarctic climate is, by the climatologists’ own definition, definitely getting colder! Yet they publish the opposite. Alert the media!

March 1, 2009 1:36 pm

Until I read through this, I had no idea that there was so much complexity in this. I am reminded of Dante, and his observation that “complex frauds” resulted in having those who willingly participated in them spend eternity somewhere below the 6th or 7th Circle.
The Global Warming “calculations”, here and in other areas of “concern” certainly seem to fall under the classification of “complex fraud”.
catholicfundamentalism.com makes use of many of your articles to let believers know that they should pray for those who tell lies for money or prestige. Or, both. They seem to define “lost souls” by their very existence.

John F. Hultquist
March 1, 2009 1:47 pm

Jeff C. (09:18:12) :
Allen63 (06:03:48) :
Regarding the number of components used – A simple word explanation:
I’ll use an off-topic example because I think all will be able to relate to it.
Say I had data by county that included “new car purchases” along with age classes (0-5, 5-10, etc.); sex (2 classes); income level (again with several classes); % foreign born, and on and on. Some of these variables are obviously related (maybe r^2 > .9). The goal is to find one, two, or more variables that “explain” our dependent variable, namely, “new car purchases.”
For example, % with income >$100,000 might be one with high explanatory power. But that would be highly correlated with age between 50-65, and also, % employed in the “high tech” industry. Think of several other things that would be related to these.
The idea then is to manipulate the data in such a way as to collapse these several related measures (variables) into a “component” variable that would, by itself, have a high correlation with our independent variable.
We would like to have several of these components that are not themselves correlated, but when these several are all used they have high explanatory power. That means they should explain the variance in the data of the independent variable.
We also would like these components to be “interpretable,” or to have a meaning we could assign a name to; in my example, maybe the term “Status.” You want principal components because, as the number of components increases, each next one has less and less explanatory power and is less interpretable, or has less meaning. However, the more you have or use, the more variance you explain, but the tradeoff is you can’t say what was added to your degree of understanding.
In the case of this temperature data, this last statement would seem to be a non-issue.

Rocket Man
March 1, 2009 1:50 pm

HasItBeen4YearsYet? (12:54:42) :
If you are trying to measure atmospheric warming, using ground based measurements is not going to give you the atmospheric temperature of the column of air over the measurement site. Rather, ground based measurements are going to give you a representation of the interaction between the ground and the air at ground level. While this information might be useful in determining micro climate effects, it is not very useful in telling you what is happening in the atmosphere as a whole.

John F. Hultquist
March 1, 2009 1:52 pm

In my post “That means they should explain the variance in the data of the independent variable.” This last should be dependent variable. Sorry for the too quick submit.