Steig's Antarctic Heartburn

flaming-hot-antarctic-penguin

Art courtesy Dave Stephens

Foreword by Anthony Watts: This article, written by the two Jeffs (Jeff C and Jeff Id), is one of the more technically complex essays ever presented on WUWT. It has been several days in the making. One of the goals I have with WUWT is to make sometimes difficult-to-understand science understandable to a wider audience. In this case the statistical analysis is rather difficult for the layman to comprehend, but I asked for (and got) an essay explained in terms I think many can grasp and understand. That being said, it is a long article, and you may have to read it more than once to fully grasp what has been presented here. Steve McIntyre of Climate Audit laid much of the groundwork for this essay, and from his work as well as the essay below, it is becoming clearer that Steig et al (see “Warming of the Antarctic ice-sheet surface since the 1957 International Geophysical Year”, Nature, Jan 22, 2009) isn’t holding up well to rigorous tests. Unfortunately, Steig’s office has so far deferred several requests to provide the complete data sets needed to replicate and test the paper; Dr. Steig has left on a trip to Antarctica, and the remaining data is not “expected” to be available until his return.

To help layman readers understand the terminology used, here is a mini-glossary in advance:

RegEM – Regularized Expectation Maximization

PCA – Principal Components Analysis

PC – Principal Components

AWS – Automatic Weather Stations

One of the more difficult concepts is RegEM, an algorithm developed by Tapio Schneider in 2001.  It is a form of expectation maximization (EM), which is a common and well understood method for infilling missing data. As we’ve previously noted on WUWT, many of the weather stations used in the Steig et al study had issues with being buried by snow, causing significant data gaps in the Antarctic record; in some cases buried stations have even been accidentally lost or confused with others at different lat/lons. Then of course there is the problem of coming up with trends for the entire Antarctic continent when most of the weather station data is from the periphery and the peninsula, with very little data from the interior.

Expectation maximization is a method which uses a normal distribution to compute the most probable fit to a missing piece of data.  Regularization is required when so much data is missing that the EM method won’t converge on a solution.  That makes it a statistically dangerous technique to use, and as Kevin Trenberth, climate analysis chief at the National Center for Atmospheric Research, said in an e-mail: “It is hard to make data where none exist.” (Source: MSNBC article) It is also worth noting that one of the co-authors of Steig et al, Dr. Michael Mann, dabbles quite a bit in RegEM in this preparatory paper to Mann et al 2008, “Return of the Hockey Stick”.
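To make the idea concrete, here is a minimal sketch in R of regression-style infilling on a made-up three-station example. It is an illustration of the concept only, not Schneider’s RegEM code and not the data used in the paper; the station names and numbers are invented.

```r
# Minimal sketch of regression-style infilling (NOT Schneider's RegEM).
# Three invented stations share a common signal; station C has 60 missing months.
set.seed(1)
n      <- 240                                        # 20 years of monthly anomalies
common <- as.numeric(arima.sim(list(ar = 0.5), n))   # shared "climate" signal
X <- data.frame(A = common + rnorm(n, sd = 0.5),
                B = common + rnorm(n, sd = 0.5),
                C = common + rnorm(n, sd = 0.5))
gaps      <- sample(n, 60)
X$C[gaps] <- NA                                      # knock 60 months out of station C

# Regress the gappy station on the complete ones and fill the gaps with fitted
# values.  Full EM iterates steps like this while re-estimating the covariance;
# regularization becomes necessary when the gaps are too large for EM to converge.
fit       <- lm(C ~ A + B, data = X)
X$C[gaps] <- predict(fit, newdata = X[gaps, ])

cor(X$C[gaps], common[gaps])                         # how close are the infilled values?
```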

For those that prefer to print and read, I’ve made a PDF file of this article available here.

Introduction

This article is an attempt to describe some of the early results from our analysis of the Antarctic reconstruction recently published on the cover of Nature, which demonstrated a warming trend in the Antarctic since 1957.  Actual surface temperatures in the Antarctic are hard to come by, with only about 30 stations prior to 1980, recorded through tedious and difficult efforts by scientists in the region.  In the 1980s more stations were added, including automatic weather stations (AWS) which sit in remote areas and report temperature readings automatically.  Unfortunately, due to the harsh conditions in the region, many of these stations have gaps in their records or very short reporting periods (a few years in some cases).  Very few stations are located in the interior of the Antarctic, leaving the trend for the central portion of the continent relatively unknown.  The locations of the stations are shown on the map below.

2jeffs-steig-image1

In addition to the stations, there is satellite data from infrared surface temperature measurements, which record the temperature of the actual emission from the surface of the ice/ground in the Antarctic.  This is different from the microwave absorption measurements made by UAH/RSS, which measure temperatures through a thickness of the atmosphere.  The satellite dataset didn’t start until 1982.

Steig 09 is an attempt to reconstruct continent-wide temperatures using a combination of measurements from the surface stations shown above and the post-1982 satellite data.  The complex math behind the paper is an attempt to ‘paste’ the roughly 30 pre-1982 real surface station measurements onto 5509 individual gridcells from the satellite data.  An engineer or vision system designer could use several straightforward methods to ensure a reasonable distribution of the trends across the grid, based on any of a huge variety of area weighting algorithms; the accuracy of any of these methods would depend on the amount of data available.  These well understood methods were ignored in Steig 09 in favor of RegEM.

The use of Principal Component Analysis in the reconstruction

Steig 09 presents the satellite reconstruction as the trend, and also provides an AWS reconstruction as verification of the satellite data rather than as a separate stand-alone result, presumably due to the sparseness of the actual data.  An algorithm called RegEM was used for infilling the missing data.  Missing data includes everything prior to 1982 for the satellites and all years for the very sparse AWS records.  While Dr. Steig has provided the reconstructions to the public, he has declined to provide any of the satellite, station or AWS temperature measurements used as inputs to the RegEM algorithm.  Since the station and AWS measurements were available through other sources, this paper focuses on the AWS reconstruction.

Without getting into the details of PCA, the algorithm uses covariance to assign weightings to patterns in the data and has no input whatsoever for actual station location.  In other words, the algorithm has no knowledge of the distance between stations and must infill missing data based solely on the correlation of one data set with another.  This means there is a possibility that, with improper or incomplete checks, a trend from the peninsula in the west could be applied all the way to the east coast.  The only control is the correlation of one temperature measurement with another.

If you were an engineer concerned with the quality of your result, you would recognize the possibility of accidental mismatch and do a reasonable amount of checking to ensure that the stations were properly assigned after infilling.  Steig et al described no attempts to check for this basic potential problem with a RegEM analysis.  This paper will describe a simple method we used to determine that the AWS reconstruction is rife with spurious correlations (i.e. correlations that appear real but really aren’t), attributable to the methods used by Dr. Steig.  These spurious correlations can take a localized climatic pattern and “smear” it over a large region that lacks adequate data of its own.

Now is where it becomes a little tricky.  RegEM uses a reduced-information dataset to infill the missing values.  The dataset is reduced by Principal Component Analysis (PCA), replacing each trend with a similar-looking one which is used for the covariance analysis.  Think of it like a data compression algorithm for a picture, which uses less computer memory than the original but produces a fuzzier image at higher compression levels.
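To get a feel for how much information survives that kind of truncation, here is a minimal R sketch using a made-up ‘stations’ matrix and a singular value decomposition (the standard way of computing PCs). The matrix and the choice of 3, 7 and 20 components are invented for illustration; this is not the Steig code.

```r
# Sketch of PCA-style "compression": 600 months x 63 invented stations, built from
# one shared signal plus station-level noise, then truncated to k principal components.
set.seed(2)
months <- 600; stations <- 63
shared <- sin((1:months) / 20)                              # common "climate" signal
X <- outer(shared, runif(stations, 0.5, 1.5)) +             # each station sees it scaled
     matrix(rnorm(months * stations, sd = 0.5), months, stations)

s         <- svd(scale(X, scale = FALSE))                   # centre, then decompose
frac_kept <- cumsum(s$d^2) / sum(s$d^2)                     # variance retained vs. number of PCs
round(frac_kept[c(3, 7, 20)], 2)                            # the "fuzziness" trade-off
```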

2jeffs-steig-image2

While the second image is still recognizable, the amount of data used to represent it is reduced considerably.  This works fine for pictures at reasonable compression levels, but the data from some pixels has blended into others.  Steig 09 uses 3 trends to represent all of the data in the Antarctic.  In its full complexity, using 3 PCs is analogous to representing not just a picture but a movie of the Antarctic with three color ‘trends’, where the color of each pixel changes over time according to different weights of the same red, green and blue color trends (PCs).  With enough PCs the movie could be replicated perfectly, with no loss.  Here’s an important quote from the paper.

“We therefore used the RegEM algorithm with a cut-off parameter K=3. A disadvantage of excluding higher-order terms (k>3) is that this fails to fully capture the variance in the Antarctic Peninsula region.  We accept this tradeoff because the Peninsula is already the best-observed region of the Antarctic.”

https://i0.wp.com/www.climateaudit.org/wp-content/uploads/2009/02/regpar9.gif?resize=520%2C390

Above: a graph from Steve McIntyre of Climate Audit where he demonstrates how “K=3 was in fact a fortuitous choice, as this proved to yield the maximum AWS trend, something that will, I’m sure, astonish most CA readers.”

K=3 means only 3 trends were used; the ‘lack of captured variance’ is an acknowledgement and acceptance of the fuzziness of the image.  It’s easy to imagine how difficult it would be to represent a complex movie of Antarctic temperatures from 1957 to 2006 with any sharpness using the same 3 color trends reweighted for every pixel.  In the satellite version of the Antarctic movie, the three trends look like this.

2jeffs-steig-image3

Note that the sudden step in the 3rd trend would cause a jump in the ‘temperature’ of the entire movie.  This represents the temperature change between the pre-1982 recreated data and the post-1982 real data in the satellite reconstruction.  This is a strong yet overlooked hint that something may not be right with the result.

In the case of the AWS reconstruction, we have only 63 AWS stations making up the movie screen, and the trends of 42 surface stations are used to infill the missing data.  If the data from one surface station is copied to the wrong AWS stations, the average will overweight some trends and underweight others.  So the question becomes: is the compression level too high?

The problems that arise when using too few principal components

Fortunately, we’re here to help in this matter.  Steve McIntyre again provided the answer with a simple plot of the actual surface station data correlation versus distance.  This plot compares the similarity (correlation) of each temperature station with each of the 41 other manual surface stations against the distance between them.  A correlation of 1 means the data from one station exactly matches the other.  Every station is paired with every other station (and with itself), giving 42*42 separate points in the graph.  This first scatter plot is from measured temperature data prior to any infilling of missing measurements.  Station-to-station distance is shown on the X axis.  The correlation coefficient is shown on the Y axis.

2jeffs-steig-image4

Since the plot above represents the only real data we have going back to 1957, it demonstrates the expected ‘natural’ spatial relationship that any properly controlled RegEM analysis should reproduce.  The correlation drops with distance, which we would expect because temperatures from stations thousands of miles apart should be less related than those from stations next to each other.  (Note that there are a few stations that show a positive correlation beyond 6000 km.  These are entirely from non-continental northern islands inexplicably used by Steig in the reconstruction.  No continental stations exhibit positive correlations at these distances.)  If RegEM works, the RegEM-imputed (infilled) data should show a very similar correlation-versus-distance pattern to the real data.  Here’s a graph of the AWS reconstruction with infilled temperature values.

2jeffs-steig-image5

Compare the AWS plot above with the previous plot from actual measured temperatures.  The infilled AWS reconstruction has no clearly evident pattern of decay over distance.  In fact, many of the stations show a correlation of close to 1 at distances of 3000 km!  The measured station data is our best indicator of true Antarctic trends, and it shows no sign that these long-distance correlations occur.  Of course, common sense should also make one suspicious of such correlations, which would be comparable to data indicating that Los Angeles and Chicago had closely correlated climates.
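For readers who want to build this kind of correlation-versus-distance plot themselves, a minimal R sketch follows. The station coordinates and anomaly series are invented; only the mechanics (great-circle distance on the x axis, pairwise correlation on the y axis) carry over.

```r
# Sketch of a correlation-vs-distance plot (invented stations, not Steig's data).
set.seed(3)
n_st  <- 10
lat   <- runif(n_st, -90, -60)                      # fake Antarctic latitudes
lon   <- runif(n_st, -180, 180)
temps <- matrix(rnorm(300 * n_st), 300, n_st)       # 300 months of fake anomalies

# Great-circle distance in km between two lat/lon points (haversine formula)
gcdist <- function(lat1, lon1, lat2, lon2, R = 6371) {
  p <- pi / 180
  a <- sin((lat2 - lat1) * p / 2)^2 +
       cos(lat1 * p) * cos(lat2 * p) * sin((lon2 - lon1) * p / 2)^2
  2 * R * asin(sqrt(a))
}

pairs <- subset(expand.grid(i = 1:n_st, j = 1:n_st), i != j)
d <- mapply(function(i, j) gcdist(lat[i], lon[i], lat[j], lon[j]), pairs$i, pairs$j)
r <- mapply(function(i, j) cor(temps[, i], temps[, j]),            pairs$i, pairs$j)
plot(d, r, pch = 20, xlab = "Distance (km)", ylab = "Correlation")
```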

It was mentioned earlier that the use of 3 PCs is analogous to the loss of detail that occurs in data compression.  Since the AWS input data is available, it is possible to regenerate the AWS reconstruction using a higher number of PCs.  It stood to reason that spurious correlations could be reduced by retaining the spatial detail lost in the 3 PC reconstruction.  Using RegEM, we generated a new AWS reconstruction from the same input data but with 7 PCs.  The distance correlations are shown in the plot below.

2jeffs-steig-image6

Note the dramatic improvement over the previous plot.  The correlation decay with distance so clearly seen in the measured station temperature data has returned.  While the cone of the RegEM data is slightly wider than that of the ‘real’ surface station data, the counterintuitive long-distance correlations seen in the Steig reconstruction have completely disappeared.  It seems clear that limiting the reconstruction to 3 PCs resulted in numerous spurious correlations when infilling missing station data.

Using only 3 principal components distorts temperature trends

If Antarctica had uniform temperature trends across the continent, the spurious correlations might not have a large impact on the overall reconstruction.  Individual sites might have some errors, but the overall trend would be reasonably close.  However, Antarctica is anything but uniform.  The spurious correlations can allow unique climatic trends from a localized region to be spread over a larger area, particularly if that area lacks detailed climate records of its own.  It is our conclusion that this is exactly what is happening with the Steig AWS reconstruction.

Consider the case of the Antarctic Peninsula:

  • The peninsula is geographically isolated from the rest of the continent
  • The peninsula is less than 5% of the total continental land mass
  • The peninsula is known to be warming at a rate much higher than anywhere else in Antarctica
  • The peninsula is bordered by a vast area known as West Antarctica that has extremely limited temperature records of its own
  • 15 of the 42 temperature surface stations (35%) used in the reconstruction are located on the peninsula

If the Steig AWS reconstruction were properly correlating the peninsula stations’ temperature measurements to the AWS sites, you would expect to see the highest rates of warming at the peninsula extremes.  This is the pattern seen in the measured station data.  The plot below shows the temperature trends for the reconstructed AWS sites for the period 1980 to 2006.  This time frame was selected because it is the period when AWS data exists.  Prior to 1980, 100% of the AWS reconstructed data is artificial (i.e. infilled by RegEM).

2jeffs-steig-image7

Note how warming extends beyond the peninsula extremes down toward West Antarctica and the South Pole.  Also note the relatively moderate cooling in the vicinity of the Ross Ice Shelf (bottom of the plot).  The warming once thought to be limited to the peninsula appears to have spread.  This “smearing” of the peninsula warming has also moderated the cooling of the Ross Ice Shelf AWS measurements.  These are both artifacts of limiting the reconstruction to 3 PCs.

Now compare the above plot to the new AWS reconstruction using 7 PCs.

2jeffs-steig-image8

The difference is striking.  The peninsula has become warmer, and the warming is largely confined to the peninsula itself.  West Antarctica and the Ross Ice Shelf area have become noticeably cooler.  This agrees with the commonly held belief, prior to Steig’s paper, that the peninsula is warming while the rest of Antarctica is not.

Temperature trends using more traditional methods

In providing a continental trend for Antarctic warming, Steig used a simple average of the 63 AWS reconstructed time series.  As can be seen in the plots above, the AWS stations are heavily weighted toward the peninsula and the Ross Ice Shelf area.  Steig’s simple average is shown below.  The linear trend for 1957 through 2006 is +0.14 deg C/decade.  It is worth noting that if the time frame is limited to 1980 to 2006 (the period of actual AWS measurements), the trend changes to cooling, -0.06 deg C/decade.

2jeffs-steig-image9
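The trend numbers quoted in this section are ordinary least-squares slopes converted to degrees per decade. A minimal R sketch of that arithmetic, on a made-up monthly series rather than the actual reconstruction, looks like this:

```r
# Sketch: turning a monthly anomaly series into a deg C/decade trend.
set.seed(4)
yrs  <- seq(1957, 2006 + 11/12, by = 1/12)                    # monthly time axis, 1957-2006
anom <- 0.01 * (yrs - 1957) + rnorm(length(yrs), sd = 0.3)    # invented anomalies

trend_per_decade <- function(y, t) 10 * coef(lm(y ~ t))[2]    # slope per year x 10

round(trend_per_decade(anom, yrs), 2)                              # full 1957-2006 period
round(trend_per_decade(anom[yrs >= 1980], yrs[yrs >= 1980]), 2)    # AWS era only
```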

We used a gridding methodology to weight the AWS reconstructions in proportion to the area they represent.  Using Steig’s method, three stations on the peninsula covering 5% of the continent’s area would have the same weight as three interior stations spread over 30% of the continent’s area.  The gridding method we used is comparable to that utilized in other temperature reconstructions such as James Hansen’s GISTemp.  The gridcell map used for the weighted 7 PC reconstruction is shown here.

2jeffs-steig-image10

Cells with a single letter contain one or more AWS temperature stations.  If more than one AWS falls within a gridcell, the results were averaged and assigned to that cell.  Cells with multiple letters had no AWS within them, but had three or more contiguous cells containing AWS stations.  Imputed temperature time series were assigned to these cells based on the average of the neighboring cells.  Temperature trends were calculated both with and without the imputed cells.  The reconstruction trend using 7 PCs and a weighted station average follows.

2jeffs-steig-image11

The trend has decreased to +0.08 deg C/decade.  Although it is not readily apparent in this plot, the temperature profile from 1980 to 2006 has a pronounced negative trend.
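As a rough sketch of the gridding idea described above (illustrative cell sizes and invented station trends, not our actual code), stations are binned into lat/lon cells, averaged within each cell, and the cell averages are then combined so that a cluster of stations in one cell counts no more than a lone station elsewhere:

```r
# Sketch of gridcell weighting (illustrative cell sizes and invented trends).
set.seed(5)
n_st  <- 63
lat   <- runif(n_st, -90, -60)
lon   <- runif(n_st, -180, 180)
trend <- rnorm(n_st, mean = 0.05, sd = 0.15)           # fake per-station trends, deg C/decade

cell       <- paste(floor(lat / 5), floor(lon / 10))   # assign stations to 5 x 10 degree cells
cell_means <- tapply(trend, cell, mean)                # one value per occupied cell

mean(trend)        # simple station average: crowded regions dominate
mean(cell_means)   # gridded average: each occupied cell counts once
# A fuller treatment would also weight each cell by its true surface area.
```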

Temporal smearing problems caused by too few PCs?

The temperature trends using the various reconstruction methods are shown in the table below.  We have broken the trends down into three time periods: 1957 to 2006, 1957 to 1979, and 1980 to 2006.  The time frames are not arbitrarily chosen, but mark an important distinction in the AWS reconstructions.  There is no AWS data prior to 1980.  In the 1957 to 1979 time frame, every single temperature point is a product of the RegEM algorithm.  In the 1980 to 2006 time frame, AWS data exists (albeit quite spotty at times) and RegEM leaves the existing data intact while infilling the missing values.

We highlight this distinction because limiting the reconstruction to 3 PCs has an additional pernicious effect beyond the spatial smearing of the peninsula warming.  In the table below, note the balance between the trends of the 1957 to 1979 era and those of the 1980 to 2006 era.  In Steig’s 3 PC reconstruction, the moderate warming prior to 1980 is roughly balanced against the slight cooling after 1980.  In the new 7 PC reconstruction, the early era shows dramatic warming and the later era strong cooling.  We believe the 7 PC reconstruction more accurately reflects the true trends, for the reasons stated earlier in this paper.  However, the mechanism for this temporal smearing of trends is not fully understood and is under investigation.  It does appear clear that limiting the selection to three principal components causes warming that is largely confined to the pre-1980 time frame to appear more continuous and evenly distributed over the entire temperature record.

Reconstruction                   1957 to 2006 trend    1957 to 1979 trend (pre-AWS)    1980 to 2006 trend (AWS era)
Steig 3 PC                       +0.14 deg C/decade    +0.17 deg C/decade              -0.06 deg C/decade
New 7 PC                         +0.11 deg C/decade    +0.25 deg C/decade              -0.20 deg C/decade
New 7 PC weighted                +0.09 deg C/decade    +0.22 deg C/decade              -0.20 deg C/decade
New 7 PC wgtd imputed cells      +0.08 deg C/decade    +0.22 deg C/decade              -0.21 deg C/decade

Conclusion

The AWS trends from which this incredibly long post was created were used in Steig 09 only as verification of the satellite data.  The statistics used for verification are another subject entirely.  Where Steig 09 falls short in the verification is that RegEM effectively applied inappropriate weighting to individual temperature stations.  The trends from the AWS reconstruction have clearly blended into distant stations, creating an artificially high warming result.  The RegEM methodology also appears to have blended warming that occurred decades ago into more recent years, presenting a misleading picture of continuous warming.  It should also be noted that every attempt we made to restore detail to the reconstruction or to weight station data resulted in reduced warming and increased cooling in recent years.  None of these methods resulted in more warming than that shown by Steig.

We don’t yet have the satellite data (Steig has not provided it), so the argument will be:

“Silly Jeffs, you haven’t shown anything; the AWS wasn’t the conclusion, it was the confirmation.”

To that we reply with an interesting distance correlation graph of the satellite reconstruction (also built from only 3 PCs).  The conclusion has exactly the same problem as the confirmation.  Stay tuned.

2jeffs-steig-image13

(Graph originally calculated by Steve McIntyre)

Comments
Richard111
February 28, 2009 11:45 pm

Any chance of a PDF file pretty please?
REPLY: Here you go, hot off the press, just for you. – Anthony
http://wattsupwiththat.files.wordpress.com/2009/03/steigs-antarctic-heartburn-wuwt-022809.pdf

Editor
February 28, 2009 11:58 pm

Jeff & Jeff
I left my graduate program in 1973 because they seemed to have a fondness for what I called 99X99 Sociology – I think they call it “data-mining” today. My statistics were always atrocious…. but I think I follow your argument here. What I can’t quite get, were there any “real” observations left in the analysis or were the results based all on smoothed and filled-in numbers? What happened if a station had a value outside the range of neighboring grids? I gather that the RegEm process is iterative?

Bruce Cunningham
March 1, 2009 12:06 am

I can see another Wegman report , along with a dozen (cough cough) “independent” studies confirming Steig 09 in the future. Tighten your seat belts.

E.M.Smith
Editor
March 1, 2009 12:06 am

So much data to fabricate, so little time. Why can’t these folks (Steig, Hansen) just use the real data?
Jeff & Jeff, thank you. A wonderful exposition.
FWIW, I think a very similar thing happens in GIStemp in that the recursive application of “The Reference Station Method” will cause blurring of one climate zone into another (in particular, the raising of temperatures on the coasts by comparing them with the interior ‘reference stations’ to adjust for UHI. Coasts are heavily populated, so the rural stations will tend to be inland, and inland tends to be more volatile, yet a simple subtraction is done rather than a comparative slope or correlation coefficient). It would be fascinating to see the same distance correlation plots done on the “raw” NOAA data and GIStemp processed temps.

vg
March 1, 2009 12:11 am

Unless Steig provides the requested data, Nature should withdraw this work, or even that issue/volume. I certainly will not be submitting any work to this journal in the future if this is the way they referee submissions.

Richard111
March 1, 2009 12:17 am

Gee! Thanks. Sixteen pages, just like that!
(My wife won’t thank you 🙂 )

Lindsay H
March 1, 2009 12:25 am

a very nice piece of analysis, very good work

Leon Brozyna
March 1, 2009 12:26 am

A quick comment after reading through the foreword; I’ll read the rest in the morning after I’m fully awake and have digested my scrambled eggs.
I tried wading through the analysis done on CA and, while I think I apprehended what was done there, I’m looking forward to this write-up.
The number of problems that surfaced after peer-reviewed publication in Nature highlights the trouble with the so-called peer review process. If all the peers reviewing a paper already accept the premise and conclusion, such as global warming, they’re most likely to just scan quickly over a paper before giving it their blessing, as appears to have happened with Steig et al. This is the sort of problem I recall being addressed in the Wegman report to Congress regarding the “hockey stick.” How embarrassing it must be to have a peer-reviewed paper published and then to have significant flaws pointed out, flaws which should have been found in a robust peer review process.

Juraj V.
March 1, 2009 12:26 am

What happened with the Harry station? Which data were used for it in this reconstruction?

March 1, 2009 12:29 am

Stunning. Thanks Jeffs.

Cold Englishman
March 1, 2009 12:42 am

Other side of the world, but here we go again. Isn’t it time the BBC stopped this biased reporting? They start with the solution, then set out to prove it. I want these individuals in 2010 to be paraded by the BBC to give an explanation of why the Arctic is still frozen. The world has truly gone barmy!
http://news.bbc.co.uk/1/hi/sci/tech/7917266.stm

Manfred
March 1, 2009 1:00 am

Antarctica cooling since at least 1980 is in good agreement with the record sea ice levels of recent years.
The peninsula appears to be becoming the last resort for the AGW crowd.
When the Arctic ice recovers, I expect big travel activity to the peninsula from politicians, press, sponsored cruises for the noisiest scientists and maybe a kayaker.

eo
March 1, 2009 1:04 am

Are we sure both sides of the debate are looking at the same definition? AWS could very well stand for automatic warming station.

Annabelle
March 1, 2009 1:28 am

Thanks so much Jeff and Jeff. I’ve been trying to follow the discussion at CA but lost the plot. This explains a lot.
I’m definitely staying tuned.

vivendi
March 1, 2009 1:57 am

Anthony, thanks for providing this clear, easy-to-understand explanation of a subject difficult for non-experts. I was able to understand some of the conclusions in Steve’s and Jeff^2’s publications, but since I didn’t understand all the terms and the methods, I couldn’t get a good grip. By just spending 20 minutes reading this article, I was able to brush up and complement my basic knowledge.

March 1, 2009 2:13 am

Let me see if I’ve got this right.
1) Everyone can agree that the Antarctic has warmed between 1957 & 2006, the amount is the question?.
2) That warming is confined to the 1957 – 1979 period, the amount is near enough the same no matter what methodology is applied to calculate it?.
3) 1980 – 2006 shows a cooling, again the amount is the question?.
There are lies, damned lies & statistics?

Ceolfrith
March 1, 2009 2:25 am

Off topic, sort of
I have got into the habit of checking the images at http://igloo.atmos.uiuc.edu/cgi-bin/test/print.sh?fm=08 once a week out of curiosity.
I have found today that I can no longer access any images for 2009. Anyone know why they’ve closed all access even with the disclaimer on the site?

M White
March 1, 2009 2:28 am

“I want these individuals in 2010 to be paraded by the bbc to give an explanation of why the arctic is still frozen”
We may have to wait a bit longer
“Currently, he has it down for 2013 – but with an uncertainty range between 2010 and 2016.”
http://news.bbc.co.uk/1/hi/sci/tech/7917266.stm
From Arctic ice modeller Professor Wieslaw Maslowski

michel
March 1, 2009 2:30 am

Once again we come up against the basic question: why will you not release the satellite data?

March 1, 2009 2:40 am

Since originally finishing these calculations, I’ve calculated the video from the 3 PC’s of the Antarctic temperature anomaly according to Steig. It’s a bit difficult to interpret but I found it very interesting.
http://noconsensus.wordpress.com/2009/02/28/a-little-bit-of-magic/

Juraj V, the new improved Harry was used.

Mac
March 1, 2009 2:47 am

Lessons to be learnt.
1. Less data/more statistics gives us a warming trend.
2. More data/less statistics gives us a cooling trend.
The only thing that can be claimed is that since 1980 Antarctica has been cooling.

D. King
March 1, 2009 2:59 am

Wow! As you know, there are two places where sensor results can be affected: the input, where errors propagate through the collection and processed results, and post-collection, where the processed results are affected by the processing. It troubles me that your investigations are showing consistent errors in the direction of warming. There is a third possibility for errors, and that is the data itself somehow being corrupted. The hottest years of the last century and the hockey stick come to mind for data corruption errors. These also showed a warming. The troubling thing about the recent satellite sensor failure is that the failure was not total. Data was still being produced, and the conclusions being drawn were in the direction of warming and loss of sea ice area. With this new study of Antarctic AWS data processing anomalies, which also shows warming, it’s time to call this duck…. A Duck! Policies are being implemented that will impact millions of people worldwide, some in very devastating ways. Is there no international body of honorable scientists that can review results and present their conclusions before these Draconian, and arguably cruel, policies are implemented?

March 1, 2009 3:06 am

Wouldn’t it be reasonable to make hand adjustments according to which stations are, from a meteorological perspective, supposed to be correlated? (E.g. on the same side of a rim, or in the same general circulation flow?) It certainly wouldn’t be any more arbitrary than Steig’s method.

Phil
March 1, 2009 3:32 am

Great article.
It’s great that we have people like the two Jeff’s keeping track and debunking these bogus reports. I think a lot of people including myself would be lost in all the statistics that we are presented with without such informative articles. Must admit, when I heard the BBC report of the central Antarctic warming it sounded like BS.
Steig et al must be burning the midnight oil to come up with an answer to this.

March 1, 2009 4:19 am

Nice job. The only thing missing is the margin of error, which I think is a hoot. With RegEM PC 3 they have a 95% confidence level that their trend results are accurate to +/- what was it? 55%? So the Jeffs’ much smaller trends still fall within the error range of the paper.
That is what I find ridiculous. Why would Nature even publish, much less put on the cover, this study?

Keith Gelling
March 1, 2009 4:32 am

Well done. A nice piece of analysis.

Ron de Haan
March 1, 2009 4:36 am

Many thanks for this very nice article.
[snip ad hom]
http://www.nytimes.com/2009/03/01/science/earth/01treaty.html?_r=1&partner=rss&emc=rss
What are we going to do about that?

March 1, 2009 4:38 am

Jeff C and Jeff Id, what are the results if some of the data are removed; say 5 or 10 % randomly chosen? Shouldn’t the results be ‘about’ the same, where ‘about’ is kind of fuzzy?
A WAG on my part and very likely not useful.

Chris H
March 1, 2009 4:50 am

A really great post/article, and not *that* long when compared to some RC or Air Vent posts 🙂

Bernie
March 1, 2009 5:00 am

Many probably have already been tracking this discussion here and at CA and elsewhere. What strikes me, besides the very clever use of a mass of different statistical and graphical tools is the quality and tone of the debate. The commitment to transparency of methods and the sharing of code stands in sharp contrast to the obfuscation of Steig et al and Gavin who is apparently their surrogate. I am sure they are following this work closely and are probably somewhat unnerved by the findings, the level of effort, the sheer horsepower that is being targeted on this topic.
Congratulations to the Jeff’s for a clear exposition, Anthony for making it available and all those other contributors here, at CA and elsewhere. The satellite data will come in time and with it red faces among the “professionals”.

Allen63
March 1, 2009 5:19 am

Outstanding contribution! Very concrete, clear, and, as far as it goes, convincing. Now, the opposition has the task, should they accept it, to refute by means of an equally clear exposition.
We need more articles with this level of exposition, not less. The blogs are full of “unsubstantiated opinions” . Back and forth discussions with this level of “substantiation” could lead to genuine improvements in public understanding of AGW issues.

Pierre Gosselin
March 1, 2009 5:25 am

[snip off topic – trying to keep this thread centered]

chris y
March 1, 2009 5:38 am

Jeff and Jeff, thanks for this very interesting analysis. I am interested in the trends calculated using the 7 PC weighted and weight/imputed cells. These give similar positive trends for 1957-1979, and similar negative trends for 1980-2006. The positive and negative trends have almost identical absolute values of 0.21 C/decade. Yet the total trend for 1957-2006 comes out at +0.08 C/decade. Is this a consequence of the error bars of the resulting trends? End effects in the data? Or something else?

Pierre Gosselin
March 1, 2009 5:47 am

Now to this post –
Thanks for the big effort on our behalf in presenting this in an understandable way. I look forward to reading it later this evening.
This really should be featured at the NIPCC Convention in NY to further underscore the sloppiness behind the global warming movement.
The sloppiness just further confirms the thin ice the AGW theory is based on.

Juraj V.
March 1, 2009 5:50 am

“Juraj V, the new improved Harry was used.”
Thanks. I think it is worth publishing in some official way.

pyromancer76
March 1, 2009 6:03 am

Thanks Anthony, Jeff, and Jeff. I will be able to understand a bit more as I ask my brain to form new neurons for this science. The old neurons and networks have been greatly taxed by trying to wrap themselves around the depth of highly organized [snip] by people/organizations/publications I used to trust. [snip]

Allen63
March 1, 2009 6:03 am

By the way, why exactly 7 PC?
Why not use as many PCs as possible to get the best fit possible? I.e. why is more not better, in this particular case? For example, McIntyre’s plot seems to indicate that 32 would make a significant difference to the result.
If there is an optimum (or merely correct) number of PC to use, how is it “objectively” determined?

Steven Goddard
March 1, 2009 6:13 am

UAH shows a downwards trend in Antarctica of nearly 1C/century over the last 30 years. Nobody should be using sparse ground based data there. Antarctica is the same size as the US, and I’m pretty sure that you can’t accurately interpolate the temperature in western Kansas from thermometers in downtown Houston and Phoenix.
http://spreadsheets.google.com/pub?key=pj0h2MODqj3gMXQwEtd2uXg&oid=7&output=image

March 1, 2009 6:21 am

It is a well known fact that temperatures along Chile and the Antarctic peninsula have never corresponded to temperatures at the same latitudes in the northern hemisphere, because of the great extent of the Pacific Ocean. As always, GWers choose the warmest month of the year on the Antarctic peninsula, February (temperatures reach up to 2°C above zero), to issue their “convenient” studies.

March 1, 2009 6:43 am

And remember:
http://wattsupwiththat.com/2008/01/22/surprise-theres-an-active-volcano-under-antarctic-ice/
That mountain chain, which is a prolongation of the South American Andes, is active again, as shown by the continuous eruption of the Chaiten volcano in Chile.

March 1, 2009 7:01 am

Thanks a million Jeff & Jeff. So lucid, so refreshing to get first-rate science again, finally confirming what I’d suspected in reading CA where it was sometimes like trying to read Chinese (no offence meant!).
Now here’s a paper for publication… and… go for another first, online peer-review? or rather, Craig Loehle’s already done that hasn’t he?
Now Antarctica looks like what it used to look like in 2004 (my page to help folk grasp Polar realities, all the way up to Steig), it fits what Svensmark would predict, both before and after 1980. With this, another detail comes to mind. I’d expect the mid-continent areas to show colder cold, and more fluctuation, than all coastal stations. Could that fit too? Finally: any instrument siting data issues (UHI sort of)? Have such been checked? especially with so few sites.

Clive
March 1, 2009 7:02 am

Two Jeffs. Thank you for this summary. We have a local “letter to the editor” writer who is convinced the Antarctic will soon be gone. Good ammo.
Pierre Gosselin: RE: the DC protest … the forecast is hilarious:
http://www.wunderground.com/cgi-bin/findweather/getForecast?query=washington%20DC%20&wuSelect=WEATHER

March 1, 2009 7:07 am

And remember too: along those mountains we pay a gas tax of about 50%. You were used to paying up to $4 per gallon last year, so it would be advisable (:)), the sooner the better (in order not to get accustomed to lower prices), to establish such a tax. (I guess this is what is behind the green agenda and, of course, Hansen´s “history march”.)

Manfred
March 1, 2009 7:21 am

Gallon (02:13:22) :
“Let me see if I’ve got this right.
1) Everyone can agree that the Antarctic has warmed between 1957 & 2006, the amount is the question?.
2) That warming is confined to the 1957 – 1979 period, the amount is near enough the same no matter what methodology is applied to calculate it?.
3) 1980 – 2006 shows a cooling, again the amount is the question?.”
The fourth picture shows that taking 3 or even 7 PCs still introduces a warming bias.
Taking all PCs (why should anyone not use all the information?) should further lower the trends.

March 1, 2009 7:27 am

In Canada, in the province of Quebec, the Government of Quebec must feed white-tailed deer for a second year because there is too much snow. The news in French: http://droitemonde.blogspot.com/2009/03/une-autre-mauvaise-nouvelle-pour-nos.html

John Philip
March 1, 2009 7:35 am

[snip – off topic, trying to keep this thread centered ]

HasItBeen4YearsYet?
March 1, 2009 7:39 am

QUESTION(s)
(1) What result would they get for only the regions where sensors exist, without any statistical reconstruction? It would seem that if they warmed, or didn’t, in the same way the derived changes did, that might give us some idea whether the reconstruction was at least plausible. I mean, if they don’t show any change, or very little, and the generated data do, I would be very suspicious, as I would if the proportion of change inferred was much greater than that observed.
(2) Is there anything like a control, like performing that hat trick using US monitoring stations selected to border a large region of the US to model the trends within the border, then compare the computed result with what actually happened?

Dr. Bob
March 1, 2009 7:45 am

Thanks for pulling this together. I have a reasonable statistical background so I understand a good deal of this argument. What I do not see discussed is a presentation of the Null Hypothesis (H0) and an analysis of whether or not the data/model supports or refutes the hypothesis at a statistically significant level.
For the Antarctic, the Null Hypothesis should be something like this: “Over the time frame where temperature data is available, there is no discernable trend in temperature change.” This would allow testing the alternative hypothesis that there is a trend in the data statistically different from “0”. That trend could be up or down. Statistical significance of the alternative hypothesis would determine if the null hypothesis would be rejected.
Simply looking at the available data presented in this post leads one to believe that the null hypothesis cannot be refuted by the data. Perhaps this is true for both the Steig data and the McIntyre data.
Another issue that could be discussed is the use of models of data to identify outliers. Thus a reasonable model of the Antarctic data could identify either data within a station that does not fit modeled trends or identification of stations themselves that appear to be outside of modeled trends. Investigation of outlier data can lead to interesting conclusions. The simple case would be recalibration of an instrument or resiting to a better location. The complex case would be discovery of an unexpected fact.
Finally, as George E. P. Box is quoted to have said, “All models are wrong, but some are useful.” Thanks to diligent efforts by McIntyre and many others, I believe we can say that the Steig model is both wrong and not useful.

evanjones
Editor
March 1, 2009 7:48 am

What are we going to do about that?
Judging by past experience, we will sign it and abide by it to about an 80% level. The RoW will sign it and blithely ignore it. The news you will read about it will be about how the US is 20% in violation.
OTOH, I’d rather the RoW ignore it than abide by it, because if they do not ignore it, millions of babies will starve, which is a result I do not favor most days.

March 1, 2009 7:49 am

[thanks, noted, but off topic. trying to keep this thread centered ]

Jeff C.
March 1, 2009 7:58 am

chris y (05:38:11) :
“These give similar positive trends for 1957-1979, and similar negative trends for 1980-2006. The positive and negative trends have almost identical absolute values of 0.21 C/decade. Yet the total trend for 1957-2006 comes out at +0.08 C/decade. Is this a consequence of the error bars of the resulting trends? End effects in the data? Or something else?”
This is a good question and was something I investigated when writing this up. Applying trend lines is tricky and can be subject to manipulation and cherry picking of start/end dates. We chose the two periods (1957-1979 and 1980-2006) because there is a clear distinction between these two time frames. Prior to 1980, every single data point is imputed from the surface stations (no true measured values, and the AWS did not exist). After 1980, the data includes real measured temperatures from the AWS, and RegEM infills the missing dates.
If you view the plot below, you can see there is a step in the values around the 79/80 breakpoint. I believe this is caused to some extent by RegEM, but I don’t fully understand it. However, it is clear that there is a pronounced downward trend from 1980 on. This coincides with the inclusion of actual measured temperatures in the AWS reconstruction.
http://i404.photobucket.com/albums/pp127/jeffc1728/7PCwghtdreconwithtrends.jpg

John F. Hultquist
March 1, 2009 8:15 am

[snip, noted thank you, but this is off topic and I’m trying to keep this thread centered]

March 1, 2009 8:17 am

Dr. Bob,
We didn’t attempt to test the significance of the trend. Any test of significance is usually based on the variance of the output. If we had some kind of significance test for the quality of the method I think it would be more meaningful. From memory, Steig reported a +/- 0.07 C/decade trend with a 95% significance repeatedly stated to be highly robust on RC. We tweaked the method a bit and got a 0.06C/Decade drop in trend. It isn’t like we looked for ways to reduce the trend either, we just made an effort to not overweight concentrations of station data. Perhaps 8 PC’s is the way to go and it might make the trend even flatter.
Allen63,
The plot by SteveM used his own method for getting the expectation maximization to converge. I’m not sure if Jeff C has tried it, but I couldn’t get convergence from the algorithm much beyond 8 PCs; this is also a very interesting point to me, but I haven’t proven out why. I could almost do a post on it now. If you notice, SteveM’s method has a flatter slope than RegEM at 7.

Wyatt A
March 1, 2009 8:21 am

Anthony or Jeff or Jeff,
Can anyone point me to a good URL to get a better understanding of Principal Component Analysis?
Thanks,
REPLY: Here are a few – Anthony
http://www.cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf
http://www.snl.salk.edu/~shlens/pub/notes/pca.pdf
http://en.wikipedia.org/wiki/Principal_components_analysis

Jack Simmons
March 1, 2009 8:21 am

It would be great fun to use the same methods to project the temperature trends of North America by using only the data collected in a peninsula known for its warmth and a few data points, around 30, from the interior.
Using Baja California, it would be interesting to see what the correlations would be for weather stations in Calgary, Chicago, and so forth. Then, show what these calculations would come up with for the missing data in say, Denver.
Great article.

Pamela Gray
March 1, 2009 8:23 am

Love the idea about using a control. Would Australia work? It is also surrounded by water. How about choosing stations near the Arctic using the same distances and spatial pattern? Granted, that area is surrounded by land, so that is a variable that will change what you get. But still, it seems reasonable to me to produce a known control to show just how far off the back-filling method can be compared to the actual data log from all the stations.

March 1, 2009 8:44 am

It may be instructive to study the Wiki entry on PCA here:
http://en.wikipedia.org/wiki/Principal_components_analysis
In essence, PCA transforms simple data sets into linear combinations of data sets. X, Y, and Z become D = aX + bY +cZ. Thus new imaginary data is created. The coefficients of the linear equations are selected mathematically:

PCA is theoretically the optimal linear scheme, in terms of least mean square error, for compressing a set of high dimensional vectors into a set of lower dimensional vectors and then reconstructing the original set.

The imaginary data was used instead of the real data in Steig’s original paper and in the paper above by “the two Jeffs” (I apologize for not knowing who Jeff C is).
For many the use of imaginary data is sufficient and satisfying, but not for me. I prefer real data, not the imaginary kind. I like my data raw, uncooked, and not jiggered into fanciful linear combinations.
There are always “missing data” in any study. Nobody (nor any machine) can collect all the possible readings from every point in space and time. There is always measurement error, to some degree. Such are the realities of empirical science.
Those limitations do not justify the use of imaginary data. I am sorry to see so much hoorah about imaginary trends from imaginary data. I am sure all the authors and all the readers of these studies have much better things to do with their time.
Such frivolity and foolishness would be more tolerable if the imaginary science were not being used to impose Draconian economic and social oppression in the real world.

robert brucker
March 1, 2009 8:52 am

[take this issue to another thread please]

Pamela Gray
March 1, 2009 8:54 am

See the following:
http://oceancurrents.rsmas.miami.edu/southern/antarctic-cp.html
Near the bottom of the article, it describes a warm pool/cold pool oceanic wave connected to the Antarctic Current that stretches anywhere from 8 to 10 years in oscillation, but could be shorter or longer because it hasn’t been studied for very long. This wave has a major effect on SST and land temps, as well as precipitation, in the higher SH latitudes, including the Antarctic.
hmmmmm
I would want to graph any Antarctic peninsula temp changes to this wave current. Of all the places that this wave would most likely cause a temp fluctuation, it would be in and around that peninsula.

March 1, 2009 9:04 am

A small but important note: next time please include page numbers in long papers! It makes discussion about the document much easier!

dearieme
March 1, 2009 9:07 am

“It would be great fun to use the same methods to project the temperature trends of North America by using only the data collected in a peninsula known for its warmth and a few data points, around 30, from the interior.” I agree, Jack, but we must be sure to include data from a few neighbouring islands – e.g. Bermuda, Hawaii and Cuba.

Jeff C.
March 1, 2009 9:18 am

Allen63 (06:03:48) :
“By the way, why exactly 7 PC?”
Let me follow up on Jeff Id’s reply. RegEM runs in Matlab, which is pricey. Steve McIntyre wrote a limited version for R (statistical analysis software in the public domain) to allow people who don’t have access to Matlab to experiment with the program. As Jeff mentioned, the Matlab version doesn’t work much beyond 8 PCs. I also ran this reconstruction with 8 and got virtually the same results as with 7 PCs.
We believe 7 PCs is a reasonable number to use based on the distance correlation plots shown above. When we first started looking at this, we thought RegEM was not sophisticated enough to properly correlate the temperature series without explicit site-to-site distance information. The 3 PC distance correlation plot seemed to confirm this belief. Once we started testing higher numbers of PCs in RegEM, we made a startling discovery. RegEM could perform proper distance correlation, but when limited to 3 PCs, it was throwing that information away.

March 1, 2009 9:34 am

Anthony, thank you for hosting such a fascinating forum, for making available more lyrical expositions of pithy data, and also for your Resources and Glossary sections. I appreciate being able to send my less mathematical friends to WUWT. If only we could get our legislators to read WUWT…
To the Jeffs. More, please! Wonderful stuff.

Bernie
March 1, 2009 9:36 am

Pamela and Jack:
Perhaps one could use Tasmania as the equivalent of the Antarctic peninsula. An additional benefit is that it has roughly the same shape as Antarctica and a similar distribution of weather stations (see Steve McIntyre’s discussion at CA on shapes and PCA methods).

RickA
March 1, 2009 9:37 am

Wow. Very impressive work Jeff C and Jeff Id. I have been following along and it has been more fun than a good mystery novel. Keep up the good work.

Paul Maynard
March 1, 2009 9:43 am

This may seem trite but I agree with Mike D.
When the hockey stick first appeared in TAR, surely the first reaction of the IPCC, all scientists and indeed historians should have been “how strange this result is when set against the plethora of recorded history since the Egyptians”. Instead, they swallowed it hook line and sinker and it required M&M to trash the stats rather than for everyone to observe the obvious inconsistency.
When Steig and the gang produced their results, they could just have looked at Gistemp for the South Pole
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=700890090008&data_set=1&num_neighbors=1
Then they could have debated the theory that allowed a warming trend except at the South Pole. Of course that would have been really inconvenient.
Regards
Paul

Pamela Gray
March 1, 2009 9:45 am

I don’t have much quibble with that peninsula area of Antarctic warming up. Several factors are at play. Temperature station growth, urban heat, seabed volcanic activity, and oceanic oscillation patterns. If my AIMS memory serves me, Antarctica is not exactly a hotbed of atmospheric CO2. If it is being touted as an indicator of global pollution and warming due to CO2 in the atmosphere, tell me how that is so in Antarctica?

Jeff C.
March 1, 2009 9:53 am

“I apologize for not knowing who Jeff C is”
No apology needed. My background is systems engineering in a technical field unrelated to climate science. One of my primary roles is the development of verification algorithms. These are techniques for post-processing measured test data in order to extrapolate information for parameters beyond those we can easily measure. Sort of like climate science where you get to create data where it doesn’t exist. The difference being if we get it wrong the product fails and people get fired.

Ken Hall
March 1, 2009 10:05 am

Phil: “Steig et al must be burning the midnight oil to come up with an answer to this.”
——————————
If only that were necessary. Unfortunately for truthful and honest reportage, the mainstream media will report Steig et al as unquestionable truth and not even touch the counter reports such as this with a barge-pole.
So as far as the layman in the street is concerned, Steig et al is accurate, correct and beyond question.
Steig will not lose sleep over his theory being shown to be based upon much faulty and unreliable science, because the mainstream media and peer-reviewers do not care if the science is shoddy and wrong. So long as they get the headlines, they are happy.

REPLY:
Barring their acceptance of publishing a rebuttal, perhaps we should consider a full page ad in Nature. I think we could garner enough financial support from readers to make that happen. – Anthony

Scottie
March 1, 2009 10:06 am

My knowledge of Higher Math would fit on the back of the proverbial postage stamp but I think I’ve managed to follow this one, so thanks, Jeffs, for making it understandable to a thicko!
I don’t want to accuse anyone of dishonest science (no, seriously) but it worries me that Steig reports only a rising temperature trend when there seems to be ample evidence from all sorts of other sources that we can expect 30 years of rising temps followed by 30 years of falling temps which seems to be pretty much what his research and this paper shows.
I’m not naive enough to think all scientists are perfect but what is the gain in the long-term for any of us — ‘Warm-mists’ included — if we are caught “looking the wrong way” when the temps turn down?
BTW, I fancy the idea of using the US as a control. At least you could guarantee to be able to cross-check theory against reality.

Robert Bateman
March 1, 2009 10:14 am

This is a case example of attempting to reconstruct data to a high resolution in which no consistent high resolution data was taken. While the end product is an image, it is still a DIGITAL image composed of data. Filling in gives you more of the same blur of missing data. What would be far more appropriate here is to use a median filter to put as much data points in from actual data sets, rather than strict interpolation.
The bottom line is that an image is simply a graphical representation of the data set, operated on or not. If the operation is a kludge, the image shows it.
When somebody comes along with some advanced image restoration techniques such as drizzle, SWARP, MaxEnt, LR or others, then we’ll have a better handle on what’s going on. In the meantime, blurred images don’t have much to say.

DAV
March 1, 2009 10:29 am

Interesting article. Well, done.
The underlying assumption in the EM algorithm is that data dropouts are normally distributed, i.e., non-systemic. It is effectively stereotyping. Filling in a station that never was is like determining how a person would answer — if only he had been born — by using the demographics of presumably surrounding hypothetical neighbors. We all should know the dangers of stereotyping.
The process works well for photographs — even for systemic errors such as dead or hot pixels — because the eye will average them anyway. The value of each pixel is small compared to the overall image. It only works because 1) you just need the average and 2) there are a large number of surrounding pixels.
The process fails miserably for gaping holes.
RegEM is essentially the equivalent of normalizing then rotating the data for maximum variable separation by minimizing cross-correlations. Using RegEM might mean your guesses are more refined when applicable but it can’t perform magic, either . The end result is still a guess.
I’m reminded of a short-lived program a few years ago about the CIA (“The Agency,” IIRC) where, in one episode, technicians completely reconstructed a face from what couldn’t have been more than 16 pixels obtained from a reflection from the back of a rearview mirror imaged by a liquor store security camera. Pure Fantasy.

John Peter
March 1, 2009 10:33 am

I have been following this and other articles revealing faults in the calculations etc. in papers, particularly those submitted by AGW supporters, with increasing degrees of incredulity. When I worked for a Scottish company owned by a US corporation we were accused of manufacturing missile carriers for Iraq. A US government auditor was sent over and found no such evidence. Is it not possible for a relevant US government auditing department to send in neutral observers to investigate potential fraud or consistently reported manipulation of evidence in organisations either owned by or obtaining funding from US government funds? Surely NASA would be a case in point.

Julius StSwithin
March 1, 2009 10:33 am

Excellent work. Thanks to all of you. A few brief comments:
1. You would have thought that after the Mann/Wegman sequence Nature would have been more circumspect.
2. I am surprised that the inter-station correlation drops to zero with large distances. Working with similar data in other parts of the world I have found that R2 drops to a low (0.1 to 0.3) value and stays there for all distances. This residual value reflects the fact that all stations have similar seasonal patterns.
3. There is one general point the two Jeffs might like to comment on which I have not seen discussed elsewhere. In any regression of Y = a1.X1 + a2.X2 + … +b calculated values of Y will have a lower variance than observed Y. This is clearly seen in PC3 above but applies to all proxy data sets.
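On point 3, a quick numerical sketch of that variance shrinkage (illustrative Python with synthetic data, nothing to do with the actual reconstruction):

import numpy as np

rng = np.random.default_rng(0)

# Synthetic predictors and a noisy response
n = 1000
X = rng.normal(size=(n, 2))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=2.0, size=n)

# Ordinary least squares: y_hat = a1*X1 + a2*X2 + b
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Fitted values always have lower variance than the observations,
# because the residual variance is left out of y_hat.
print("var(y):", round(float(np.var(y)), 2), " var(y_hat):", round(float(np.var(y_hat)), 2))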

Bob Buchanan
March 1, 2009 10:42 am

Paul M
I looked at the GISS plot you linked.
Something very weird is going on since there is an obvious (to me anyway) step increase in year-to-year variability about the same time we started getting satellite data.
What’s going on?
Regards
Bob B

P Folkens
March 1, 2009 10:42 am

1) Providing PDFs of important contributions like this one raises the value of WUWT to a new level. Thank you!
2) This work should be submitted to Nature under “correspondences” or as a full-tilt rebuttal (if not done already). If it is rejected, a scathing rebuke of Nature is warranted.
3) President Obama has thrown down the gauntlet with the new budget, which includes the carbon cap-and-trade scheme. Perhaps a summary of “The Jeffs” article, linked to the complete article, needs to be sent to everyone’s member of Congress, emphasizing that a more balanced scientific review is warranted before imposing a $600 billion+ new tax on the country in the midst of a major economic recession.

Terry
March 1, 2009 10:47 am

Steig’s Mona Lisa appears to have grown a moustache… 🙂 Very nice work, Jeffs, thanks for summing up several weeks of effort into a succinct, easily understandable analysis.

March 1, 2009 11:03 am

One more thought.
With such rich material appearing from Jeff & Jeff (to be known as JJ 09 in future??) I dream again of the wiki we climate skeptics owe it to ourselves to put together: to write gold-standard pieces like this one, that we all know and refer to, that deconstruct the Hockey Stick and all the rest, and are comprehensible like this is, to get the word out about the real science. And couldn’t it be fun, too, if we can do it together?

Stephen Brown
March 1, 2009 11:10 am

[snip – noted, thank you, but off topic (Arctic); trying to keep the discussion centered on the Antarctic]

Rocket Man
March 1, 2009 11:21 am

Thank you Jeff, Jeff and Anthony for spending your own time to do this analysis. It is too bad you don’t get paid by “Big Oil” (or by anybody for that matter) to do it, unlike Steig and the Team who get paid by “Big Government” to do their work.
In my opinion, what this analysis shows is the futility of trying to measure global temperatures (or even continent wide temperatures) with ground based measurements. GISS Temp has a lot of the same problems. Sure, you can use the data to do an analysis, but the results will be strongly dependent on what methodology you use. And without knowing the “actual” temperature trends, which of course is what you are trying to find, it can never be known if the methodology used gives you an accurate picture of what is actually happening.
What Steig and company should have done is to release a paper showing a representative number of different methodologies (not all of them, because that would be an infinite number) and their results, and then present an argument as to why the one they chose is the best one.
Of course the best way is to quit using ground based measurements as primary sources of temperature data and use exclusively satellite data. With all the money spent trying to interpret ground based data, we could launch a couple more satellites and get high quality, wide coverage data of the entire planet.

March 1, 2009 11:34 am

The difference being if we get it wrong the product fails and people get fired.
Thank you, Jeff C. I appreciate your candor. I like extrapolatory algorithms, too, in their proper place.
I wish it were not the case, but good people, friends of mine, have lost their jobs because of GW alarmism.
In my own field, forestry, vast tracts of heritage forests are being incinerated (millions of acres per year) with massive deleterious effects on vegetation, habitat, watersheds, airsheds, public health and safety, homes, lives, etc. Those catastrophes are blamed on phony global warming, a paltry excuse for failing to do the active stewardship required.
We face a runaway government heck-bent on imposing cap-and-stifle carbon taxes, also justified by scientifically defective GW alarmism.
These wholly preventable real world tragedies and injustices bug me no end. I appreciate your efforts to debunk the bunkum. I wish we could do more to stop the actual active and future tragedies incited by the GW claptrap.

timbrom
March 1, 2009 11:35 am

Has anyone calculated the energy required to melt all the ice in Antarctica and then compared that with the available energy transferable to the continent? At a rough guess I’d hazard that it would take a couple of years, at least.

Just want truth...
March 1, 2009 12:14 pm

Anthony
“REPLY:Barring their acceptance of publishing a rebuttal, perhaps we should consider a full page ad in Nature. I think we could garner enough financial support from readers to make that happen. – Anthony”
I’m in.

March 1, 2009 12:15 pm

Count me in too.

Aron
March 1, 2009 12:15 pm

There have been people looking for the cause of the peninsula’s warming. Some have suggested volcanic activity. There are other suggestions too.
What I have not heard yet is what about the winds that blow over from South America. Could wind be bringing some accumulated heat from the many urban heat islands in South America? If so, then the warming is not caused by climate change but by atmospheric temperature contamination from another continent.

J. Peden
March 1, 2009 12:16 pm

Many thanks, again, Jeffs, and your post wasn’t really that long.
If I place more thermometers on my one acre, do I now own more acres?/sarc
As already noted, it should be interesting to see what Nature does. This is perhaps Nature’s moment of truth. Why don’t they just admit that their peer review is not an audit or a guarantor of truth – or something?

Aron
March 1, 2009 12:25 pm

OK, I’ve looked at the direction the Westerlies (winds that flow from west to east) usually take from South America, and they do pass directly over the Antarctic Peninsula. We need more attention paid to this because it seems that the temperature monitors are simply being contaminated by warmer winds from South America.

Policyguy
March 1, 2009 12:30 pm

Would someone please parse this acknowledgment that there is a disadvantage to stopping at K=3?
“We therefore used the RegEM algorithm with a cut-off parameter K=3. A disadvantage of excluding higher-order terms (k>3) is that this fails to fully capture the variance in the Antarctic Peninsula region. We accept this tradeoff because the Peninsula is already the best-observed region of the Antarctic.”
This appears to be doublespeak. It seems to me to say that the authors are sacrificing greater accuracy on the peninsula in order to see greater clarity in the rest of the continent. But if it is true that we know more about the peninsula, why shouldn’t that knowledge be used to verify information elsewhere?

HasItBeen4YearsYet?
March 1, 2009 12:54 pm

Rocket Man (11:21:30) :
I would think we would still need some ground based stations for calibration purposes, just to keep the satellites honest.

DAV
March 1, 2009 12:54 pm

Wyatt A (08:21:13) : Can anyone point me to a good URL to get a better understanding of Principle Component Analysis?
The problem with most PCA discussions is that they are fairly thick unless you are good at seeing mathematical relationships. I personally think in images. Anthony’s second link has a good illustration at Fig 2a. (http://www.snl.salk.edu/~shlens/pub/notes/pca.pdf). Here’s my mental image:
A multivariate vector is a multidimensional vector where each variable defines an axis. If the variables are uncorrelated, they will be orthogonal to each other. For a number of reasons that I won’t go into, anything else is often undesirable. The goal of PCA is to change the data coordinates to an orthogonal set.
It does this by placing axes centered on the average variance. For it to work properly, the variance has to be normalized. If you look at Fig. 2a, the original variables (Xa, Ya) are highly correlated. The largest variance extends along the diagonal between the two and the smallest is 90 degrees from that. So two new axes (variables) are generated. It is customary to label them in order of descending contribution. In Fig 2a, PC1 is the longer line and PC2 is the shorter. In the Wikipedia article Anthony linked under Derivation of PCA using the covariance method, this corresponds to having a covariance matrix with the only non-zero values on the diagonal.
To use the PCA on the data, one rotates and translates the data to the PC coordinate system.
Note that the new coordinates may or may not have any physical interpretation. It is often hoped that PCA will separate the individual contribution (signal) of each variable to the observed data but that can only be proven outside of the PCA. Likewise, the PCA may or may not have predictive power or usefulness in obtaining a prediction.
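If it helps, here is a minimal numerical sketch of that rotation on two made-up correlated variables (Python, illustrative only):

import numpy as np

rng = np.random.default_rng(1)

# Two highly correlated synthetic variables, like Xa and Ya in Fig. 2a
xa = rng.normal(size=500)
ya = 0.9 * xa + 0.3 * rng.normal(size=500)
data = np.column_stack([xa, ya])

# Centre the data, then find the orthogonal axes of maximum variance
centred = data - data.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centred, rowvar=False))
order = np.argsort(eigvals)[::-1]            # PC1 first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Rotate the data into the PC coordinate system
scores = centred @ eigvecs

# The new variables are uncorrelated: off-diagonal covariance is ~0
print(np.cov(scores, rowvar=False).round(6))
print("variance along PC1 vs PC2:", eigvals.round(3))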
HTH

a jones
March 1, 2009 12:59 pm

This is an excellent piece of work which highlights the dangers of the modern fashion for using statistical reconstructions which are open to interpretation: not least because the method used can be chosen to produce a desired result.
It doesn’t only happen in climate studies either.
Now this may be OT, if so snip, but what I find of particular interest, and never known before this analysis, is that there was a warming trend in the Antarctic followed by a cooling trend which seems to be the inverse of that in the Arctic.
This may be coincidence and mean nothing at all.
But the Arctic was cooling until 1979 which is why that date is used for ice extent data, because the ice was then at its maximum. Actual satellite data goes back to 1974 when the ice extent was rather less.
Similarly, we know that the Antarctic sea ice extent and season (the period for which the sea ice persists in seasonally open water) have increased since the late 1970s; the season is about four weeks longer today than back then.
The speculation that there is an oscillation of temperature between the two poles with one warming whilst the other cools and vice versa is old: and essentially based on sea ice records.
Yet here we have a new source of data to match the temperature data in the Arctic: and it shows just such an inverse relationship.
It also goes to show how important it is to use statistical techniques to reveal what is actually happening rather than to support some preconceived idea.
Because I will take a small bet that neither of the Jeffs, ably assisted by Steve, knew that the outcome of their analysis might reveal either this rather interesting fact, or that the said fact might possibly help to confirm that there may indeed be an inverse temperature relationship between the poles.
Fascinating.
Kindest Regards

Norm in the Hawkesbury
March 1, 2009 1:20 pm

Excuse a poorly educated old man who can only understand by reading a lot, relying on the resultant osmosis of knowledge and intuition.
It looks to me like there are two separate climatic areas in the Antarctic: the peninsula and the rest.
Could we do an extrapolation of each area individually, note the variance and then work out the cause?
I am led to believe the peninsula has a warmer earth crust below it than the majority of the continent. Would that not be something like the inverse of including Alaska in the US mainland figures? They are not really alike.
Also, given that the peninsula protrudes into the ocean, wouldn’t it be affected by the prevailing weather patterns from the ocean?

Jeff Alberts
March 1, 2009 1:21 pm

I’m so proud to be a Jeff! 😉 Too bad I’m not nearly as smart as these guys. 🙁
I still maintain that infilling ANY temperature data cannot be rationalized. Unless the working and non-working sensors are within a couple hundred meters of each other, they will tend to have different weather.

thefordprefect
March 1, 2009 1:23 pm

No infill. No reconstruction. Just the data:
1971 2000 temperature trends from British Antarctic Survey
http://www.nerc-bas.ac.uk/public/icd/gjma/reader.temp.pdf
1951 to 2006 temperature trends
http://www.nerc-bas.ac.uk/public/icd/gjma/trends2006.col.pdf
1951 version shows most stations with an increase in temperature
1971 version shows warming from 180 to 15 deg E (clockwise)

Wyatt A
March 1, 2009 1:27 pm

Anthony and DAV,
Thanks for the links and discussions!
This the most awesomely-awesomest of websites. Most deserved of the “Best Science Blog” award.
Jeff-n-Jeff,
Great work!
I was wondering though, rather than correlate station distance to temperature maybe we should look at latitude? Stations separated by miles, but at the same latitude, might have a strong correlation.
Thanks again,

BarryW
March 1, 2009 1:35 pm

So if the rule of thumb for climate vs weather is 30 yrs and even Steig’s analysis shows a cooling since 1979, then the antarctic climate is, by the climatologists’ own definition, definitely getting colder! Yet they publish the opposite. Alert the media!

March 1, 2009 1:36 pm

Until I read through this, I had no idea there was so much complexity involved. I am reminded of Dante, and his observation that “complex frauds” resulted in those who willingly participated in them spending eternity somewhere below the 6th or 7th Circle.
The Global Warming “calculations”, here and in other areas of “concern” certainly seem to fall under the classification of “complex fraud”.
catholicfundamentalism.com makes use of many of your articles to let believers know that they should pray for those who tell lies for money or prestige. Or, both. They seem to define “lost souls” by their very existence.

John F. Hultquist
March 1, 2009 1:47 pm

Jeff C. (09:18:12) :
Allen63 (06:03:48) :
Regarding the number of components used – A simple word explanation:
I’ll use an off-topic example because I think all will be able to relate to it.
Say I had data by county that included “new car purchases” along with age classes (0-5, 5-10, etc.); sex (2 classes), income level (again with several classes), % foreign born, and so on. Some of these variables are obviously related (maybe r^2 > .9). The goal is to find one, two, or more variables that “explain” our dependent variable, namely, “new car purchases.”
For example, % with income >$100,000 might be one with high explanatory power. But that would be highly correlated with age between 50-65, and also, % employed in the “high tech” industry. Think of several other things that would be related to these.
The idea then is to manipulate the data in such a way as to collapse these several related measures (variables) into a “component” variable that would, by itself, have a high correlation with our independent variable.
We would like to have several of these components that are not themselves correlated, but when these several are all used they have high explanatory power. That means they should explain the variance in the data of the independent variable.
We also would like these components to be “interpretable,” or have a meaning we could assign a name to, as in my example, maybe the term “Status.” You want principal components because as the number of components increases each successive one has less and less explanatory power and is less interpretable, or has less meaning. However, the more you have or use the more variance you explain, but the tradeoff is you can’t say what was added to your degree of understanding.
In the case of this temperature data, this last statement would seem to be a non-issue.

Rocket Man
March 1, 2009 1:50 pm

HasItBeen4YearsYet? (12:54:42) :
If you are trying to measure atmospheric warming, using ground based measurements is not going to give you the atmospheric temperature of the column of air over the measurement site. Rather, ground based measurements are going to give you a representation of the interaction between the ground and the air at ground level. While this information might be useful in determining micro climate effects, it is not very useful in telling you what is happening in the atmosphere as a whole.

John F. Hultquist
March 1, 2009 1:52 pm

In my post I wrote, “That means they should explain the variance in the data of the independent variable.” That last should read “dependent variable.” Sorry for the too-quick submit.

Eric
March 1, 2009 2:26 pm

When this was discussed on CA, I asked what criteria could be used to determine the optimum number of PCs to use, and what criteria the authors had applied.
It seems clear from the data that 3 were not enough to represent the station data.
There are 2 criteria mentioned on this web site which explains the principal components representation method.
http://www.statsoft.com/textbook/stfacan.html
I am referring to the Kaiser Criterion
“The Kaiser criterion. First, we can retain only factors with eigenvalues greater than 1.”
and the Scree Criterion.
“Cattell suggests to find the place where the smooth decrease of eigenvalues appears to level off to the right of the plot. “
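For readers who want to see those criteria in action, here is a rough Python sketch on synthetic station-like series (everything below is invented for illustration, not the AWS data):

import numpy as np

rng = np.random.default_rng(2)

# Synthetic "stations": a few shared signals plus independent noise
months, stations = 600, 40
signals = rng.normal(size=(months, 4))
loadings = rng.normal(size=(4, stations))
series = signals @ loadings + 2.0 * rng.normal(size=(months, stations))

# Eigenvalues of the correlation matrix, largest first
corr = np.corrcoef(series, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: retain components with eigenvalue > 1
keep = int(np.sum(eigvals > 1))

# Scree-style summary: cumulative share of variance explained
explained = np.cumsum(eigvals) / eigvals.sum()

print("Kaiser criterion retains", keep, "components")
print("variance explained by the first 7:", explained[:7].round(3))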

Taphonomic
March 1, 2009 2:45 pm

I posted a similar comment at Climate Audit and decided to post this here too, as Climate Audit appears to be down.
This is an excellent analysis and rework of the methods. However, as the Jeffs note, it analyzes the AWS data which Steig et al. indicate were only used for corroborative purposes. As documented by Steve McIntyre (from statements on RealClimate) in his post
http://www.climateaudit.org/?p=5312#comment-329096
the actual AVHRR data that the Steig et al. paper was based upon have not been provided.
This raises the question of why Nature published this paper at all (not to mention gave it a cover) when the paper violates Nature’s own editorial policies for “Availability of data and materials”
(editorial policies available at: http://www.nature.com/authors/editorial_policies/availability.html ), which clearly state:
“An inherent principle of publication is that others should be able to replicate and build upon the authors’ published claims. Therefore, a condition of publication in a Nature journal is that authors are required to make materials, data and associated protocols promptly available to readers without preconditions. Any restrictions on the availability of materials or information must be disclosed to the editors at the time of submission. Any restrictions must also be disclosed in the submitted manuscript, including details of how readers can obtain materials and information. If materials are to be distributed by a for-profit company, this should be stated in the paper.”
“Supporting data must be made available to editors and peer-reviewers at the time of submission for the purposes of evaluating the manuscript. Peer-reviewers may be asked to comment on the terms of access to materials, methods and/or data sets; Nature journals reserve the right to refuse publication in cases where authors do not provide adequate assurances that they can comply with the journal’s requirements for sharing materials.”
“After publication, readers who encounter refusal by the authors to comply with these policies should contact the chief editor of the journal (or the chief biology/chief physical sciences editors in the case of Nature). In cases where editors are unable to resolve a complaint, the journal may refer the matter to the authors’ funding institution and/or publish a formal statement of correction, attached online to the publication, stating that readers have been unable to obtain necessary materials to replicate the findings.”
It was bad enough that Steig refused to provide the actual code for replication. As the AVHRR data are not available and apparently were not available to the reviewers, one has to ask how did this paper get published when it violates Nature’s policies?

Retired Engineer
March 1, 2009 3:01 pm

“Rocket Man (11:21:30) :
I would think we would still need some ground based stations for calibration purposes, just to keep the satellites honest.”
OK, then what do we do to keep the ground based stations honest?
Having spent nearly 40 years in the measurement business, I can say that long-term absolute accuracy is very hard to achieve. How often are these sensors calibrated? By what means? In the cold of Antarctica, I suspect a lot of things drift with time. And not all in the same direction.
GIGO big time.
J&J’s work does an excellent job of showing Steig may have to go back to the drawing board. We need more of this.

Basil
Editor
March 1, 2009 3:17 pm

Allen63 (06:03:48) :
By the way, why exactly 7 PC?
Why not use as many PCs as possible to get the best fit possible?

I might ask “Why use more than one?” What we are seeing here, I think, is a misuse of PCA. Adding PCs is a bit like adding variables to a regression equation: the goodness of fit, R-squared, always increases with each variable added to the equation. But eventually, the equation gets overspecified, or overdetermined, and the variables are not adding anything meaningful to the analysis.
Where is the table that shows the proportion of the covariance matrix explained by each PC? Especially for those who are not familiar with PCA, I think this is crucial info. Almost everybody who has ever heard of “regression analysis” has a basic understanding of “R-squared.” (That is not always good, I know, but what I am getting at here is roughly equivalent to whatever good R-squared tells us.) In the same way, it is pretty easy to understand that PC’s are ranked in the order of their contribution to the covariance matrix.
Now it is easy to understand, from the third figure, why Steig et al didn’t stop at just one, and did stop at three. The first, and most significant, PC is negative. Since that isn’t what Steig et al set out to show, they couldn’t stop there. So they went on to PC3, the most positive PC, and stopped there. What Jeff & Jeff have done, so nicely, with the fourth figure is demonstrate that PC3 is just an artifact, and is not indicative of an underlying natural or physical process.
Which brings me full circle here. PCA, when used correctly, is intended to extract signals from multivariate data that are thought to be significant, in this case a natural process. It is not the combination of PCs that we are after; it is the individual PCs that are supposed to be representative of something. Without articulating what those individual PCs might represent — which is either something meaningful, or otherwise just “noise” — then PCA, as we’re seeing it with Steig et al, is nothing but a technique of massaging the data to get it to say what we want it to say.
Or, as I prefer to put it: If you torture the data, it will confess, even to crimes it did not commit.
Until I hear a convincing explanation why we shouldn’t stop at PC1, then all I know is that the strongest PC is negative.

Richard P
March 1, 2009 3:21 pm

To Jeff and Jeff,
A very understandable and effective article. In many of these studies the influence of an engineer would keep the results grounded in reality. Having to design something for the real world and being held accountable for the results makes it very difficult to perform this type of analysis without thinking of all of the ramifications and variance.
It is unfortunate that it takes an engineer to apply a rigorous analysis to the system under study.
Also, thanks Anthony for sponsoring this great article.

March 1, 2009 3:47 pm

Thanks to Anthony and everyone for the supportive comments. BTW: if someone can find that elusive big oil check, I can be bought. Just kidding 🙂
There were several good questions and examples here that I’m not sure I can answer cleanly; if I left some unattended which really need addressing, I apologize. There were several pertinent comments about the number of PCs from people who clearly have some experience.
In my opinion PC1 is all that’s required if the stations are weighted according to the area they cover. However, if the stations are weighted that way, I believe PC1 breaks down to a least squares fit of the average. Nice simple math that anyone can appreciate. If the stations are simply tossed in the number masher without concern for area weighting (as was done in Steig 09) higher PC’s are clearly required to infill the data correctly.
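A toy check of that intuition, for whatever it is worth (synthetic, well-correlated, equally weighted series; Python, illustrative only):

import numpy as np

rng = np.random.default_rng(3)

# Strongly correlated synthetic stations sharing one temperature signal
months, stations = 600, 20
common = rng.normal(size=months)
series = common[:, None] + 0.3 * rng.normal(size=(months, stations))
centred = series - series.mean(axis=0)

# First principal component of the station set
eigvals, eigvecs = np.linalg.eigh(np.cov(centred, rowvar=False))
pc1 = centred @ eigvecs[:, -1]        # largest eigenvalue comes last

# Simple average of the stations
avg = centred.mean(axis=1)

# For well-correlated, equally weighted series, PC1 tracks the average
print("correlation(PC1, mean):", round(abs(np.corrcoef(pc1, avg)[0, 1]), 4))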
Anyway, I’ll keep stopping by because this thread is still very active but if I miss something from the comments you can find me pretty easily and ask again on my much smaller but still fast growing blog.

Rocket Man
March 1, 2009 3:53 pm

Retired Engineer (15:01:44) :
The statement:
“I would think we would still need some ground based stations for calibration purposes, just to keep the satellites honest.”
was made by HasItBeen4YearsYet? (12:54:42) . I was just quoting him.
I completely agree with you that the accuracy of the ground based stations is open to questions due to calibration issues. Working in Aerospace, we regularly calibrate every piece of instrumentation on a fixed schedule. Yearly is the most common, but some are longer and some are shorter. We regularly find errors in the instruments brought in for calibration, which sometimes requires us to go back and repeat a test.
In addition to the siting issues the Surfacestations.org project has brought to light, I would love to see the calibration data on the temperature sensors as well. How often are they calibrated? How often are they out of calibration when they are calibrated and how much are they off? What is the true accuracy of the temperature sensors?
Perhaps the sensors in Antarctica would be a good place to look at these issues as there are so few stations to evaluate. My guess is that the errors in these sensors swamp whatever temperature signal they are trying to tease out of the data.

Jeff C.
March 1, 2009 4:08 pm

Basil and others,
There is plenty more to come on this. Jeff and I focused on one particular aspect, the flaws in the AWS reconstruction (primarily the false long distance correlations) and what happens to the trends if you include the omitted information that caused the false correlations (the higher order PCs).
Steve McIntyre has done some very insightful posts on the spatial patterns of the PC coefficients. In the paper, Dr. Steig claims:

“The first three principal components are statistically separable and can be meaningfully related to important dynamical features of high-latitude Southern Hemisphere atmospheric circulation, as defined independently by extrapolar instrumental data.”

Steve has shown that the spatial patterns are driven by the shape of Antarctica, and any relationship to the “Southern Hemisphere atmospheric circulation” is most likely incidental. Even worse, citations from climate journals over the past 30 years warned against drawing premature conclusions from the spatial patterns of the PC coefficients. One author likened it to “the observations of children who see castles in the clouds”. Thus Dr. Steig’s justification for stopping at three PCs appears weak.
Another fascinating aspect is that every single data point in the satellite reconstruction can be described with an accuracy of 10E-8 by three principal components. This is over 3 million data points (600 months x 5509 locations). The takeaway is that the contents of the entire satellite reconstruction are fitted values. Unlike the AWS reconstruction which contains real measured values and infilled values, the entire satellite reconstruction is infilled! The measured satellite data wasn’t supplemented, it was replaced. We would like to compare the raw satellite measurements to the fitted values in the satellite reconstruction. However, this is the data Dr. Steig has not seen fit to release.
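For anyone wanting to repeat that check, a sketch of the idea follows (assuming the reconstruction has been loaded as a months-by-gridpoints array; the file name below is a placeholder, not a reference to any particular posted file):

import numpy as np

# Placeholder: load the 600 x 5509 reconstruction matrix however you
# obtained it; the file name here is hypothetical.
recon = np.loadtxt("satellite_recon.txt")

# Best rank-3 approximation from the singular value decomposition
U, s, Vt = np.linalg.svd(recon, full_matrices=False)
rank3 = (U[:, :3] * s[:3]) @ Vt[:3, :]

# If every value is reproduced to ~1e-8, the matrix is effectively rank 3:
# the reconstruction contains only fitted values, not raw measurements.
print("max abs difference from rank-3 fit:", np.abs(recon - rank3).max())
print("4th singular value / 1st:", s[3] / s[0])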
We also have the very last plot in this article which shows the satellite reconstruction distance correlation is just as bad as the AWS reconstruction, if not worse.
For those interested, I would urge you to follow the almost daily deconstruction of this paper at Climate Audit or Jeff Id’s site http://noconsensus.wordpress.com/.

Ross
March 1, 2009 4:24 pm

Very well presented study.
Thanks and congratulations to all involved in its production, but we should perhaps temper our enthusiasm with the thought that the results seen here and their implications will never receive the publicity splash that Steig got from his faulty piece. … and that is truly sad.

Jeff C.
March 1, 2009 5:00 pm

thefordprefect (13:23:13) :
“No infill. No reconstruction. Just the data:”
Exactly. So why didn’t Dr. Steig just use this to support his claim instead of turning to the statistical sausage grinder?
The answer is the stations cited are clustered on the peninsula and around the coast. There are only three interior stations, Vostok, Amundsen-Scott (south pole) and Byrd. Three stations for 90% of the area of the continent.
Dr. Comiso of NASA put out a very thorough paper in 2000 that detailed the results of the satellite temperature measurements enhanced using a technique known as cloud masking. His findings, laid out in painstaking detail in the paper, show that the vast majority of the continent interior was cooling. Yes, some areas were warming, but they were largely restricted to the coastal belts. Here is the link, check it out. http://ams.allenpress.com/archive/1520-0442/13/10/pdf/i1520-0442-13-10-1674.pdf
Dr Steig needed to show that Comiso 2000 was wrong. He couldn’t do that with the station data because it doesn’t exist, hence the foray with RegEM.

March 1, 2009 6:31 pm

Wow, what great work on an obviously complicated subject. Kudos to all involved. It’s just fantastic to have so many committed to efforts like these to “stress test” the shaky science that is coming out of those $50 billion in funds to prove global warming is really “global warming.”

David Gladstone
March 1, 2009 7:05 pm

Do the writers of this paper have real names? What are they, please?
I am being badgered by the AGW crowd over the anonymous nature of their posting. I felt it should stand or fall on its technical merits, but…

Basil
Editor
March 1, 2009 7:14 pm

Jeff C. (16:08:51) :
Jeff,
I’m glad there’s more to come, and I’m not trying to detract from what you and Jeff Id (and Steve, and Hu, and whomever else) are doing. But somewhere along the way I’d like to see it brought out that the statement
“The first three principal components are statistically separable and can be meaningfully related to important dynamical features of high-latitude Southern Hemisphere atmospheric circulation, as defined independently by extrapolar instrumental data.”
is meaningless unless the “important dynamical features…yada yada” are related to specific PCs in a plausible and meaningful way. Let’s get the first off the table. What “important dynamical feature” created the negative PC1? I’m guessing, rather, that PC1 is nothing more than a weighted proxy of temperature, so that it doesn’t really explain anything, or tell us anything that we didn’t already know: that without massaging the data to make data where there is none, the weight of the evidence indicates that Antarctica has been cooling (except the Peninsula).
While I think you guys are doing good work exposing the statistical nonsense in all this, the real travesty is that this was accepted for publication in a science journal without, it appears, any kind of real science underpinning the analysis. It was, in the end, all about getting the data to confess to a crime it did not commit.

March 1, 2009 7:27 pm

David,
Who’s badgering?

Pamela Gray
March 1, 2009 7:37 pm

For those wondering about that peninsula, just google Antarctic Ocean Current and you will get lots of good info. That peninsula is far more affected by cyclic components of this huge current, including a warm and cold wave phase. The information is easy to digest and will be very educational in terms of taking “global warming” and “the ice is melting” stuff with a grain of salt water.

jorgekafkazar
March 1, 2009 8:25 pm

DAV said: “The underlying assumption in the EM algorithm is that data dropouts are normally distributed, i.e., non-systemic.”
I really doubt if the dropouts are totally non-systemic. One of the reasons for data dropout is burial of the sensor by snow. I.e., the stations with the worst weather will tend to have more missing data, IMO. I know that there’s no established correlation between snow accumulation and temperature, but it might bear investigation.

jorgekafkazar
March 1, 2009 8:37 pm

timbrom said: “Has anyone calculated the energy required to melt all the ice in Antarctica…? At a rough guess I’d hazard that it would take a couple of years, at least.”
I understand that the esteemed Dr. Steig himself, in a candid moment, stipulated that it would take a very long time for all the accumulated Antarctic ice to melt, based on the slow warming trend he is certain exists. He’s been a lot less open, lately, which I find regrettable, since I know he’s well regarded by some reasonable scientists. I’m hoping he’ll relent, but he may be under some pressure from other interested parties.

alkataba
March 1, 2009 9:52 pm

There is a considerable amount of misinformation propagated about the greenhouse effect by people from both sides of the debate. This is true, but all the graphs and data will not convince Joe Sixpack of anything. Joe Sixpack worries about more taxes, does not want to give up his hemi-powered pickup, and has no idea what a carbon footprint is. While science provides us with the needed information, Rush, and those like him, constantly rail against it. Joe has disregarded the science and believes that climate change is natural, from a wobble in the Earth’s orbit, a change in the sun, or the end of the last ice age. He may even believe there is no climate change at all. Joe believes there is no reason to change anything: it is natural, we did not cause it, we cannot change it, therefore nothing can be done about it. Going green is foolishness. An electric or hydrogen car is pointless.
If Joe Sixpack was the Republican hero, then Homer Simpson is his archetype. To evolve from the horse-drawn wagon to the gas-powered automobile was easy; to evolve beyond that is more than Joe can handle. Maybe he has a point. Consider this: in one man’s life span we have gone from the horse-drawn wagon, to the automobile, to the jet airplane, then to walking on the moon. Throw in the atom bomb, Vietnam, and computers, and maybe it is more than Joe can handle.
With the last President, science was ignored. Perhaps now with President Obama things will change. If the U.S. continues its economic downturn, then the climate change problem will be put on the back burner. Europe and Asia could lead the way for change. Greed is greed, though; what drove us drives them as well.
To Joe, anything that changes his lifestyle is a loss of freedom. A seatbelt use law is like a communist plot. Even though it is for everyone’s good, to mandate any change toward going greener would be viewed as some sort of government restriction on Joe’s freedom. His motto is family, land and his rifle. Any change to deal with climate change will have to be done without him. Alkataba

Roger Knights
March 2, 2009 12:13 am

“the results seen here and their implications will never receive the publicity splash that Steig got from his faulty piece.”
Unless Nature withdraws the paper.

Claude Harvey
March 2, 2009 12:15 am

Works like this one are important to those few who actually seek truth for its own sake. Unfortunately, the general public has neither the will nor the capacity to seek out truth on the subject of “Manmade Global Warming”. For that, they depend on “the court of world opinion”, where the foreman of the jury is the news media. The “attorneys” who present the case on the side of the affirmative are the likes of Al Gore and Dr. James Hansen, and they have been very effective at capturing the jury foreman’s imagination. The “attorneys” on the side of the negative, whoever they may be, have yet to capture the jury’s attention, but works like this one will be important evidence in the event said “attorneys” ever actually do make an effective appearance.
In the meantime, the jury foreman (news media) is suffering a problem with leadership of the jury. He’s having a devil of a time keeping a straight face as he presents “weather stories” of record cold temperatures left and right while simultaneously presenting “climate stories” of “we’re all going to burn up and die (at least those of us who don’t drown beforehand)”. In the final analysis, nature may well decide the outcome while the court dithers; even the dullest bulb on the human tree can read a thermometer and comes equipped with an uncanny ability to detect an icicle working its way down his or her back.

Paul S
March 2, 2009 2:08 am

Amazing! Over 100 posts over several days and not one counter-argument from the AGW crowd! I expected them to be all over this like a rash, crying foul.
Good work, Jeffs!

Stephen Parrish
March 2, 2009 2:48 am

alkataba (21:52:51) :
What does “science was ignored” under Bush mean? I tend to understand this to mean the technocratic planners were not allowed to plan the economy and lives of the people.
When science is applied to a messianic mission I’ll be slow to take it up, because for every polio example there is a DDT example.
Does this mean I ignore science? Nope. It means when it is politicized I will slow my adoption of its strictures.

March 2, 2009 6:55 am

Jeff and Jeff
Very good work. I raised this question at RC (“Antarctic warming is robust” thread, comment 353) and Gavin failed to answer it, as pointed out by ApolytonGP in comment 364.
Steig et al has to be wrong, because their ‘reconstruction’ has the peninsula warming at 0.11 C/decade when the observations show it’s about 0.5 C/decade (their error bars are +/- 0.04, so their actual error is about ten times their error bars!). As you say, they seem to accept this. But given that they are out by a factor of 5 on the peninsula, what hope can we have of the accuracy of their ‘reconstruction’ in the interior?
But there are some crucial numbers you haven’t given us yet as far as I can see.
Steig et al say that over 1956-2007 West Antarctica warmed at 0.17 C/decade and the peninsula at 0.11.
What are the results for these separate regions in your 7-PC reconstruction? This may be difficult because they don’t define these areas exactly, but you should be able to get a rough answer. To put it another way, can you produce a picture like the Nature front cover using (a) 3 PCs and (b) 7 PCs?
(And, if you have time, what happens with more PCs? And what happens if you exclude data from islands like Grytviken, etc.?)

Neo
March 2, 2009 7:24 am

[snip, off topic – nothing to do with Antarctica]

March 2, 2009 7:28 am

Thanks again for the support.
Paul S makes a good point.
I’m curious about the amazing silence on Tamino and RC. They usually take any shot they can at WUWT, yet there is not one comment on their threads. I left a comment last night and it was clipped. Someone always takes a shot at Tamino, yet there’s nothing.
I would like to explore this a little further. I did a short post requesting simple reasonable, polite, on topic questions to RC regarding our result with a copy of the request placed in my comment thread. It will be interesting to see if anyone can get through and how they reply.
http://noconsensus.wordpress.com/2009/03/02/the-stunning-sound-of-silence-requests-for-reply/

Ross
March 2, 2009 9:08 am

Roger Knights (00:13:34) :
“the results seen here and their implications will never receive the publicity splash that Steig got from his faulty piece.”
Unless Nature withdraws the paper.

True!
” ‘Tis a dream devoutly to be wished.”

David Gladstone
March 2, 2009 9:35 am

Jeff, to be honest, it’s my climate challenged older brother, so I get nowhere without extreme effort!:} Is there a reason for not using your names?

Jeff C.
March 2, 2009 11:43 am

David,
Thanks for your comments. This response is a bit long-winded, but here are my reasons for anonymity. These are my thoughts alone; I don’t in any way claim to speak for Jeff Id.
The paper attracted my attention due to the almost immediate 180-degree shift on Antarctica seen in the popular press. Overnight it went from “Antarctica is cooling, but we expected that” to “Antarctica is warming, just like we thought it would”. There were virtually no questions, aside from the Trenberth quote, regarding how the paradigm could completely change. The pronouncement was that this was the new reality.
Unlike the paper’s authors, all of the work discussed in the article was done on my own time without any compensation. The authors of this paper are well-paid (as they should be for their experience and education) using public funding to perform research that will ultimately shape government policy. That policy will affect the quality of our lives whether we individually consent or not. The studies that shape the policies need to be transparent, verifiable and sound.
I am employed in private industry. My employer has the same right to transparency in my work that I would expect from those employed by the public. I don’t withhold methods, or provide vague descriptions when clarification is requested. I often have to present my findings in design review presentations where the company’s and the customer’s experts are brought in to judge if I did my job correctly. If I made a mistake, it isn’t pleasant to have it exposed in an auditorium with as many as 100 people watching. Because of that, I strive to be cautious in my work and not venture into areas where I am not confident of my findings. When pressed with hard questions, I answer them honestly or admit I don’t know. One thing I never do (and would be reprimanded if I did) is demand to know the name and credentials of those asking the questions before I decide to answer. My employer thinks it is important for them to question my work; if I want to be able to continue to pay my mortgage, I answer them.
Unfortunately, climate science is heavily politicized. Those who dare question the AGW orthodoxy are vilified on environmental extremist and left-wing websites. My last name isn’t common. A few minutes googling around and you are looking at satellite photos of my home. If I were being paid six-figures to look into this, I might accept that as the cost of doing business. I’m not, so I don’t.
I will gladly supply all of the back-up documentation (R and Matlab scripts, Excel workbooks, etc.) to those who have a true interest in following up on this. I can be reached through Jeff Id’s site. If those associated with the paper would like to privately discuss this (as opposed to public sniping on blogs), I have no problem using my real name.

March 2, 2009 6:56 pm

If you’ll put up a separate tip jar for the costs for an ad, I’ll ding it periodically.

E.M.Smith
Editor
March 5, 2009 4:46 pm

@Pierre-Luc (07:27:02) : http://droitemonde.blogspot.com/2009/03/une-autre-mauvaise-nouvelle-pour-nos.html
Thank you, Pierre-Luc, for your link posted on WUWT. It is very hot, is it not?

E.M.Smith
Editor
March 5, 2009 4:48 pm

With a sarc/> 😎

Richard S Courtney
March 7, 2009 2:52 pm

All:
The following is correspondence I have had with Nature magazine. I have still not had a reply.
Please note that Nature refuses to publish anything that has appeared in the public domain and, therefore, I have not made this correspondence public before. However, the amount of time since I first asked the questions makes clear that Nature – which I remember from times past as a serious scientific journal – is not willing to address the issue. Therefore, I am now posting the correspondence here in the hope of providing the information widely, and the correspondence is also to be published in ‘Energy and Environment’.
Richard
Letter to Nature 24 February 2009
Dear Sirs:
A month has passed since I sent you the letter that I copy below. It asks for necessary explanation of the methodology used by Steig et al. to conduct work that they reported in Nature (ref. Nature v457 Issue 7228, pp 459-462, 22 Jan.2009).
You replied saying:
In a message dated 24/01/2009 10:59:56 GMT Standard Time, feedback@nature.com writes:
Thank you for contacting us. We acknowledge receipt of your question.
Since then I have heard nothing from you, and I have found no mention of the matter in subsequent editions of Nature.
I find it strange that you have not provided answers to the questions in my letter (copied below). The paper by Steig et al. was used by Nature as a ‘cover story’, and Nature issued a press release to announce the paper’s publication, but I have questioned the methodology of that paper. And it would require very little effort by the paper’s authors to answer my questions if there are valid answers to my questions. Importantly, as my letter (copied below) explains, the paper by Steig et al. cannot be considered to be a work of science until my questions have been answered because – at present – the reported methodology of that work is based on circular reasoning.
So, I am writing this message as a strong request that valid answers to my questions now be provided. If no such answers are forthcoming, then I will address the problem in another journal.
Regards
Richard S Courtney
Letter submitted to Nature 24 January 2009
Dear Sirs:
The report by Steig et al. (Nature v457 Issue 7228, pp 459-462, 22 Jan. 2009) asserts that they have shown “that significant warming extends well beyond the Antarctic Peninsula to cover most of West Antarctica” by interpolation of data from a few surface stations and by adjusting satellite telemetry data.
The supplementary information to their paper includes the following:
“Accuracy in the retrieval of ice sheet surface temperatures from satellite infrared data depends on successful cloud masking, which is challenging because of the low contrast in the albedo and emissivity of clouds and the surface. In Comiso (ref. 8), cloud masking was done by a combination of channel differencing and daily differencing, based on the change in observed radiances due to the movement of clouds. Here, we use an additional masking technique in which daily data that differ from the climatological mean by more than some threshold value are assumed to be cloud contaminated and are removed. We used a threshold value of 10°C, which produces the best validation statistics in the reconstruction procedure.”
The above quotation says Steig et al. removed all satellite infrared data that differed by 10 °C or more from the “climatological mean”. This raises interesting questions that warrant answers before the findings of Steig et al. should be accepted; viz.
How did Steig et al. know the “climatological mean” when that is the parameter they were trying to determine?
And if they knew it then why did they need to determine it?
Importantly, why did peer review of the paper by Steig et al. fail to obtain clarification of such a fundamental point concerning the validity of the used methodology?
Regards
Richard S Courtney
From: Richard S Courtney
88 Longfield
Falmouth
Cornwall
TR11 4SL
United Kingdom
Tel: +44 01326 211849
email: RichardSCourtney@aol.com

March 8, 2009 8:15 am

How Wattsup can lie and get away with it:
http://www.davidsuzuki.org/about_us/Dr_David_Suzuki/Article_Archives/weekly03060901.asp
Why does the public often pay more attention to climate change deniers than climate scientists? Why do denial arguments that have been thoroughly debunked still show up regularly in the media?
Some researchers from New York’s Fordham University may have found some answers. Prof. David Budescu and his colleagues asked 223 volunteers to read sentences from reports by the Intergovernmental Panel on Climate Change. The responses revealed some fundamental misunderstandings about how science works.
Science is a process. Scientists gather and compare evidence, then construct hypotheses that “make sense” of the data and suggest further tests of the hypothesis. Other scientists try to find flaws in the hypothesis with their own data or experiments. Eventually, a body of knowledge builds, and scientists become more and more certain of their theories. But there’s always a chance that a theory will be challenged. And so the scientists speak about degrees of certainty. This has led to some confusion among the public about the scientific consensus on climate change.
What Prof. Budescu and his colleagues found was that subjects interpreted statements such as “It is very likely that hot extremes, heat waves and heavy precipitation events will continue to become more frequent” to mean that scientists were far from certain. In fact, the term very likely means more than 90 per cent certain, but almost half the subjects thought it meant less than 66 per cent certain, and three quarters thought it meant less than 90 per cent.

REPLY: Thanks for the kind words “CCPO”. Unfortunately yours is mostly the angry opinion of a coward. You denigrate, hurl insults, and make labels much like our good friend Tenney. But you do so from behind the safety of a fake internet name. You even hide behind a proxy. At least have the courage to address the posting directly, using your own name, rather than to hide behind cowardice. I wonder though, as a teacher, do you say the same sorts of things to your students when they question things? Try not to be so angry all the time, it does little good here.- Anthony

Don S
March 18, 2009 1:27 pm

And, today, Mar 18, comes from NSF a press release on the ANDRILL program, a $20 million (US, plus $10 million from other governments) Antarctic core-drilling boondoggle, which has resulted in a preliminary report more full of “coulds”, “shoulds”, “mights”, “projected to occur” and “models indicate” than a Madoff sales pitch. Somebody tell me what it all means.

travis may
April 20, 2009 10:02 pm

Science is a process. Scientists gather and compare evidence, then construct hypotheses that “make sense” of the data and suggest further tests of the hypothesis. Other scientists try to find flaws in the hypothesis with their own data or experiments…………..
..Hello.. This is the end to the result; conclusive science would take 100’s of years. I thank you for your insight. I plead that your heart finds the Son of God, Jesus the Christ, so that because of you people will believe not in man but in God’s will with His creation. Thanks for the info. Here’s mine… aplusraingutter@gmail.com