The process behind the Steig et al. 'Antarctica warming' paper is finally replicated, and "robustness" is dealt a blow.

Jeff Id emailed me today to ask if I wanted to post this, with the caveat “it’s very technical, but I think you’ll like it”. Indeed I do, because it represents a significant step forward in the puzzle that is the Steig et al. paper published in Nature this year (Nature, Jan 22, 2009), which claims to have reversed the previously accepted idea that Antarctica is cooling. From the “consensus” point of view, it is very important for “the Team” to make Antarctica start warming. But then there’s that pesky problem of all that above-normal ice in Antarctica. Plus, there are other problems, such as buried weather stations, which will tend to read warmer when covered with snow. And the majority of the weather stations (and thus data points) are in the Antarctic peninsula, which weights the results. The Antarctic peninsula could even be classified under a different climate zone given its separation from the mainland and strong maritime influence.

A central point here is that Steig has flatly refused to provide all of the code (MatLab and RegEM) needed to fully replicate his work, and has so far refused requests for it. Without the code, replication is difficult, and without replication there can be no significant challenge to the validity of the Steig et al. paper.

Steig’s claim that there has been “published code” is only partially true; what he has published is akin to a set of spark plugs and a manual on using a spark plug wrench, when the task at hand is rebuilding an entire V-8 engine.

In a previous Air Vent post, Jeff C points out the percentage of code provided by Steig:

“Here is an excellent flow chart done by JeffC on the methods used in the satellite reconstruction. If you see the little rectangle which says RegEM at the bottom right of the screen, that’s the part of the code which was released, the thousands of lines I and others have written for the rest of the little blocks had to be guessed at, some of it still isn’t figured out yet.”

http://noconsensus.files.wordpress.com/2009/04/steigflowrev4-6-09.jpg?w=598&h=364
RegEM Satellite data flow chart. Courtesy Jeff C - click for larger image

With that, I give you Jeff and Ryan’s post below. – Anthony

Antarctic Coup de Grace

Posted by Jeff Id on May 20, 2009

I was going to hold off on this post because Dr. Weinstein’s post is getting a lot of attention right now; it has been picked up on several blogs and even translated into other languages. But this is too good not to post.

Ryan has done something amazing here, no joking. He’s recalibrated the satellite data used in Steig’s Antarctic paper, correcting offsets and trends; determined a reasonable number of PCs for the reconstruction; and actually calculated a reasonable trend for the Antarctic, with proper cooling and warming distributions. He basically fixed Steig et al. by addressing the very concern I had: that AVHRR vs. surface station temperature (SST) trends and AVHRR vs. SST correlations were not well related in the Steig paper.

Not only that, he dealt a substantial blow to the ‘robustness’ of the Steig/Mann method at the same time.

If you’ve followed this discussion at all, you’ve got to read this post.

RegEM for this post was originally ported to R by Steve McIntyre; the versions used include Steve M’s truncated-PC variant as well as code modified by Ryan.

Ryan O – Guest post on the Air Vent

I’m certain that all of the discussion about the Steig paper will eventually become stale unless we begin drawing some concrete conclusions. Does the Steig reconstruction accurately (or even semi-accurately) reflect the 50-year temperature history of Antarctica?

Probably not – and this time, I would like to present proof.

I: SATELLITE CALIBRATION

As some of you may recall, one of the things I had been working on for a while was attempting to properly calibrate the AVHRR data against the ground data. In doing so, I noted some major problems with NOAA-11 and NOAA-14. I also noted a minor linear decay in NOAA-7, while NOAA-9 just had a simple offset.

But before I was willing to say that there were actually real problems with how Comiso strung the satellites together, I wanted to verify that there was published literature that confirmed the issues I had noted. Some references:

(NOAA-11)

Click to access i1520-0469-59-3-262.pdf

(Drift)

Click to access orbit.pdf

(Ground/Satellite Temperature Comparisons)

Click to access p26_cihlar_rse60.pdf

The references generally confirmed what I had noted by comparing the satellite data to the ground station data: NOAA-7 had a temperature decrease with time, NOAA-9 was fairly linear, and NOAA-11 had a major unexplained offset in 1993.

Fig_1
Fig. 1: AVHRR trend (points common with ground data).

Let us see what this means in terms of differences in trends.

Fig_2
Fig. 2: Difference in trend between AVHRR data and ground data.

The satellite trend (using only common points between the AVHRR data and the ground data) is double that of the ground trend. While zero is still within the 95% confidence intervals, remember that there are 6 different satellites. So even though the combined confidence intervals overlap zero, the individual satellite offsets may not.

In order to check the individual offsets, I performed running Wilcoxon and t-tests on the difference between the satellites and ground data using a +/-12 month range. Each point is normalized to the 95% confidence interval. If any point exceeds +/- 1.0, then there is a statistically significant difference between the two data sets.
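The original running tests were done in R (the code is linked at the end of the post), but the windowed approach described above is easy to illustrate. The sketch below, in Python with `scipy`, is mine, not Ryan's; function and parameter names are hypothetical. Each statistic is scaled by its 95% critical value, so a magnitude above 1.0 flags a significant satellite-minus-ground difference at that month:

```python
import numpy as np
from scipy import stats

def running_tests(diff, half_window=12):
    """Run windowed one-sample t-tests and Wilcoxon signed-rank tests on a
    satellite-minus-ground difference series, normalizing each statistic by
    its 95% critical value so |value| > 1 marks a significant difference."""
    t_norm = np.full(len(diff), np.nan)
    w_norm = np.full(len(diff), np.nan)
    for i in range(len(diff)):
        win = diff[max(0, i - half_window): i + half_window + 1]
        win = win[~np.isnan(win)]
        if len(win) < 6:
            continue  # too few points to test
        t_stat, _ = stats.ttest_1samp(win, 0.0)
        t_norm[i] = t_stat / stats.t.ppf(0.975, df=len(win) - 1)
        # Wilcoxon signed-rank: convert the two-sided p-value to a signed
        # normal score and scale by the 5% critical value (1.96).
        p = stats.wilcoxon(win).pvalue
        w_norm[i] = np.sign(np.median(win)) * stats.norm.isf(p / 2) / 1.96

    return t_norm, w_norm

# Toy usage: pure noise except for a 1 deg C offset in months 40-79.
rng = np.random.default_rng(0)
diff = rng.normal(0.0, 0.1, 120)
diff[40:80] += 1.0
t_n, w_n = running_tests(diff)
```

Because the t-test assumes normal, uncorrelated residuals and the Wilcoxon does not, disagreement between the two normalized series is itself informative, which is the point made in the sidebar below.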

Fig_3
Fig. 3: Results of running Wilcoxon and t-tests between satellite and ground data.

Note that there are two distinct peaks well beyond the confidence intervals and that both lines spend much greater than 5% of the time outside the limits. There is, without a doubt, a statistically significant difference between the satellite data and the ground data.

As a sidebar, the Wilcoxon test is a non-parametric test. It does not require correction for autocorrelation of the residuals when calculating confidence intervals. The fact that it differs from the t-test results indicates that the residuals are not normally distributed and/or the residuals are not free from correlation. This is why it is important to correct for autocorrelation when using tests that rely on assumptions of normality and uncorrelated residuals. Alternatively, you could simply use non-parametric tests, and though they often have less statistical power, I’ve found the Wilcoxon test to be pretty good for most temperature analyses.

Here’s what the difference plot looks like with the satellite periods shown:

Fig_4
Fig. 4: Difference plot, satellite periods shown.

The downward trend during NOAA-7 is apparent, as is the strange drop in NOAA-11. NOAA-14 is visibly too high, and NOAA-16 and -17 display some strange upward spikes. Overall, though, NOAA-16 and -17 do not show a statistically significant difference from the ground data, so no correction was applied to them.

After having confirmed that other researchers had noted similar issues, I felt comfortable in performing a calibration of the AVHRR data to the ground data. The calculated offsets and the resulting Wilcoxon and t-test plot are next:
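For the satellites that only need a constant correction, the calibration step amounts to estimating and removing a per-satellite mean difference over the months common to both series. This is an illustrative Python sketch of that idea, not the original R code (names and data structures are mine); the NOAA-7 linear decay would need a fitted slope rather than a constant, which follows the same pattern with `np.polyfit`:

```python
import numpy as np

def calibrate_offsets(sat, ground, periods, significant):
    """Estimate a constant offset for each satellite's period as the mean
    satellite-minus-ground difference over common (non-missing) months, and
    subtract it only for satellites whose offset tested as significant
    (mirroring the decision to leave NOAA-16/-17 untouched)."""
    corrected = sat.astype(float).copy()
    offsets = {}
    for name, mask in periods.items():
        common = mask & ~np.isnan(sat) & ~np.isnan(ground)
        offsets[name] = float(np.mean(sat[common] - ground[common])) if common.any() else 0.0
        if name in significant:
            corrected[mask] -= offsets[name]
    return corrected, offsets

# Toy usage: satellite "A" reads 0.5 deg C high, "B" is unbiased.
ground = np.linspace(0.0, 1.0, 10)
sat = ground.copy()
mask_a = np.zeros(10, bool); mask_a[:5] = True
sat[mask_a] += 0.5
corr, offs = calibrate_offsets(sat, ground, {"A": mask_a, "B": ~mask_a}, {"A"})
```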

Fig_5
Fig. 5: Calculated offsets.
Fig_6
Fig. 6: Post-calibration Wilcoxon and t-tests

To make sure that I did not “over-modify” the data, I ran a Steig (3 PC, regpar=3, 42 ground stations) reconstruction. The resulting trend was 0.1079 deg C/decade and the trend maps looked nearly identical to the Steig reconstructions. Therefore, the satellite offsets – while they do produce a greater trend when not corrected – do not seem to have a major impact on the Steig result. This should not be surprising, as most of the temperature rise in Antarctica occurs between 1957 and 1970.

II: PCA

One of the items on which we’ve spent a lot of time doing sensitivity analysis is the PCA of the AVHRR data. Between Jeff Id, Jeff C, and myself, we’ve performed somewhere north of 200 reconstructions using different methods and different numbers of retained PCs. Based on that, I believe we have a pretty good feel for the range of values the reconstructions produce, and we all feel that the 3 PC, regpar=3 solution does not accurately reproduce Antarctic temperatures. Unfortunately, our opinions count for very little. We must have a solid basis for concluding that Steig’s choices were less than optimal – not just opinions.

How many PCs to retain for an analysis has been the subject of much debate in many fields. I will quickly summarize some of the major stopping rules:

1. Kaiser-Guttman: Include all PCs with eigenvalues greater than the average eigenvalue. In this case, this would require retention of 73 PCs.

2. Scree Analysis: Plot the eigenvalues from largest to smallest and take all PCs where the slope of the line visibly ticks up. This is subjective, and in this case it would require the retention of 25 – 50 PCs.

3. Minimum explained variance: Retain PCs until some preset amount of variance has been explained. This preset amount is arbitrary, and different people have selected anywhere from 80-95%. This would justify including as few as 14 PCs and as many as 100.

4. Broken stick analysis: Retain PCs that exceed the theoretical scree plot of random, uncorrelated noise. This yields precisely 11 PCs.

5. Bootstrapped eigenvalue and eigenvalue/eigenvector: Through iterative random sampling of either the PCA matrix or the original data matrix, retain PCs that are statistically different from PCs containing only noise. I have not yet done this for the AVHRR data, though the bootstrap analysis typically yields about the same number (or a slightly greater number) of significant PCs as broken stick.
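The broken-stick rule (rule 4) is simple enough to state exactly: the expected share of variance for the k-th largest piece of a unit stick broken at random into p pieces is b_k = (1/p) Σ_{i=k}^{p} 1/i, and leading PCs are retained while their observed variance share exceeds that expectation. A small Python sketch of the rule (not the code behind Fig. 7):

```python
import numpy as np

def broken_stick_retention(eigenvalues):
    """Broken-stick stopping rule: retain leading PCs whose share of total
    variance exceeds b_k = (1/p) * sum_{i=k}^{p} 1/i, the expected share of
    the k-th longest piece of a randomly broken unit stick. Stops at the
    first PC falling below its broken-stick expectation."""
    ev = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    p = len(ev)
    props = ev / ev.sum()
    bstick = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p
                       for k in range(1, p + 1)])
    n_keep = 0
    for obs, exp in zip(props, bstick):
        if obs > exp:
            n_keep += 1
        else:
            break
    return n_keep, props, bstick
```

A useful sanity check on the formula: the broken-stick shares b_k sum to exactly 1, just like the observed variance proportions.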

The first 3 rules are widely criticized for being either subjective or retaining too many PCs. In the Jackson article below, a comparison is made showing that 1, 2, and 3 will select “significant” PCs out of matrices populated entirely with uncorrelated noise. There is no reason to retain noise, and the more PCs you retain, the more difficult and cumbersome the analysis becomes.

The last 2 rules have statistical justification. And, not surprisingly, they are much more effective at distinguishing truly significant PCs from noise. The broken stick analysis typically yields the fewest number of significant PCs, but is normally very comparable to the more robust bootstrap method.

Note that all of these rules would indicate retaining far more than simply 3 PCs. I have included some references:

Click to access pca.pdf

Click to access North_et_al_1982_EOF_error_MWR.pdf

I have not yet had time to modify a bootstrapping algorithm I found (it was written for a much older version of R), but when I finish that, I will show the bootstrap results. For now, I will simply present the broken stick analysis results.

Fig_7

Fig. 7: Broken Stick Analysis on AVHRR data.

The broken stick analysis finds 11 significant PCs. PCs 12 and 13 are also very close, and I suspect the bootstrap test will find that they are significant. I chose to retain 13 PCs for the reconstruction to follow.

Without presenting plots for the moment, retaining more than 11 PCs does not end up affecting the results much at all. The trend does drop slightly, but this is due to better resolution on the Peninsula warming. The rest of the continent does not change if additional PCs are added. The only thing that changes is the time it takes to do the reconstruction.

Remember that the purpose of the PCA on the AVHRR data is not to perform factor analysis. The purpose is simply to reduce the size of the data to something that can be computed. The penalty for retaining “too many” – in this case – is simply computational time or the inability for RegEM to converge. The penalty for retaining too few, on the other hand, is a faulty analysis.
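That data-reduction role can be sketched with a plain truncated SVD. This toy (not the actual AVHRR processing chain, which was done in R) shows the sense in which retaining "enough" PCs preserves the field while shrinking the problem handed to RegEM:

```python
import numpy as np

def reduce_to_pcs(X, k):
    """Reduce a (time x gridcell) anomaly matrix to its leading k principal
    components via SVD. Returns the PC time series, the spatial patterns
    (EOFs), and the rank-k approximation of the field they imply."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = U[:, :k] * s[:k]   # PC time series, one column per PC
    eofs = Vt[:k]            # spatial patterns
    return pcs, eofs, pcs @ eofs

# Toy usage: a field with exactly 3 independent spatial modes is captured
# perfectly by 3 PCs, but not by 2.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 20))
pcs, eofs, X3 = reduce_to_pcs(X, 3)
_, _, X2 = reduce_to_pcs(X, 2)
```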

I do not see how the choice of 3 PCs can be justified on either practical or theoretical grounds. On the practical side, RegEM works just fine with as many as 25 PCs. On the theoretical side, none of the stopping criteria yield anything close to 3. Not only that, but these are empirical functions. They have no direct physical meaning. Despite claims in Steig et al. to the contrary, they do not relate to physical processes in Antarctica – at least not directly. Therefore, there is no justification for excluding PCs that show significance simply because the other ones “look” like physical processes. This latter bit is a whole other discussion that’s probably post worthy at some point, but I’ll leave it there for now.

III: RegEM

We’ve also spent a great deal of time on RegEM. Steig & Co. used a regpar setting of 3. Was that the “right” setting? They do not present any justification, but that does not necessarily mean the choice is wrong. Fortunately, there is a way to decide.

RegEM works by approximating the actual data with a certain number of principal components and estimating a covariance from which missing data is predicted. Each iteration improves the prediction. In this case (unlike the AVHRR data), selecting too many can be detrimental to the analysis as it can result in over-fitting, spurious correlations between stations and PCs that only represent noise, and retention of the initial infill of zeros. On the other hand, just like the AVHRR data, too few will result in throwing away important information about station and PC covariance.

Figuring out how many PCs (i.e., what regpar setting to use) is a bit trickier because most of the data is missing. Like RegEM itself, this problem needs to be approached iteratively.

The first step was to substitute AVHRR data for station data, calculate the PCs, and perform the broken stick analysis. This yielded 4 or 5 significant PCs. After that, I performed reconstructions with steadily increasing numbers of PCs and performed a broken stick analysis on each one. Once the regpar setting is high enough to begin including insignificant PCs, the broken stick analysis yields the same result every time. The extra PCs show up in the analysis as noise. I first did this using all the AWS and manned stations (minus the open ocean stations).
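The iterative idea can be illustrated with a toy infill loop. This is emphatically not Schneider's RegEM (there is no ridge regularization and no explicit covariance estimate), just the alternate fit-and-impute skeleton the text describes, written as an assumed-for-illustration Python sketch:

```python
import numpy as np

def em_pca_infill(X, k, n_iter=200):
    """Toy EM-style infill in the spirit of RegEM's iteration: initialize
    missing cells at zero, then alternate a rank-k SVD fit with
    re-imputation of the missing cells from that fit."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    X[miss] = 0.0  # initial infill of zeros, as in the text
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        fit = (U[:, :k] * s[:k]) @ Vt[:k]
        X[miss] = fit[miss]  # observed cells are never overwritten
    return X

# Toy usage: a noiseless rank-2 field with ~10% of cells removed is
# recovered almost exactly when k matches the true rank.
rng = np.random.default_rng(2)
truth = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 15))
holes = rng.random(truth.shape) < 0.1
X = truth.copy(); X[holes] = np.nan
filled = em_pca_infill(X, 2, n_iter=500)
```

Running a broken-stick analysis on the PCs of each such reconstruction, at increasing regpar, is then exactly the procedure described above: once regpar exceeds the number of genuinely significant components, the extras show up only as noise.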

Fig_8

Fig. 8: Broken stick analysis on manned and AWS stations, regpar = 8.

Fig_9

Fig. 9: Broken stick analysis on manned and AWS stations, regpar=12.

I ran this all the way up to regpar=20 and the broken stick analysis indicates that 9 PCs are required to properly describe the station covariance. Hence the appropriate regpar setting is 9 if all the manned and AWS stations are used. It is certainly not 3, which is what Steig used for the AWS recon.

I also performed this for the 42 manned stations Steig selected for the main reconstruction. That analysis yielded a regpar setting of 6 – again, not 3.

The conclusion, then, is similar to the AVHRR PC analysis. The selection of regpar=3 does not appear to be justifiable. Additional PCs are necessary to properly describe the covariance.

IV: THE RECONSTRUCTION

So what happens if the satellite offsets are properly accounted for, the correct number of PCs are retained, and the right regpar settings are used? I present the following panel:

Fig_10

Fig. 10: (Left side) Reconstruction trends with the post-1982 PCs spliced back in (Steig’s method).

(Right side) Reconstruction trends using just the model frame.

RegEM PTTLS does not return the entire best-fit solution (the model frame, or surface). It only returns what the best-fit solution says the missing points are. It retains the original points. When imputing small amounts of data, this is fine. When imputing large amounts of data, it can be argued that the surface is what is important.

RegEM IPCA returns the surface (along with the spliced solution). This allows you to see the entire solution. In my opinion, in this particular case, the reconstruction should be based on the solution, not a partial solution with data tacked on the end. That is akin to doing a linear regression, throwing away the last half of the regression, adding the data back in, and then doing another linear regression on the result to get the trend. The discontinuity between the model and the data causes errors in the computed trend.
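The splice-versus-model distinction is easy to demonstrate on a toy series: a constant offset between the retained data and the model solution adds a spurious component to the spliced trend while leaving the model-frame trend alone. This sketch is illustrative only, not the reconstruction code:

```python
import numpy as np

def spliced_and_model_trends(obs, model, t):
    """'Spliced' keeps observed values where they exist and uses the model
    only for missing cells (Steig-style); the 'model frame' uses the
    best-fit solution everywhere. A discontinuity where model meets data
    shows up directly as a difference in the fitted linear trends."""
    spliced = np.where(np.isnan(obs), model, obs)
    slope = lambda y: np.polyfit(t, y, 1)[0]
    return slope(spliced), slope(model)

# Toy usage: the model trend is 0.01/step; the observed second half sits
# 0.5 above the model, so splicing inflates the fitted trend.
t = np.arange(100.0)
model = 0.01 * t
obs = np.full(100, np.nan)
obs[50:] = 0.01 * t[50:] + 0.5
spl, mod = spliced_and_model_trends(obs, model, t)
```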

Regardless, the verification statistics are computed vs. the model – not the spliced data – and though Steig did not do this for his paper, we can do it ourselves. (I will do this in a later post.) Besides, the trends between the model and the spliced reconstructions are not that different.

Overall trends are 0.071 deg C/decade for the spliced reconstruction and 0.060 deg C/decade for the model frame. This is comparable to Jeff’s reconstructions using just the ground data, and as you can see, the temperature distribution of the model frame is closer to that of the ground stations. This is another indication that the satellites and the ground stations are not measuring exactly the same thing. It is close, but not exact, and splicing PCs derived solely from satellite data on a reconstruction where the only actual temperatures come from ground data is conceptually suspect.

When I ran the same settings in RegEM PTTLS – which only returns a spliced version – I got 0.077 deg C/decade, which checks nicely with RegEM IPCA.

I also did 11 PC, 15 PC, and 20 PC reconstructions. Trends were 0.081, 0.071, and 0.069 for the spliced and 0.072, 0.059, and 0.055 for the model. The reason for the reduction in trend was simply better resolution (less smearing) of the Peninsula warming.

Additionally, I ran reconstructions using just Steig’s station selection. With 13 PCs, this yielded a spliced trend of 0.080 and a model trend of 0.065. I then did one after removing the open-ocean stations, which yielded 0.080 and 0.064.

Note how when the PCs and regpar are properly selected, the inclusion and exclusion of individual stations does not significantly affect the result. The answers are nearly identical whether 98 AWS/manned stations are used, or only 37 manned stations are used. One might be tempted to call this “robust”.

V: THE COUP DE GRACE

Let us assume for a moment that the reconstruction presented above represents the real 50-year temperature history of Antarctica. Whether this is true is immaterial. We will assume it to be true for the moment. If Steig’s method has validity, then, if we substitute the above reconstruction for the raw ground and AVHRR data, his method should return a result that looks similar to the above reconstruction.

Let’s see if that happens.

For the substitution, I took the ground station model frame (which does not have any actual ground data spliced back in) and removed the same exact points that are missing from the real data.

I then took the post-1982 model frame (so the one with the lowest trend) and substituted that for the AVHRR data.

I set the number of PCs equal to 3.

I set regpar equal to 3 in PTTLS.

I let it rip.
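The substitution step above can be sketched in a few lines: blank out of the model frame exactly the cells that are missing from the real station records, so the pseudo-data fed to the Steig-style run has the same coverage pattern as the original inputs. The helper below is hypothetical, written for illustration rather than taken from the posted code:

```python
import numpy as np

def mimic_missingness(model_frame, real_data):
    """Build substitute 'observations' for the robustness test: copy the
    reconstruction's model frame, then blank out exactly the cells missing
    from the real records so coverage matches the original inputs."""
    pseudo = model_frame.astype(float).copy()
    pseudo[np.isnan(real_data)] = np.nan
    return pseudo

# Toy usage on a tiny 3x4 grid with two missing cells.
model = np.arange(12.0).reshape(3, 4)
real = model + 1.0
real[0, 1] = np.nan
real[2, 3] = np.nan
pseudo = mimic_missingness(model, real)
```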

Fig_11

Fig. 11: Steig-style reconstruction using data from the 13 PC, regpar=9 reconstruction.

Look familiar?

Overall trend: 0.102 deg C/decade.

Remember that the input data had a trend of 0.060 deg C/decade, showed cooling on the Ross and Weddell ice shelves, showed cooling near the pole, and showed a maximum trend in the Peninsula.

If “robust” means the same answer pops out of a fancy computer algorithm regardless of what the input data is, then I guess Antarctic warming is, indeed, “robust”.

———————————————

Code for the above post is HERE.

168 Comments
Editor
May 21, 2009 7:25 pm

Jeff, Jeff and Ryan have done the most to investigate Dr. Steig’s work, and if they suggest that there is little if any basis to throw around the “F” word, maybe the rest of us ought to pay heed. I would be delighted to see Dr. Steig reply here, and I would hope he would be given a courteous and attentive hearing. We might even learn something… just as we do when two of our favorite resident curmudgeons whack away at solar science.

Mike Bryant
May 21, 2009 7:30 pm

Wreak may properly be pronounced either “wreck” or “reek”.

VG
May 21, 2009 7:39 pm

George: The issue is AGW and the horrific costs it is going to entail. You, as an inventor/salesperson (as I gather from your posting), would appreciate this. OK, so we are going to go ahead and do all this (and the Steig paper reinforces the concept of warming, probably falsely) while in fact the world is NOT warming, but cooling at this time. Please check all the temp and ice data to confirm; yes, even Hansen et al., CT, etc. They all show increasing ice and declining temps. It is in fact outrageous that these people should be let off so lightly. If this were another issue that was not so costly and did not affect people’s jobs, then maybe OK; in the case of AGW, NOT OK.

May 21, 2009 8:14 pm

I believe I’ve used a lot of George E. Smiths product inventions. I used to make a living programming CCD vision systems.

Gilbert
May 21, 2009 8:15 pm

Gary P (10:35:33) :
A coworker sent me a copy of a report a month ago and I finally got around to reading it. It is from SRI Consulting Business Intelligence, “The US Consumer and Global Warming”. It is scary:
Link here:
http://www.sric-bi.com/Scan/SoC/SoC347.shtml

Jeremy
May 21, 2009 8:31 pm

I find a lot of the posters here forgetting that service to the public is supposed to be a responsibility given to the honorable. This should apply doubly so to the scientists who are given the privilege and honor of using their skills to serve the public good through taxpayer dollars. We may not trust politicians, but the system they work in was at least designed with clear checks and balances, and two opposing parties keep some of that working as it should. We *IMPLICITLY* trust scientists, and those who work for the government should be honest to a fault.
People throw around the “F” word *because* those who have been given the privilege of working for the public good are telling us to trust them without displaying and justifying their methods. It’s akin to being told by your spouse that they’re not cheating on you, that they don’t have to justify leaving the house each night at 11 pm, and that you should just trust them even though all the circumstantial evidence says there’s reason to doubt. If Mann and Steig were politicians it might be easy to ignore the betrayal of trust, but they’re not; they’re scientists who have chosen to take taxpayer money to serve the public with the toil of their minds. They *are* held to a higher standard because the public has placed a special trust in them.
This plague of silence on data, methods, and justification from those who publish under tax dollars must stop. I would guess the best way to do that would be to engage enough of the rest of the community that those who withhold lose the obstructions they hide behind.

PC
May 21, 2009 8:35 pm

Does anyone else notice that the Antarctic peninsula has a shape somewhat like a hockeystick?

Fluffy Clouds (Tim L)
May 21, 2009 8:38 pm

Here is my summation,
looks good for .5C/century +/- .2C or 1F +/- .4F (50 years of data)
The next question is, DO WE WANT TO GO BACK TO THE DARK AGES FOR .5C ???????
DO WE? Do we want to stop burning coal/oil? DO WE?
links:
(significant digits) http://en.wikipedia.org/wiki/Significant_figures
calculations carried out to greater accuracy than that of the original data
http://en.wikipedia.org/wiki/Accuracy_and_precision
I am with Jeff, Jeff, and Ryan that this is not fraud per se, but deliberate cherry picking.
the next Question is, TO WHAT ENDS? What? GREED?

Mark
May 21, 2009 8:38 pm

How many times have these AGW scientists refused to release data, code, etc.? Does anybody know? Is it only a handful of times, or is it something like 10%?

juan
May 21, 2009 9:04 pm

Badly OT, but I’m not sure where to take it & I think others will be interested. I sent this to The News Hour:
Dear Sirs,
I have listened to The News Hour for many years, and have appreciated your efforts to get beyond the headlines. I have also had the sense that you seek some reasonable balance when dealing with disputed issues.
I must say that I was disappointed with your segment of last Tuesday, “Georgia’s Reliance on Coal….”. The effect of carbon dioxide on climate is very much a disputed issue, and Heidi Cullen is clearly a partisan in the dispute. It would be very appropriate to interview her and let her express her views; it is very inappropriate to introduce her as an objective ‘reporter.’
The piece itself is clearly tendentious, sometimes absurdly so. One example:
LAURA DEVENDORF, Sunbury, Georgia: “We’re worried about sea level rise, indeed. I think everyone on the coast is. You can just sit there and see the tides getting bigger.”
This statement, allowed to pass unchallenged should have been edited out early on. (Unless, of course you really believe a casual observer can notice a rise of 2.98 millimeters per year — see this link: http://tidesandcurrents.noaa.gov/sltrends/ :
Fort Pulaski, Georgia
8670870
The mean sea level trend is 2.98 mm/year with a 95% confidence interval of +/- 0.33 mm/year based on monthly mean sea level data from 1935 to 2006 which is equivalent to a change of 0.98 feet in 100 years.
Here’s wishing you well and hoping for a little less advocacy in the future.
Sincerely,
John Slayton

Ivan
May 21, 2009 9:22 pm

Jeff, Ryan,
I am very curious to know how you would describe Mann’s method and results in the Hockey Stick study if not fraud. If picking one data set deemed by its original providers as not a reliable temp proxy at all, “deriving” Northern Hemisphere temperature from it, and constructing an algorithm that produces a HS out of red noise is not fraud, I am actually not sure what the term means whatsoever.
If you say that Steig teamed up with Mann because of his statistical skills in constructing the hockey stick, that is for me a very strong indication of new fraud. Furthermore, if Doran et al. (2002), Chapman and Walsh (2007), and Monaghan et al. (2008), as well as your own calculations, all show cooling in the last 40 years, and only Steig et al. show warming, and Steig et al. refuse to give you the code and data, what can a neutral observer conclude from all these facts?

May 21, 2009 9:50 pm

Nice Mr George said (18:13:15) :
“It’s an even bet that maybe half of the people posting on WUWT anywhere in the world did so using something I had a big hand in the practical design of.”
I have always wanted to meet the man who designed my computer’s “on” button, thank you Mr George. Somehow I feel my life is now complete.

May 21, 2009 9:53 pm

O/T I’m afraid, but relevant and interesting too: the re-emergence of islands off the Indian coast (and the disappearance of others). And no, AGW has nothing to do with it.
Fascinating blog, from a blogger on the spot.
http://sunderbanislands.blogspot.com/
Please also see mickysmuses.blogspot.com

Jeremy
May 21, 2009 9:58 pm

Good one. Thanks WUWT. Amazing that Steig et al. could make such basic mistakes as to not thoroughly check the effect of allowing different degrees of freedom on their modeling results. Sloppy work does not even begin to describe how poorly they conducted this analysis. Absolutely stunning that Nature reviewers would not demand to see the convergence of results with a higher number of principal components on an iterative computer solution. Simply amazing how sloppy and shoddy researchers have become these days.

VG
May 21, 2009 10:14 pm

Anthony: just a suggestion (you may actually be doing this anyway). When you get a posting as significant as this one, is it possible to keep it “number 1” and still put new postings below it, with a notice that the new postings are below or beside as done now (that is, until it is decided this one has had its day, etc.)? Postings such as this may actually get some resolution from the persons involved, whereas if they are “moved on” they tend to be forgotten. Nothing against Dr. Steig, but he probably can’t wait for this one to disappear LOL

Steven G
May 21, 2009 10:47 pm

Very impressive work.
For these longer and more technical posts, I would recommend a short abstract at the beginning that summarizes the results in layman’s terms. It will make your results more accessible to a wider audience.

Flanagan
May 21, 2009 10:54 pm

Err, to come back on the post. It really really looks like any type of temperature reconstruction actually leads to warming. So where’s the problem, again?
Reply: Er…I claim exclusive usage of Err, Er, and its myriad derivations on this website. ~ charles the sometimes dismissive moderator

Andrew P
May 21, 2009 11:17 pm

Slightly OT but related:
Talking of snow-buried weather stations, can anyone explain why the Dome Argus Subsurface 10m temperature is always 2 or 3 degrees colder than the Sub-surface 3m? Or have they just got the sensor wires or graph labels mixed up?
http://www.aad.gov.au/weather/aws/dome-a/index.html

Geoff Sherrington
May 21, 2009 11:57 pm

Lovely work, but I have a problem with Fig. 5, which is pivotal (as are some of the others). A reasonable assumption would be that the accuracy of the satellite sensors changed with time. You have opted for a solution that is close to taking an average of high-to-low response. However, what happens if you assume that each sparkly new satellite was working accurately at launch, so that the Figure 5 bars should all be horizontal and at the level of the start of each?
Conceded, whatever choice is made cannot be substantiated by hindsight data, but is it not reasonable to at least examine the assumption that there was an initially correct signal followed by degradation for each satellite? As it happens, most start their careers at about +0.2 deg C (Fig. 4).
It’s a bit like how Anthony might assume that ground climate stations in the USA were initially sited more or less OK, then trees grew and suburbs encroached and air conditioners were installed…

jorgekafkazar
May 22, 2009 12:08 am

Manfred (12:24:16) : “doesn’t this sound familiar ?
‘http://www.telegraph.co.uk/scienceandtechnology/5345963/The-scientific-fraudster-who-dazzled-the-world-of-physics.html'”
OT and unrelated to Steig’s paper(s), but perhaps relevant to others in the field:
“As book author Gary Taubes, no stranger to ferreting out bad science, said in an interview here: ‘I used to joke with my friends in the physics community that if you want to cleanse your discipline of the worst scientists in it, every three or four years, you should have someone publish a bogus paper claiming to make some remarkable new discovery — infinite free energy or ESP, or something suitably cosmic like that. Then you have it published in a legitimate journal; it shows up on the front page of the New York Times, and within two months, every bad scientist in the field will be working on it.'”
Shhhhh, Gary Taubes!! What if the grant system then eventually causes 95% of all workers in entire branches of science to tie their work to the pseudoscience? What if a “consensus is arrived at?” What if the NYT refuses to report on work debunking the consensus? What, if anything, have Nature and Science learned from the Schön case? *
http://www.scientificblogging.com/science_20/jan_hendrik_sch%C3%B6n_world_class_physics_fraud_gets_last_laugh_whole_book_about_himself
* Some might say all they learned was not to let anybody get their hands on the original data or the computer code. Ha-ha! Just kidding!

Brendan H
May 22, 2009 2:50 am

jorgekafkazar “Let’s all retain the behavioral high ground…”
It’s a bit late for that. As far as accusations of fraud are concerned, the behavioural high ground was deserted long ago.
That said, the behavioural high ground could be re-occupied easily enough: climate sceptics could simply stop using the F word.
Or perhaps the WUWT moderators could place the F word in the same category as the D word and snip as appropriate.
Reply: Different contexts, sorry. Should you call another poster a fraud, then you may have some grounds for complaint. And remember, WUWT officially allows open season on Hansen and Mann. ~ charles the moderator

Julie
May 22, 2009 3:06 am

Flanagan – if you had taken the time to actually read the previous posts you would have read that any warming was prior to around 40 years ago. Since then it has been cooling. Trends are only as good as their start and end points, after all.

Mr Lynn
May 22, 2009 4:50 am

Gilbert (20:15:44) :

Gary P (10:35:33) :
A coworker sent me a copy of a report a month ago and I finally got around to reading it. It is from SRI Consulting Business Intelligence, “The US Consumer and Global Warming”. It is scary:
Link here:
http://www.sric-bi.com/Scan/SoC/SoC347.shtml

Password required.
/Mr Lynn

Mr Lynn
May 22, 2009 4:58 am

Kirls (18:16:54) :
. . . actually, the totus said “wrecked” so that’s what the potus said… “wrecked havoc on our climate”.

Wouldn’t surprise me. I heard the quote, but don’t remember his pronunciation. I copied it from Whitehouse.gov, so it’s possible they fixed the word.

Mike Bryant (19:30:11) :
Wreak may properly be pronounced either “wreck” or “reek”.

True, although saying “wrecked havoc” makes you sound like an idiot. Which in the case of the current occupant may not be far off.
/Mr Lynn