Steig et al – falsified

Smearing around data or paint - the results are similar

Jeff Id of The Air Vent emailed me today, inviting me to repost Ryan O’s latest work on statistical evaluation of the Steig et al “Antarctica is warming” paper (Nature, Jan 22, 2009). I thought long and hard about the title, especially after reviewing the previous work from Ryan O that we posted on WUWT, where the paper’s “robustness” was dealt a serious blow. After reading this latest statistical analysis, I think it is fair to conclude that the paper’s premise has been falsified.

Ryan O, in his conclusion, is a bit more gracious:

I am perfectly comfortable saying that Steig’s reconstruction is not a faithful representation of Antarctic temperatures over the past 50 years and that ours is closer to the mark.

Not only that, but Ryan O did a more complete job of the reconstruction than Steig et al did; he mentions this in comments at The Air Vent:

Steig only used 42 stations to perform his reconstruction. I used 98, since I included AWS stations.

The AWS stations have their problems, such as periods of warmer temperatures due to being buried in snow, but even when using this data, Ryan O’s analysis still comes out with less warming than the original Steig et al paper.

Antarctica as a whole is not warming; the Antarctic Peninsula is, and it is significantly removed climatically from the main continent.


It is my view that all Steig and Michael Mann have done with their application of RegEM to the station data is to smear the temperature around, much like an artist would smear red and white paint on a palette to get a new color, “pink”, and then paint the entire continent with it.

It is a lot like the “spin art” you see at the county fair. For example, look (at left) at the different tiles of colored temperature results for Antarctica you can get using Steig’s and Mann’s methodology. The only things that change are the starting parameters; the data remain the same, while the RegEM program smears them around based on those starting parameters. In the Steig et al case, the number of PCs and the regpar setting were both chosen by the authors to be 3. Choosing any different numbers yields an entirely different result.

So the premise of the Steig et al paper boils down to an arbitrary choice of values that “looked good”.

I hope that Ryan O will write a rebuttal letter to Nature, and/or publish a paper. It is the only way the Team will back down on this. – Anthony

UPDATE: To further clarify, Ryan O writes in comments:

“Overall, Antarctica has warmed from 1957-2006. There is no debating that point. (However, other than the Peninsula, the warming is not statistically significant.)

The important difference is the location of the warming and the magnitude of the warming. Steig’s paper has the warming concentrated on the Ross Ice Shelf – which would lead you to entirely different conclusions than having a minimum on the ice shelf. As far as magnitude goes, the warming for the continent is half of what was reported by Steig (0.12 vs. 0.06 Deg C/Decade).

Additionally, Steig shows whole-continent warming from 1967-2006; this analysis shows that most of the continent has cooled from 1967-2006. Given that the 1940’s were significantly warmer in the Antarctic than 1957 (the 1957-1960 period was unusually cold in the Antarctic), focusing on 1957 can give a somewhat slanted picture of the temperature trends in the continent.”

Ryan O adds later: “I should have said that all reconstructions yield a positive trend, though in most cases the trend for the continent is not statistically significant.”


Verification of the Improved High PC Reconstruction

Posted by Jeff Id on May 28, 2009

There is always something going on around here.

Up until now, all the work done on the Antarctic reconstruction has been done without statistical verification. We believed the new reconstructions were better based on correlation-vs-distance plots, visual comparison to station trends, and of course the better approximation of simple area-weighted reconstructions using surface station data.

The authors of Steig et al. have not been queried by myself or anyone else that I’m aware of regarding the quality of the higher-PC reconstructions, and the Team has largely ignored what has been going on over at the Air Vent. This post, however, demonstrates strongly improved verification statistics, which should send chills down their collective spines.

Ryan was generous in giving credit to others with his wording; he put together this amazing piece of work himself, using bits of code and knowledge gained from the numerous other posts by himself and others on the subject. He’s done a top-notch job again, through a Herculean effort in coding and debugging.

If you didn’t read Ryan’s other post, which led to this work, the link is:

Antarctic Coup de Grace

——————————————————————————–

Fig. 1: 1957-2006 trends; our reconstruction (left); Steig reconstruction (right)

HOW DO WE CHOOSE?

In order to choose which version of Antarctica is more likely to represent the real 50-year history, we need to calculate statistics with which to compare the reconstructions. For this post, we will examine r, r^2, R^2, RE, and CE for various conditions, including an analysis of the accuracy of the RegEM imputation. While Steig’s paper did provide verification statistics against the satellite data, the only verification statistics that related to ground data were provided by the restricted 15-predictor reconstruction, where the withheld ground stations were the verification target. We will perform a more comprehensive analysis of performance with respect to both RegEM and the ground data. Additionally, we will compare how our reconstruction performs against Steig’s reconstruction using the same methods used by Steig in his paper, along with a few more comprehensive tests.
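As a quick reference for these statistics: r is the ordinary correlation coefficient, while RE and CE are skill scores that compare the reconstruction’s squared error against a naive benchmark (the calibration-period mean for RE, the verification-period mean for CE; a negative value means the mean beats the reconstruction). The post’s actual script is in R; the sketch below is in Python, with function and argument names of my own, purely to pin down the definitions:

```python
import numpy as np

def verification_stats(obs_ver, est_ver, cal_mean):
    """Compute r, RE, and CE for a verification period.

    obs_ver  : observed anomalies in the verification period
    est_ver  : reconstructed anomalies for the same period
    cal_mean : mean of the observations over the calibration period
    """
    obs = np.asarray(obs_ver, dtype=float)
    est = np.asarray(est_ver, dtype=float)
    sse = np.sum((obs - est) ** 2)

    r = np.corrcoef(obs, est)[0, 1]
    # RE benchmarks the error against the calibration-period mean ...
    re = 1.0 - sse / np.sum((obs - cal_mean) ** 2)
    # ... while CE benchmarks against the verification-period mean,
    # which is why CE is the harder test to pass.
    ce = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)
    return r, re, ce
```

A perfect reconstruction scores 1.0 on all three; a reconstruction no better than the verification-period climatology scores CE of zero or below.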

To calculate what I would consider a healthy battery of verification statistics, we need to perform several reconstructions. The reason for this is to evaluate how well the method reproduces known data. Unless we know how well we can reproduce things we know, we cannot determine how likely the method is to estimate things we do not know. This requires that we perform a set of reconstructions by withholding certain information. The reconstructions we will perform are:

1. A 13-PC reconstruction using all manned and AWS stations, with ocean stations and Adelaide excluded. This is the main reconstruction.

2. An early calibration reconstruction using AVHRR data from 1982-1994.5. This will allow us to assess how well the method reproduces the withheld AVHRR data.

3. A late calibration reconstruction using AVHRR data from 1994.5-2006. Coupled with the early calibration, this provides comprehensive coverage of the entire satellite period.

4. A 13-PC reconstruction with the AWS stations withheld. The purpose of this reconstruction is to use the AWS stations as a verification target (i.e., see how well the reconstruction estimates the AWS data, and then compare the estimation against the real AWS data).

5. The same set of four reconstructions as above, but using 21 PCs in order to assess the stability of the reconstruction to the number of included PCs.

6. A 3-PC reconstruction using Steig’s station complement to demonstrate replication of his process.

7. A 3-PC reconstruction using the 13-PC reconstruction model frame as input to demonstrate the inability of Steig’s process to properly resolve the geographical locations of the trends and trend magnitudes.

Using the above set of reconstructions, we will then calculate the following sets of verification statistics:

1. Performance vs. the AVHRR data (early and late calibration reconstructions)

2. Performance vs. the AVHRR data (full reconstruction model frame)

3. Comparison of the spliced and model reconstruction vs. the actual ground station data.

4. Comparison of the restricted (AWS data withheld) reconstruction vs. the actual AWS data.

5. Comparison of the RegEM imputation model frame for the ground stations vs. the actual ground station data.

The provided script performs all of the required reconstructions and makes all of the required verification calculations. I will not present them all here (because there are a lot of them). I will present the ones that I feel are the most telling and important. In fact, I have not yet plotted all the different results myself. So for those of you with R, there are plenty of things to plot.

Without further ado, let’s take a look at a few of those things.

Fig. 2: Split reconstruction verification for Steig reconstruction

You may remember the figure above; it represents the split reconstruction verification statistics for Steig’s reconstruction. Note the significant regions of negative CE values (which indicate that a simple average of observed temperatures explains more variance than the reconstruction). Of particular note, the region where Steig reports the highest trend – West Antarctica and the Ross Ice Shelf – shows the worst performance.

Let’s compare to our reconstruction:

Fig. 3: Split reconstruction verification for 13-PC reconstruction

There are still a few areas of negative RE (too small to see in this panel) and some areas of negative CE. However, unlike the Steig reconstruction, ours performs well in most of West Antarctica, the Peninsula, and the Ross Ice Shelf. All values are significantly higher than for the Steig reconstruction, and we show much smaller regions with negative values.

As an aside, the r^2 plots are not corrected by the Monte Carlo analysis yet. However, as shown in the previous post concerning Steig’s verification statistics, the maximum r^2 values using AR(8) noise were only 0.019, which produces an indistinguishable change from Fig. 3.
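The Monte Carlo idea referenced above is to ask what r^2 one could get “for free” by correlating the target against pure autocorrelated noise; the post used AR(8) noise and found a maximum of only 0.019. A minimal sketch of that benchmark, in Python with illustrative AR coefficients (the actual analysis fits the AR(8) coefficients to the real station data, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def ar_noise(coefs, n, burn=500):
    """One realization of an AR(p) noise series, discarding a burn-in."""
    p = len(coefs)
    x = np.zeros(n + burn)
    for t in range(p, n + burn):
        # AR recursion: c1*x[t-1] + ... + cp*x[t-p] + white noise
        x[t] = np.dot(coefs, x[t - p:t][::-1]) + rng.standard_normal()
    return x[burn:]

def max_null_r2(target, coefs, trials=300):
    """Largest r^2 achieved against `target` by pure AR(p) noise over
    many trials: an estimate of the r^2 obtainable with no real skill."""
    target = np.asarray(target, dtype=float)
    best = 0.0
    for _ in range(trials):
        r = np.corrcoef(target, ar_noise(coefs, len(target)))[0, 1]
        best = max(best, r * r)
    return best
```

Any r^2 from the reconstruction well above this null maximum is then unlikely to be a noise artifact, which is the point of the aside: subtracting 0.019 changes Fig. 3 imperceptibly.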

Now that we know that our method provides a more faithful reproduction of the satellite data, it is time to see how faithfully our method reproduces the ground data. A simple way to compare ours against Steig’s is to look at scatterplots of reconstructed anomalies vs. ground station anomalies:


Fig. 4: 13-PC scatterplot (left); Steig reconstruction (right)

The 13-PC reconstruction shows significantly improved performance in predicting ground temperatures as compared to the Steig reconstruction. This improved performance is also reflected in plots of correlation coefficient:

Fig. 5: Correlation coefficient by geographical location

As noted earlier, the performance in the Peninsula, West Antarctica, and the Ross Ice Shelf is noticeably better for our reconstruction. Examining the plots this way provides a good indication of the geographical performance of the two reconstructions. Another way to look at this, one that allows a bit more precision, is to plot the results as bar plots, sorted by location:

Fig. 6: Correlation coefficients for the 13-PC reconstruction

Fig. 7: Correlation coefficients for the Steig reconstruction

The difference is quite striking.

While a good performance with respect to correlation is nice, this alone does not mean we have a “good” reconstruction. One common problem is over-fitting during the calibration period (where the calibration period is defined as the periods over which actual data is present). This leads to fantastic verification statistics during calibration, but results in poor performance outside of that period.

This is the purpose of the restricted reconstruction, where we withhold all AWS data. We then compare the reconstruction values against the actual AWS data. If our method resulted in overfitting (or is simply a poor method), our verification performance will be correspondingly poor.

Since Steig did not use AWS stations for performing his TIR reconstruction, this allows us to do an apples-to-apples comparison between the two methods. We can use the AWS stations as a verification target for both reconstructions. We can then compare which reconstruction results in better performance from the standpoint of being able to predict the actual AWS data. This is nice because it prevents us from later being accused of holding the reconstructions to different standards.

Note that since all of the AWS data was withheld, RE is undefined. RE uses the calibration period mean, and there is no calibration period for the AWS stations because we did the reconstruction without including any AWS data. We could run a split test like we did with the satellite data, but that would require additional calculations and is an easier test to pass regardless. Besides, the reason we have to run a split test with the satellite data is that we cannot withhold all of the satellite data and still be able to do the reconstruction. With the AWS stations, however, we are not subject to the same restriction.

Fig. 8: Correlation coefficient, verification period, AWS stations withheld

With that, I think we can safely put to bed the possibility that our calibration performance was due to overfitting. The verification performance is quite good, with the exception of one station in West Antarctica (Siple). Some of you may be curious about Siple, so I decided to plot both the original data and the reconstructed data. The problem with Siple is clearly the short record length and strange temperature swings (in excess of 10 degrees), which may indicate problems with the measurements:

Fig. 9: Siple station data

While we should still be curious about Siple, we also would not be unjustified in considering it an outlier given the performance of our reconstruction at the remainder of the station locations.

Leaving Siple for the moment, let’s take a look at how Steig’s reconstruction performs.

Fig. 10: Correlation coefficient, verification period, AWS stations withheld, Steig reconstruction

Not too bad – but not as good as ours. Curiously, Siple does not look like an outlier in Steig’s reconstruction. In its place, however, seems to be the entire Peninsula. Overall, the correlation coefficients for the Steig reconstruction are poorer than ours. This allows us to conclude that our reconstruction more accurately calculated the temperature in the locations where we withheld real data.

Along with correlation coefficient, the other statistic we need to look at is CE. Of the three statistics used by Steig – r, RE, and CE – CE is the most difficult statistic to pass. This is another reason why we are not concerned about lack of RE in this case: RE is an easier test to pass.

Fig. 11: CE, verification period, AWS stations withheld


Fig. 12: CE, verification period, AWS stations withheld, Steig reconstruction

The difference in performance between the two reconstructions is more apparent in the CE statistic. Steig’s reconstruction demonstrates negligible skill in the Peninsula, while our skill in the Peninsula is much higher. With the exception of Siple, our West Antarctic stations perform comparably. For the rest of the continent, our CE statistics are significantly higher than Steig’s – and we have no negative CE values.

So in a test of which method best reproduces withheld ground station data, our reconstruction shows significantly more skill than Steig’s.

The final set of statistics we will look at is the performance of RegEM. This is important because it will show us how faithful RegEM was to the original data. Steig did not perform any verification similar to this because PTTLS does not return the model frame. Unlike PTTLS, however, our version of RegEM (IPCA) does return the model frame. Since the model frame is accessible, it is incumbent upon us to look at it.

Note: In order to have a comparison, we will run a Steig-type reconstruction using RegEM IPCA.

There are two key statistics for this: r and R^2. R^2 is called “average explained variance”. It is a similar statistic to RE and CE with the difference being that the original data comes from the calibration period instead of the verification period. In the case of RegEM, all of the original data is technically “calibration period”, which is why we do not calculate RE and CE. Those are verification period statistics.
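Continuing the sketch from earlier (again in Python rather than the post’s R, with a function name of my own): R^2 as used here has the same form as CE, but it is benchmarked against the calibration-period observations, i.e. data RegEM actually saw.

```python
import numpy as np

def explained_variance(obs_cal, model_cal):
    """R^2 ('average explained variance') over the calibration period.

    Same formula as CE, but obs_cal/model_cal are calibration-period
    values, so this measures fidelity to data the imputation was fit on,
    not predictive skill on withheld data.
    """
    obs = np.asarray(obs_cal, dtype=float)
    est = np.asarray(model_cal, dtype=float)
    sse = np.sum((obs - est) ** 2)
    return 1.0 - sse / np.sum((obs - obs.mean()) ** 2)
```

A model frame that merely reproduces the calibration mean scores 0; values near 1 indicate the imputation tracks the observed variance closely.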

Let’s look at how RegEM IPCA performed for our reconstruction vs. Steig’s.

Fig. 13: Correlation coefficient between RegEM model frame and actual ground data

As you can see, RegEM performed quite faithfully with respect to the original data. This is a double-edged sword; if RegEM performs too faithfully, you end up with overfitting problems. However, we already checked for overfitting using our restricted reconstruction (with the AWS stations as the verification target).

While we used regpar settings of 9 (main reconstruction) and 6 (restricted reconstruction), Steig only used a regpar setting of 3. This leads us to question whether that setting was sufficient for RegEM to faithfully represent the original data. The only way to tell is to look, and the next frame shows us that Steig’s performance was significantly worse than ours.

Fig. 14: Correlation coefficient between RegEM model frame and actual ground data, Steig reconstruction

The performance using a regpar setting of 3 is noticeably worse, especially in East Antarctica. This would indicate that a setting of 3 does not provide enough degrees of freedom for the imputation to accurately represent the existing data. And if the imputation cannot accurately represent the existing data, then its representation of missing data is correspondingly suspect.

Another point I would like to note is the heavy weighting of Peninsula and open-ocean stations. Steig’s reconstruction relied on a total of 5 stations in West Antarctica, 4 of which are located on the eastern and southern edges of the continent at the Ross Ice Shelf. The resolution of West Antarctic trends based on the ground stations alone is rather poor.

Now that we’ve looked at correlation coefficients, let’s look at a more stringent statistic: average explained variance, or R^2.

Fig. 15: R^2 between RegEM model frame and actual ground data

Using a regpar setting of 9 also provides good R^2 statistics. The Peninsula is still a bit wanting. I checked the R^2 for the 21-PC reconstruction and the numbers were nearly identical. Without increasing the regpar setting and running the risk of overfitting, this seems to be about the limit of the imputation accuracy.

Fig. 16: R^2 between RegEM model frame and actual ground data, Steig reconstruction

Steig’s reconstruction, on the other hand, shows some fairly low values for R^2. The Peninsula is an odd mix of high and low values, West Antarctica and Ross are middling, while East Antarctica is poor overall. This fits with the qualitative observation that the Steig method seemed to spread the Peninsula warming all over the continent, including into East Antarctica – which by most other accounts is cooling slightly, not warming.

CONCLUSION

With the exception of the RegEM verification, all of the verification statistics listed above were performed exactly (split reconstruction) or analogously (restricted 15-predictor reconstruction) by Steig in the Nature paper. In all cases, our reconstruction shows significantly more skill than the Steig reconstruction. So if these are the metrics by which we are to judge this type of reconstruction, ours is objectively superior.

As before, I would qualify this by saying that not all of the errors and uncertainties have been quantified yet, so I’m not comfortable putting a ton of stock into any of these reconstructions. However, I am perfectly comfortable saying that Steig’s reconstruction is not a faithful representation of Antarctic temperatures over the past 50 years and that ours is closer to the mark.

NOTE ON THE SCRIPT

If you want to duplicate all of the figures above, I would recommend letting the entire script run. Be patient; it takes about 20 minutes. While this may seem long, remember that it is performing 11 different reconstructions and calculating a metric butt-ton of verification statistics.

There is a plotting section at the end that has examples of all of the above plots (to make it easier for you to understand how the custom plotting functions work) and it also contains indices and explanations for the reconstructions, variables, and statistics. As always, though, if you have any questions or find a feature that doesn’t work, let me know and I’ll do my best to help.

Lastly, once you get comfortable with the script, you can probably avoid running all the reconstructions. They take up a lot of memory, and if you let all of them run, you’ll have enough room for maybe 2 or 3 more before R refuses to comply. So if you want to play around with the different RegEM variants, numbers of included PCs, and regpar settings, I would recommend getting comfortable with the script and then loading up just the functions. That will give you plenty of memory for 15 or so reconstructions.

As a bonus, I included the reconstruction that takes the output of our reconstruction, uses it for input to the Steig method, and spits out this result:

Fig. 17: Steig reconstruction using the 13-PC reconstruction as input.

The name for the list containing all the information and trends is “r.3.test”.

—————————————————————-

Code is here: Recon.R



225 Comments
May 29, 2009 3:55 pm

George E. Smith (14:18:29) :
“””Nasif Nahle (13:21:18) :
As more often than never, I differ a bit with Leif Svalgaard’s opinion. Let’s assume the normalized TSI is 1367 W/m^2 (Modest. 1997) and the average of TSI measurements in 2008 deviates by -6.13 W/m^2 from the normalized TSI. I would say it is not significant if all deviations since 1700 AD were 1 W/m^2 above or below -6.13 W/m^2. However, if I see that the maximum deviation in the last 308 years has been -0.154 W/m^2, then I’m sure that the deviation of -6.13 W/m^2 is a significant deviation from the normalized TSI, whether it is only a measurement or not. Don’t you agree? “””
Wow! I have looked at all the various satellite measurements of TSI I can find going back about three solar cycles total, although not from any one satellite, and I don’t think I have ever seen any change of the order of -6.13.
All the curves I have seen have about a 1 W/m^2 p-p over the cycle and that is about all. So where did this -6.13 change come from ?
George

Heh! I took the data from here:
http://lasp.colorado.edu/cgi-bin/ion-p?page=input_data_for_tsi.ion
Complete database:
http://lasp.colorado.edu/cgi-bin/ion-p?ION__E1=PLOT%3Aplot_tsi_data.ion&ION__E2=PRINT%3Aprint_tsi_data.ion&ION__E3=BOTH%3Aplot_and_print_tsi_data.ion&START_DATE=25-Jan-2008&STOP_DATE=30-Dec-2008&TIME_SPAN=24&ERR=%27ERR%27&PRINT=Output+Data+as+Text
Measurements from channel 10 give another result.
Please, read the Special Note on TIM-TSI Data just below the inputs box. 🙂

DaveE
May 29, 2009 4:05 pm

George E. Smith (14:18:29) : & Ray (15:31:52) :
I think you’re missing the point Nasif Nahle (13:21:18) : was trying to make.
The -6.13 W/m^2 was, I think, a hypothetical deviation which would be insignificant if all previous deviations had been the same or thereabouts.
Given that previous deviations were much less, that would then make it significant.
I could of course be wrong. 😉
DaveE

Roger Knights
May 29, 2009 4:06 pm

Bated is short for “abated” (halted)–in other words, waiting with bated breath means waiting breathlessly, in great anticipation.
Here are names that have been suggested by others. Warmmongers. Hotheads. Greenshirts. The latter is the best, being the strongest, as well as being a nice riposte to “deniers.”

May 29, 2009 4:07 pm

Ray (15:31:52) :
What are the significant digits on the 1367 W/m^2? What is the error on that number? At the very least give a standard deviation with proper significant figures. In any case, you can’t get better accuracy in the measurement than what it can do, anything smaller than that is part of the noise.

I agree with your last assertion, which doesn’t mean that I agree with the remainder of your post. 1367 W/m^2 is the normalized output of TSI and it is applied to avoid incongruities in updates, insertions and deletions, even when the latter are involuntarily or voluntarily introduced into databases.

May 29, 2009 4:10 pm

Ray (15:31:52) :
Nasif Nahle (13:21:18) :
Let’s assume the normalized TSI is 1367 W/m^2 (Modest. 1997) and the average of TSI measurements in 2008 deviates by -6.13 W/m^2 from the normalized TSI.
It does not deviate. The absolute calibration of TSI is hard and different spacecraft instruments disagree. The best modern measurements, from TIM on SORCE, are about 5 W/m^2 below the ‘normal’ TSI of 1366 because of instrumental differences. The ‘relative error’ [that is, the error on any given instrument compared to earlier values from the same instrument] is almost a thousand times better, at the 0.007 W/m^2 level. We know the variation of the TSI value over time better than 0.1 W/m^2.

slowtofollow
May 29, 2009 4:18 pm

Smokey (14:00:17) “RC must be green with envy. Sucks to be them.”
I think it is possible they are content with the level of comments? –
http://www.realclimate.org/index.php/archives/2009/01/warm-reception-to-antarctic-warming-story/
and
http://www.realclimate.org/index.php/archives/2009/02/antarctic-warming-is-robust/
seem to be the relevant links for this post leading this thread and they appear to be no longer accepting new thoughts. Having said that the posts are worth reading for a recap.

May 29, 2009 4:20 pm

A. Smith… Channel 10, which is for TSI at Earth’s distance, does give a deviation of -5.99 W/m^2; that’s close to the deviation from the normalized TSI at channel 5 which is for TSI at 1 AU from the Sun. The difference between channel 5 and 10 is quite small (0.14 W/m^2).

hunter
May 29, 2009 4:21 pm

‘Climate conartists’?
‘Fear Mongers’?
‘Climate crazies’?
‘Climate Profiteers’?
‘Apocalyptic conartists’?
But whatever the name is, do not let them get away with their transparent con in changing the name from global warming to ‘climate change’.
They predicted ‘global warming’: AGW
They predicted dramatic dangerous terrible changes in the climate by *now*.
They do not get to redefine AGW after 20 years of being wrong about AGW in a transparently cynical attempt to re-frame the issue.

May 29, 2009 4:31 pm

Nasif Nahle (16:20:13) :
Channel 10, which is for TSI at Earth’s distance, does give a deviation of -5.99 W/m^2;
Get off that 5 W/m^2 difference, it is the ‘normalized’ TSI that is ‘wrong’ by that amount. There is no long-term variation of that magnitude. A single large sunspot can [very rarely] give several W/m^2 signal, but that does not show up in the, say, yearly average.

George E. Smith
May 29, 2009 4:35 pm

“”” Nasif Nahle (16:20:13) :
A. Smith… Channel 10, which is for TSI at Earth’s distance, does give a deviation of -5.99 W/m^2; that’s close to the deviation from the normalized TSI at channel 5 which is for TSI at 1 AU from the Sun. The difference between channel 5 and 10 is quite small (0.14 W/m^2). “””
Well Nasif, I’ll just have to take your word for it; I don’t get either channel 10 or channel 5 very well on my T&V, but you probably are referring to something else I am not privy to.
All I have to go on is some plots that purport to be data from about three or maybe four different satellites; that somehow turned up on a NOAA web site or some other place.
When I went to school, the solar constant was 1353 W/m^2, so I’m still getting used to the 1367 range of numbers. So where can one get access to these channels 5 and 10, and how many channels are there?
Oh I see that there’s some manipulations going on there; so channel 5, wherever it is to be found, is a mythical one AU outpost; I can see that that is a sensible idea; and channel 10, is the unexpurgated version in earth orbit.
I’m catching on; as they say, I may not do very good work, but I sure am slow.
Thanks for the heads up.
George

May 29, 2009 4:40 pm

Leif Svalgaard (16:10:43) :
It does not deviate. The absolute calibration of TSI is hard and different spacecraft instruments disagree. The best modern measurements, from TIM on SORCE, are about 5 W/m^2 below the ‘normal’ TSI of 1366 because of instrumental differences. The ‘relative error’ [that is, the error on any given instrument compared to earlier values from the same instrument] is almost a thousand times better, at the 0.007 W/m^2 level. We know the variation of the TSI value over time better than 0.1 W/m^2.

Nevertheless, the same as it happens with TSI measurements, the statistical reconstruction of fluctuations of temperature in Antarctica depends on the databases obtained from the measurements provided by the stations, the number of stations considered for the reconstruction, the quality of the instruments at those selected stations, the location of those stations, the period considered for measurements, etc., which has been exposed on this article. These are not just measurements, but also interpretations of the measurements.

George E. Smith
May 29, 2009 4:45 pm

“”” George E. Smith (14:29:36) :
“”” Stephen Brown (13:08:41) :
“Chris S (04:50:12) :
I await the publication of this after peer review with baited breath”
Erm … Wouldn’t the word “bated” be more appropriate? It’s breathing you are talking about, not fishing!
Your friendly nit-picking pedant. “””
Given that you have people here speaking several languages at once (including me), and the frequent appearance of typos; it is generally not considered Kosher to be too pedantic about incorrect spellings. Mis-usage that does not corrupt the scientific content, is generally regarded as uncouth to comment on. And in the current instance; it is a rather humerous Malapropism.
I once had a very nice Chinese young lady comment that a missing office colleague was out on fraternity leave. As the lady in question was very single; it was an appropriate observation.
George
Reply: Oy, nitpicking about nitpicking, and btw, you misspelled humorous. ~ charles the sometimes anti-semantic moderator. “””
Oh come now Charles ! I know you are much faster on your feet than that.
Or don’t you have a funnybone ? If not what would you call it if you had one ?
George
Reply: D’oh! I was severely hungover. There was a contest in SF last night and I knew 3 of the 6 contestants ~ charles the “that’s all you get” mysterious moderator

George E. Smith
May 29, 2009 4:47 pm

And if you are really picky; I spelled that rong too.
George

May 29, 2009 4:56 pm

Leif Svalgaard (16:31:10) :
Get off that 5 W/m^2 difference, it is the ‘normalized’ TSI that is ‘wrong’ by that amount. There is no long-term variation of that magnitude. A single large sunspot can [very rarely] give several W/m^2 signal, but that does not show up in the, say, yearly average.

I know there is not long term variation such as -5 W/m^2 from the normalized TSI. I just was trying to point to the nature of Steig’s error because you said that it was only a measurement, when the interpretation of that measurement was involved.
The maximum deviation from the normalized TSI from your database is -0.154 W/m^2, for example.
Sorry if some of my responses are delayed. I’m having problems with my connection…

Owen Hughes
May 29, 2009 5:09 pm

Name for the AGW hysterics and con artists? Here are some candidates:
Warmageddonists
Warmongers
See-oh-clueless (pun on CO2)
Carboneheads
….Just some practice swings. I’ll try again later. I think it is definitely worth our while to try to re-frame the “narrative.” These people have had a natural advantage here –they are almost wholly concerned with using “science” as an input to produce desired social consequences (in which, typically, they will issue edicts and win fame, love and money). Because they see science as instrument, not an end in itself (in which the integrity of the process is paramount), they begin with the conclusion they want, and then cherrypick, backfill, torture, hide, spin and otherwise make up whatever data and models they need to get the job done.
So anyone who is doing real science is flummoxed by such behavior and is easily outflanked or disgusted. It takes a fundamental re-set of values and strategies to deal effectively with them over the “long march” they typically use (How many years has the AGW story been building? Twenty or so? That’s the bulk of many academic careers, and has given their network the time to spread and set deep roots of tenure, obligation, funding power, tireless publication and symposium-hosting).
So they have had a real edge, and have faced low costs in behaving as they do. We need to raise the cost. Ridicule is a good start.

John W.
May 29, 2009 5:10 pm

Jeff, Ryan, Anthony, et al.
I stumbled across the AGW response to everything you’ve been doing.
http://www.theonion.com/content/opinion/oh_no_its_making_well_reasoned?utm_source=b-section

Steptoe Fan
May 29, 2009 5:29 pm

Looks like the fools at the U of W are on a “team”-driven vendetta that simply will not abate! Here, with the gleeful help of Seattle Times publicity, is the latest dish they offer up ….
The tool, called ClimateWizard, allows natural-resource managers, lawmakers, scientists and residents to see historical temperature and precipitation data in their local areas. They also can view projections of how these factors might change as the Earth continues to warm.
Scientists say this tool is the first of its kind to present vast amounts of climate-change information to the public in a way that’s easy to use and understand. The Intergovernmental Panel on Climate Change (IPCC) data used in this tool are already available but often difficult to access and cumbersome to sort through.
“We needed a tool that could bring that data to the desktops of people who can use it,” said Jon Hoekstra, climate-change-program director at the Nature Conservancy, which funded this project. “The power of visualization is extraordinary.”
ClimateWizard is a joint effort among the Nature Conservancy, the University of Washington and the University of Southern Mississippi. It lets users zoom in on specific cities or regions to track temperature and precipitation changes. Maps with color-coded information show where changes are likely to happen, and how severe they could be.
I’m tending to think more and more like the previous poster, offering rational current robust science to counter this stupidity is getting nowhere with regard to the wish to educate the masses, who could then influence our government.

jorgekafkazar
May 29, 2009 5:30 pm

Leif Svalgaard (16:10:43) : “[TSI] does not deviate…The best modern measurements from TIM on SORCE is about 5 W.m^2 below the ‘normal’ TSI of 1366 because of instrumental differences. The ‘relative error’ [that is the error on any given instrument compared to earlier values from the same instrument] is almost a thousand times better, at the 0.007 W/m^2 level. We know the variation of the TSI value over time better than 0.1 W/m^2.”
I hope Ray, Nasif, and George are keeping in mind that TSI is defined based on the average terrestrial distance from the Sun. The Earth is not there, most of the time; consequently, insolation varies about +7.0/-6.3 percent from perihelion to aphelion. This makes variations in TSI minuscule by comparison, if they weren’t already. Although this should average out over a year, differences between the hemispheres (albedos, ocean mass, etc.) make it almost certain that it doesn’t, and probably by a lot more than the putative changes in TSI cited above.
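[Editorial sketch, not from the thread: the perihelion/aphelion swing in insolation follows from the inverse-square law. The numbers below (1 AU semi-major axis, eccentricity e ≈ 0.0167, nominal TSI of 1366 W/m^2 at 1 AU) are standard round figures, assumed for illustration.]

```python
# How much top-of-atmosphere insolation varies over Earth's orbit
# purely from the inverse-square dependence on Sun-Earth distance.
TSI_1AU = 1366.0        # W/m^2, defined at the mean Earth-Sun distance
e = 0.0167              # orbital eccentricity of Earth

r_perihelion = 1.0 - e  # AU
r_aphelion = 1.0 + e    # AU

# Inverse-square scaling of insolation with distance
s_peri = TSI_1AU / r_perihelion**2
s_aph = TSI_1AU / r_aphelion**2

print(f"perihelion: {s_peri:.1f} W/m^2 ({100 * (s_peri / TSI_1AU - 1):+.1f}%)")
print(f"aphelion:   {s_aph:.1f} W/m^2 ({100 * (s_aph / TSI_1AU - 1):+.1f}%)")
print(f"peak-to-peak swing: {s_peri - s_aph:.1f} W/m^2")
```

The swing works out to roughly 90 W/m^2 peak-to-peak (about ±3.4%, or ~7% perihelion-to-aphelion), dwarfing the fractions of a W/m^2 discussed for TSI itself.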

D. King
May 29, 2009 6:05 pm

I feel like I’m beating a dead horse here, but many sensors are calibrated exterior to the platform. The cal results are then uplinked to the sensor. This will affect any future data collection until new cal results are uplinked. So, do you trust the people operating the sensor? Not the people who own it, the ones who operate it. Additionally, there is a record of all uplinked data.
Ryan O,
A brilliant piece of work.

Mike Bryant
May 29, 2009 6:12 pm

HotEarthers, IceLovers, HotEarthWhiners, ThermoStatists, BurntEarthers, WarmEarthScreamers, EarthFixers, HeatChaosMob, EarthPessimists, ThermoCarboPhobes, EarthCrispers, EarthMeltDowners, HotEarthDearthers, HotDoomers… Really like that Puritans from above…

May 29, 2009 6:14 pm

jorgekafkazar (17:30:32) :
I hope Ray, Nasif, and George are keeping in mind that TSI is defined based on the average terrestrial distance from the Sun. The Earth is not there, most of the time; consequently, insolation varies about +7.0/-6.3 percent from perihelion to aphelion. This makes variations in TSI minuscule by comparison, if they weren’t already. Although this should average out over a year, differences between the hemispheres (albedos, ocean mass, etc.) make it almost certain that it doesn’t, and probably by a lot more than the putative changes in TSI cited above.
Which is only valid for the normalized TSI figure, not for the measurements made by multi-instrumental satellites and ground-based observations.

May 29, 2009 6:36 pm

I wish to congratulate Ryan O (and Anthony) for presenting some ‘real’ science. What do I mean by that?
Ryan’s work has been in the open, the methodology, the script, etc. are there on the table with no hidden information. THAT is science or at least a very crucial part of the scientific process. Put all of your cards on the table where others can pick through them to see if the deck was stacked or the cards marked.
A large amount of the ‘confusion’ related to current climate science and understanding of the climate is created not by nature, not by our ability to understand, but rather by the secrecy with which some shroud their research and studies. Most of us seek only truth, only reality, regardless of what they be in reference to our opinion. The truth need not hide in the shadows, the truth need not employ attempts to conceal or distort. Ryan O has done an excellent job of seeking the truth. That does not mean he is necessarily right, but that he has provided an open and unhindered path.
I, again, congratulate Ryan O for his efforts, his ethic, and his integrity.
If Ryan will grant me some quarter I wish to venture in the direction of (part of) Anthony’s work and a related factor.
I recently was pressed into a situation where a defense of Anthony and the Surfacestations.org project came under attack. The message board where this occurred handles 8 – 10,000 posts per day. The particular thread in 10 days accumulated 454 posts (not all related to surface stations…. in fact few). However, the thread during those ten days has so far been viewed over 4,500 times (I cannot {easily} say when the viewing occurred or which posts were the point of focus).
What I can say is that every challenge presented was effectively addressed. The means for doing so came not from my personal knowledge but rather from the information that Anthony provided on the site and his openness regarding how the project is operated and cross-checked to ensure quality data / information.
Good science is like an honest deck of cards…. all 52 are on the table. Ryan, Anthony, Jeff, and a few others have the integrity to play with a full deck. They seek truth, nothing more, nothing less.
Again, I thank you.

May 29, 2009 6:51 pm

DaveE (16:05:39) :
George E. Smith (14:18:29) : & Ray (15:31:52) :
I think you’re missing the point Nasif Nahle (13:21:18) : was trying to make.
The -6.13 W/m^2 was, I think, a hypothetical deviation which would be insignificant if all previous deviations had been the same or thereabouts.
Given that previous deviations were much less, that would then make it significant.
I could of course be wrong. 😉
DaveE

Yes, you’re correct! It’s exactly my idea.

Keith Minto
May 29, 2009 8:03 pm

Owen Hughes (17:09:48), with “Ridicule is a good start”, and John W. (17:10:48), with that very clever and funny link, made me think that we need PR expertise to craft an apt title. It needs to include the term that they throw at us, ‘deniers’. ‘Natural Climate Change Deniers’ has been used before; it is clumsy but on the right track.
Any true believers from Advertising Agencies out there willing to help?

Keith Minto
May 29, 2009 8:18 pm

Here is an image of the location of the Antarctic volcanoes.
http://icecap.us/images/uploads/AntarcticVolcanoes2.jpg
I would really like to find an SST image of the Circumpolar Current.
