
Jeff Id of The Air Vent emailed me today inviting me to repost Ryan O’s latest work on the statistical evaluation of the Steig et al. “Antarctica is warming” paper (Nature, Jan 22, 2009). I thought long and hard about the title, especially after reviewing the previous work from Ryan O that we posted on WUWT, where the paper was dealt a serious blow to its “robustness”. After reading this latest statistical analysis, I think it is fair to conclude that the paper’s premise has been falsified.
Ryan O, in his conclusion, is a bit more gracious:
I am perfectly comfortable saying that Steig’s reconstruction is not a faithful representation of Antarctic temperatures over the past 50 years and that ours is closer to the mark.
Not only that, Ryan O did a more complete job of the reconstruction than Steig et al. did; he mentions this in comments at The Air Vent:
Steig only used 42 stations to perform his reconstruction. I used 98, since I included AWS stations.
The AWS stations have their problems, such as periods of warmer temperatures due to being buried in snow, but even when using this data, Ryan O’s analysis still comes out with less warming than the original Steig et al. paper.
Antarctica as a whole is not warming; the Antarctic Peninsula is, and it is significantly removed climatically from the main continent.

It is my view that all Steig and Michael Mann have done with their application of RegEM to the station data is to smear the temperature around, much as an artist would smear red and white paint on a palette board to get a new color, “pink”, and then paint the entire continent with it.
It is a lot like the “spin art” you see at the county fair. For example, look (at left) at the different tiles of colored temperature results for Antarctica you can get using Steig’s and Mann’s methodology. The only thing that changes is the starting parameters; the data remain the same, while the RegEM program smears them around based on those starting parameters. In the Steig et al. case, the number of PCs and regpar were both chosen by the authors to be 3. Choosing any different numbers yields an entirely different result.
So the premise of the Steig et al. paper boils down to an arbitrary choice of values that “looked good”.
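The sensitivity to the retained-PC count can be seen in miniature with any truncated-SVD reconstruction. This toy R sketch (synthetic data and variable names of my own, not the actual AVHRR set or Ryan’s code) shows that changing only the number of retained PCs changes the reconstructed field:

```r
# Toy illustration: rebuild a field from its first k singular vectors;
# the result changes with k alone. Synthetic data, purely illustrative.
set.seed(7)
X <- matrix(rnorm(20 * 10), nrow = 20, ncol = 10)

rank_k <- function(X, k) {
  s <- svd(X)
  s$u[, 1:k, drop = FALSE] %*% diag(s$d[1:k], nrow = k) %*% t(s$v[, 1:k, drop = FALSE])
}

X3 <- rank_k(X, 3)   # a "3-PC" view of the field
X7 <- rank_k(X, 7)   # the same data with more PCs retained
norm(X - X3, "F")    # truncation error at k = 3
norm(X - X7, "F")    # smaller truncation error at k = 7
```

The two rank-reduced fields differ everywhere, and the low-k version discards real structure; that is the heart of the objection to an arbitrary choice of 3.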
I hope that Ryan O will write a rebuttal letter to Nature, and/or publish a paper. It is the only way the Team will back down on this. – Anthony
UPDATE: To further clarify, Ryan O writes in comments:
“Overall, Antarctica has warmed from 1957-2006. There is no debating that point. (However, other than the Peninsula, the warming is not statistically significant.)
The important difference is the location of the warming and the magnitude of the warming. Steig’s paper has the warming concentrated on the Ross Ice Shelf – which would lead you to entirely different conclusions than having a minimum on the ice shelf. As far as magnitude goes, the warming for the continent is half of what was reported by Steig (0.12 vs. 0.06 Deg C/Decade).
Additionally, Steig shows whole-continent warming from 1967-2006; this analysis shows that most of the continent has cooled from 1967-2006. Given that the 1940’s were significantly warmer in the Antarctic than 1957 (the 1957-1960 period was unusually cold in the Antarctic), focusing on 1957 can give a somewhat slanted picture of the temperature trends in the continent.”
Ryan O adds later: “I should have said that all reconstructions yield a positive trend, though in most cases the trend for the continent is not statistically significant.”
Verification of the Improved High PC Reconstruction
Posted by Jeff Id on May 28, 2009
There is always something going on around here.
Up until now, all the work which has been done on the Antarctic reconstruction has been done without statistical verification. We believed the higher-PC reconstructions were better based on correlation-versus-distance plots, visual comparison to station trends, and, of course, the closer approximation to simple area-weighted reconstructions using surface station data.
The authors of Steig et al. have not been queried, by myself or anyone else that I’m aware of, regarding the quality of the higher-PC reconstructions, and the team has largely ignored what has been going on over at the Air Vent. This post, however, demonstrates strongly improved verification statistics, which should send chills down their collective backs.
Ryan was generous in giving credit to others with his wording, but he has put together this amazing piece of work himself, using bits of code and knowledge gained from the numerous other posts by himself and others on the subject. He’s done a top-notch job again, through a Herculean effort in code and debugging.
If you didn’t read Ryan’s other post, which led to this work, the link is:
——————————————————————————–
HOW DO WE CHOOSE?
In order to choose which version of Antarctica is more likely to represent the real 50-year history, we need to calculate statistics with which to compare the reconstructions. For this post, we will examine r, r^2, R^2, RE, and CE for various conditions, including an analysis of the accuracy of the RegEM imputation. While Steig’s paper did provide verification statistics against the satellite data, the only verification statistics that related to ground data were provided by the restricted 15-predictor reconstruction, where the withheld ground stations were the verification target. We will perform a more comprehensive analysis of performance with respect to both RegEM and the ground data. Additionally, we will compare how our reconstruction performs against Steig’s reconstruction using the same methods used by Steig in his paper, along with a few more comprehensive tests.
To calculate what I would consider a healthy battery of verification statistics, we need to perform several reconstructions. The reason for this is to evaluate how well the method reproduces known data. Unless we know how well we can reproduce things we know, we cannot determine how likely the method is to estimate things we do not know. This requires that we perform a set of reconstructions by withholding certain information. The reconstructions we will perform are:
1. A 13-PC reconstruction using all manned and AWS stations, with ocean stations and Adelaide excluded. This is the main reconstruction.
2. An early calibration reconstruction using AVHRR data from 1982-1994.5. This will allow us to assess how well the method reproduces the withheld AVHRR data.
3. A late calibration reconstruction using AVHRR data from 1994.5-2006. Coupled with the early calibration, this provides comprehensive coverage of the entire satellite period.
4. A 13-PC reconstruction with the AWS stations withheld. The purpose of this reconstruction is to use the AWS stations as a verification target (i.e., see how well the reconstruction estimates the AWS data, and then compare the estimation against the real AWS data).
5. The same set of four reconstructions as above, but using 21 PCs in order to assess the stability of the reconstruction to included PCs.
6. A 3-PC reconstruction using Steig’s station complement to demonstrate replication of his process.
7. A 3-PC reconstruction using the 13-PC reconstruction model frame as input to demonstrate the inability of Steig’s process to properly resolve the geographical locations of the trends and trend magnitudes.
–
Using the above set of reconstructions, we will then calculate the following sets of verification statistics:
–
1. Performance vs. the AVHRR data (early and late calibration reconstructions)
2. Performance vs. the AVHRR data (full reconstruction model frame)
3. Comparison of the spliced and model reconstruction vs. the actual ground station data.
4. Comparison of the restricted (AWS data withheld) reconstruction vs. the actual AWS data.
5. Comparison of the RegEM imputation model frame for the ground stations vs. the actual ground station data.
–
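For reference, the core statistics in that list are straightforward to compute. This minimal R sketch (function and variable names are mine, not from Recon.R) shows the usual definitions of RE, CE, and r^2; note that RE benchmarks against the calibration-period mean, which is why RE is undefined when all of a station’s data are withheld:

```r
# Standard verification statistics, sketched. Names are illustrative.
# RE uses the calibration-period mean as its benchmark; CE uses the
# verification-period mean, which makes CE the harder test to pass.
verif_stats <- function(obs, est, cal_mean, verif_mean) {
  sse <- sum((obs - est)^2)
  re  <- 1 - sse / sum((obs - cal_mean)^2)
  ce  <- 1 - sse / sum((obs - verif_mean)^2)
  r2  <- cor(obs, est)^2
  c(RE = re, CE = ce, r2 = r2)
}

# Toy usage: a reconstruction that tracks the observations closely
obs <- c(0.1, -0.2, 0.3, 0.0, 0.2)
est <- c(0.12, -0.18, 0.25, 0.02, 0.19)
verif_stats(obs, est, cal_mean = 0, verif_mean = mean(obs))
```

A positive RE or CE means the reconstruction beats the corresponding mean as a predictor; negative values mean a flat average of the observations would have done better.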
The provided script performs all of the required reconstructions and makes all of the required verification calculations. I will not present them all here (because there are a lot of them). I will present the ones that I feel are the most telling and important. In fact, I have not yet plotted all the different results myself. So for those of you with R, there are plenty of things to plot.
Without further ado, let’s take a look at a few of those things.
You may remember the figure above; it represents the split reconstruction verification statistics for Steig’s reconstruction. Note the significant regions of negative CE values (which indicate that a simple average of observed temperatures explains more variance than the reconstruction). Of particular note, the region where Steig reports the highest trend – West Antarctica and the Ross Ice Shelf – shows the worst performance.
Let’s compare to our reconstruction:
There still are a few areas of negative RE (too small to see in this panel) and some areas of negative CE. However, unlike the Steig reconstruction, ours performs well in most of West Antarctica, the Peninsula, and the Ross Ice Shelf. All values are significantly higher than the Steig reconstruction, and we show much smaller regions with negative values.
As an aside, the r^2 plots are not corrected by the Monte Carlo analysis yet. However, as shown in the previous post concerning Steig’s verification statistics, the maximum r^2 values using AR(8) noise were only 0.019, which produces an indistinguishable change from Fig. 3.
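The Monte Carlo correction mentioned above amounts to asking what r^2 two unrelated but autocorrelated series produce by chance. A rough R sketch of that null benchmark (the AR coefficients here are illustrative guesses, not the values fitted in the actual analysis):

```r
# Null benchmark: r^2 between two independent AR(8) series.
# Coefficients are illustrative, not fitted from the Antarctic data.
set.seed(1)
ar8 <- c(0.3, 0.1, 0.05, 0.05, 0.02, 0.02, 0.01, 0.01)
null_r2 <- replicate(1000, {
  a <- arima.sim(model = list(ar = ar8), n = 300)
  b <- arima.sim(model = list(ar = ar8), n = 300)
  cor(a, b)^2
})
quantile(null_r2, 0.975)   # r^2 below this level is indistinguishable from noise
```

Because the chance-alone r^2 is tiny (the post quotes a maximum of 0.019), subtracting it off changes the plotted fields imperceptibly.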
Now that we know that our method provides a more faithful reproduction of the satellite data, it is time to see how faithfully our method reproduces the ground data. A simple way to compare ours against Steig’s is to look at scatterplots of reconstructed anomalies vs. ground station anomalies:
The 13-PC reconstruction shows significantly improved performance in predicting ground temperatures as compared to the Steig reconstruction. This improved performance is also reflected in plots of correlation coefficient:
As noted earlier, the performance in the Peninsula, West Antarctica, and the Ross Ice Shelf is noticeably better for our reconstruction. Examining the plots this way provides a good indication of the geographical performance of the two reconstructions. Another way to look at this – one that allows a bit more precision – is to plot the results as bar plots, sorted by location:
The difference is quite striking.
While a good performance with respect to correlation is nice, this alone does not mean we have a “good” reconstruction. One common problem is over-fitting during the calibration period (where the calibration period is defined as the periods over which actual data is present). This leads to fantastic verification statistics during calibration, but results in poor performance outside of that period.
This is the purpose of the restricted reconstruction, where we withhold all AWS data. We then compare the reconstruction values against the actual AWS data. If our method resulted in overfitting (or is simply a poor method), our verification performance will be correspondingly poor.
Since Steig did not use AWS stations for performing his TIR reconstruction, this allows us to do an apples-to-apples comparison between the two methods. We can use the AWS stations as a verification target for both reconstructions. We can then compare which reconstruction results in better performance from the standpoint of being able to predict the actual AWS data. This is nice because it prevents us from later being accused of holding the reconstructions to different standards.
Note that since all of the AWS data was withheld, RE is undefined. RE uses the calibration period mean, and there is no calibration period for the AWS stations because we did the reconstruction without including any AWS data. We could run a split test like we did with the satellite data, but that would require additional calculations and is an easier test to pass regardless. Besides, the reason we have to run a split test with the satellite data is that we cannot withhold all of the satellite data and still be able to do the reconstruction. With the AWS stations, however, we are not subject to the same restriction.
With that, I think we can safely put to bed the possibility that our calibration performance was due to overfitting. The verification performance is quite good, with the exception of one station in West Antarctica (Siple). Some of you may be curious about Siple, so I decided to plot both the original data and the reconstructed data. The problem with Siple is clearly the short record length and strange temperature swings (in excess of 10 degrees), which may indicate problems with the measurements:
While we should still be curious about Siple, we also would not be unjustified in considering it an outlier given the performance of our reconstruction at the remainder of the station locations.
Leaving Siple for the moment, let’s take a look at how Steig’s reconstruction performs.
Not too bad – but not as good as ours. Curiously, Siple does not look like an outlier in Steig’s reconstruction. In its place, however, seems to be the entire Peninsula. Overall, the correlation coefficients for the Steig reconstruction are poorer than ours. This allows us to conclude that our reconstruction more accurately calculated the temperature in the locations where we withheld real data.
Along with correlation coefficient, the other statistic we need to look at is CE. Of the three statistics used by Steig – r, RE, and CE – CE is the most difficult statistic to pass. This is another reason why we are not concerned about lack of RE in this case: RE is an easier test to pass.
The difference in performance between the two reconstructions is more apparent in the CE statistic. Steig’s reconstruction demonstrates negligible skill in the Peninsula, while our skill in the Peninsula is much higher. With the exception of Siple, our West Antarctic stations perform comparably. For the rest of the continent, our CE statistics are significantly higher than Steig’s – and we have no negative CE values.
So in a test of which method best reproduces withheld ground station data, our reconstruction shows significantly more skill than Steig’s.
The final set of statistics we will look at is the performance of RegEM. This is important because it will show us how faithful RegEM was to the original data. Steig did not perform any verification similar to this because PTTLS does not return the model frame. Unlike PTTLS, however, our version of RegEM (IPCA) does return the model frame. Since the model frame is accessible, it is incumbent upon us to look at it.
Note: In order to have a comparison, we will run a Steig-type reconstruction using RegEM IPCA.
There are two key statistics for this: r and R^2. R^2 is called “average explained variance”. It is a similar statistic to RE and CE with the difference being that the original data comes from the calibration period instead of the verification period. In the case of RegEM, all of the original data is technically “calibration period”, which is why we do not calculate RE and CE. Those are verification period statistics.
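The distinction between r and R^2 matters because correlation is blind to bias and amplitude errors, while explained variance punishes them. A small R sketch (names mine) makes the point:

```r
# r^2 vs. "average explained variance" R^2 (names are mine).
# R^2 penalizes bias and amplitude errors; correlation does not.
explained_var <- function(obs, est) {
  1 - sum((obs - est)^2) / sum((obs - mean(obs))^2)
}

obs    <- c(-1, -0.5, 0, 0.5, 1)
biased <- obs + 2          # perfectly correlated, but offset by 2
cor(obs, biased)^2         # r^2 = 1: the offset is invisible to correlation
explained_var(obs, biased) # R^2 is strongly negative: the bias is punished
```

This is why a RegEM model frame can show high r against the ground data and still fail on R^2 if the imputation distorts means or amplitudes.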
Let’s look at how RegEM IPCA performed for our reconstruction vs. Steig’s.
As you can see, RegEM performed quite faithfully with respect to the original data. This is a double-edged sword; if RegEM performs too faithfully, you end up with overfitting problems. However, we already checked for overfitting using our restricted reconstruction (with the AWS stations as the verification target).
While we had used regpar settings of 9 (main reconstruction) and 6 (restricted reconstruction), Steig only used a regpar setting of 3. This leads us to question whether that setting was sufficient for RegEM to be able to faithfully represent the original data. The only way to tell is to look, and the next frame shows us that Steig’s performance was significantly less than ours.
Fig. 14: Correlation coefficient between RegEM model frame and actual ground data, Steig reconstruction
The performance using a regpar setting of 3 is noticeably worse, especially in East Antarctica. This would indicate that a setting of 3 does not provide enough degrees of freedom for the imputation to accurately represent the existing data. And if the imputation cannot accurately represent the existing data, then its representation of missing data is correspondingly suspect.
Another point I would like to note is the heavy weighting of Peninsula and open-ocean stations. Steig’s reconstruction relied on a total of 5 stations in West Antarctica, 4 of which are located on the eastern and southern edges of the continent at the Ross Ice Shelf. The resolution of West Antarctic trends based on the ground stations alone is rather poor.
Now that we’ve looked at correlation coefficients, let’s look at a more stringent statistic: average explained variance, or R^2.
Using a regpar setting of 9 also provides good R^2 statistics. The Peninsula is still a bit wanting. I checked the R^2 for the 21-PC reconstruction and the numbers were nearly identical. Without increasing the regpar setting and running the risk of overfitting, this seems to be about the limit of the imputation accuracy.
Steig’s reconstruction, on the other hand, shows some fairly low values for R^2. The Peninsula is an odd mix of high and low values, West Antarctica and Ross are middling, while East Antarctica is poor overall. This fits with the qualitative observation that the Steig method seemed to spread the Peninsula warming all over the continent, including into East Antarctica – which by most other accounts is cooling slightly, not warming.
CONCLUSION
With the exception of the RegEM verification, all of the verification statistics listed above were performed exactly (split reconstruction) or analogously (restricted 15 predictor reconstruction) by Steig in the Nature paper. In all cases, our reconstruction shows significantly more skill than the Steig reconstruction. So if these are the metrics by which we are to judge this type of reconstruction, ours is objectively superior.
As before, I would qualify this by saying that not all of the errors and uncertainties have been quantified yet, so I’m not comfortable putting a ton of stock into any of these reconstructions. However, I am perfectly comfortable saying that Steig’s reconstruction is not a faithful representation of Antarctic temperatures over the past 50 years and that ours is closer to the mark.
NOTE ON THE SCRIPT
If you want to duplicate all of the figures above, I would recommend letting the entire script run. Be patient; it takes about 20 minutes. While this may seem long, remember that it is performing 11 different reconstructions and calculating a metric butt-ton of verification statistics.
There is a plotting section at the end that has examples of all of the above plots (to make it easier for you to understand how the custom plotting functions work) and it also contains indices and explanations for the reconstructions, variables, and statistics. As always, though, if you have any questions or find a feature that doesn’t work, let me know and I’ll do my best to help.
Lastly, once you get comfortable with the script, you can probably avoid running all the reconstructions. They take up a lot of memory, and if you let all of them run, you’ll have enough room for maybe 2 or 3 more before R refuses to comply. So if you want to play around with the different RegEM variants, numbers of included PCs, and regpar settings, I would recommend getting comfortable with the script and then loading up just the functions. That will give you plenty of memory for 15 or so reconstructions.
As a bonus, I included the reconstruction that takes the output of our reconstruction, uses it for input to the Steig method, and spits out this result:
The name for the list containing all the information and trends is “r.3.test”.
—————————————————————-
Code is here Recon.R
I like AL GORES PROFITS OF DOOM
Mike D. (01:38:00) :
Warming is neither confirmed nor rejected.
You are correct on several points, IMHO. First, there appears to be a slight warming which is robust to reasonable methods. Area-weighted surface calcs show about 0.05 C/decade. Personally, I consider this to be the best accuracy we can achieve so far.
This trend is upward for a short section, based on very few stations prior to 1967, which places them at the sensitive endpoint of a least-squares fit. After that, the Antarctic has cooled, and I’m told that if the start point is slightly before 1957, the Antarctic was warmer than today, which would probably change the trend again if we started there.
As far as agenda goes, I don’t care if the Antarctic is warming, as I don’t have any climate computer models I’ve devoted my life to designing. The current models ‘apparently’ predict significant warming in the Antarctic based on evil CO2. This is an extremely important point for AGW: if the Antarctic won’t melt, the flood disasters don’t happen. It is critical to AGW that the Antarctic melts. Without rebuttal, this paper and those which Dr. Steig claimed at RC are being developed by others will be the new poster children for the IPCC.
They know the temps aren’t even close to melting the ice but the papers can proclaim warming (already have) so people will a priori ‘know’ it’s coming.
AGW has placed the burden of proof on the rational rather than the extremist predictions. How else does the highest trend ever published (that I know of) for the Antarctic become the cover of Nature?
The fact that the squiggle has a slight linear least-squares uptrend obscures the fact that there is a 40-year downtrend during the highest-CO2 portion of the graph. The downtrend is outside of model predictions (as stated today, apparently) and is longer than the uptrend. Steig et al. mashes, blends, and crushes the numbers into a continuous uptrend – consistent with models, as Gavin stated in celebration when this paper was acclaimed on RC.
So my long-winded comment is getting to the point: in fact, any reasonable Antarctic temperature math contradicts predicted CO2-based warming. It also ruins AGW’s most serious disaster point: FLOODING. After all, no melt, no flood. Finally, it contradicts most computer models, demonstrating yet another weakness in what most of us already know is oversimplified and often flawed math.
The frustrating part is that even if climatology rejects this unreasonable paper outright (as it should), Gavin and the modelers will just shift positions. Models will again show local cooling followed by doom-filled warming. Pinning a modeler down is like putting a railroad spike through a jar of jelly on a hot day – mush.
OT… Does anyone remember the Florida newspaper announcement about a ten-year Global Warming warning? It was printed maybe mid eighties… I can’t find it.
Thanks,
Mike
“Additionally, Steig shows whole-continent warming from 1967-2006; this analysis shows that most of the continent has cooled from 1967-2006. Given that the 1940’s were significantly warmer in the Antarctic than 1957 (the 1957-1960 period was unusually cold in the Antarctic), focusing on 1957 can give a somewhat slanted picture of the temperature trends in the continent.”
– So, what does a temp reconstruction starting in 1940 look like?
Ray (23:54:11) :
Then if the error is of only 0.1 W/m^2, then it should be 1366.0 W/m^2.
No, the error is 5 W/m^2 [or some number close to that]. Once you correct the error, the variation becomes very small. The error is such that before SORCE all spacecraft data were adjusted to ACRIM at 1366 or some number like that. Now that we know [from SORCE] that the ‘true’ number is more like 1361, all other spacecraft TSIs should be adjusted to that mean by subtracting 5, only then can you begin to compare them. And if you do compare the adjusted values you’ll find that they vary very little with time, like 1 W/m^2 from solar min to solar max, and that that variation is accurate to about 0.1 W/m^2.
Basil (05:40:38) :
I would assume that Ryan simply meant that any trend is not significantly different than the mean (a flat line).
I don’t think I’m coming across at all. Any trend is correct and significant. If I measure 10 today and 9 last year, the measured trend is 10-9 = +1 and is significant and real. The only question is whether that trend is different from the expected trend and “The significance comes in if you compare the measured value to its ‘expected’ value and want to argue that it is significantly different than the observed spread in such differences. So, one may ask what the expected value for the Antarctic would be and what the observed spread is.”
If you believe AGW is bunk and that the trend SHOULD be zero, then you can only argue that the observed trend is significantly different from zero if you know what the observed spread is, which we may or may not know. But I think that Ryan has already admitted that his statement was internally contradictory so perhaps no need to beat that dead horse.
Another way to obscure the debate is to question how to measure ‘the trend’. Over what interval of time? 30 years, 10 years, whatever? And here people tend to pick a value that fits their need.
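Leif’s point about interval choice is easy to demonstrate with a toy series. This R sketch (entirely synthetic numbers, not actual Antarctic data) fits the same series over two windows and gets very different trends:

```r
# Same synthetic series, two fitting windows, two different trends.
# Numbers are purely illustrative, not actual Antarctic data.
set.seed(42)
yrs  <- 1957:2006
temp <- ifelse(yrs <= 1960, -1, 0) + rnorm(length(yrs), sd = 0.15)  # cold start, flat after

trend_per_decade <- function(y1, y2) {
  sel <- yrs >= y1 & yrs <= y2
  10 * unname(coef(lm(temp[sel] ~ yrs[sel]))[2])
}

trend_per_decade(1957, 2006)  # the cold 1957-1960 start pulls the fit upward
trend_per_decade(1967, 2006)  # same data, later window: trend near zero
```

This mirrors the 1957-vs-1967 start-date issue raised in the post update: a cold opening segment at a least-squares endpoint can manufacture an uptrend all by itself.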
Geothermal Flux or Climate Change?
An interesting study on Antarctica’s volcanoes reports:
“Heat Flux Anomalies in Antarctica Revealed by Satellite Magnetic Data”, Cathrine Fox Maule, Michael E. Purucker, Nils Olsen, and Klaus Mosegaard. Science Express, 9 June 2005; Science, 15 July 2005: Vol. 309, no. 5733, pp. 464–467. DOI: 10.1126/science.1106888
Contrast this with the insolation reported by NASA.
For example, picking Paulet Volcano at -67.5 (S) to -52.5 (W), the insolation ranges from 511 W/m2 to 7 W/m2 (Average about 259 W/m2.)
From Maule’s paper, the typical geothermal flux reported is about 112 mW/m^2, compared to this insolation example of 259 W/m^2.
Ryan O clearly shows a statistically significant higher temperature in the West Antarctica Peninsula compared to continental Antarctica.
Can typical geothermal fluxes, roughly 2,000 times smaller than the solar insolation, be detected as an increase in temperature in the West Antarctic Peninsula?
If so, can this peninsular temperature anomaly be statistically attributable to volcanic activity over the climatic change claimed by Steig et al?
Obviously Paulet volcano is ice free. What might the influence of the underlying geothermal flux be over the whole West Antarctica Peninsula?
Anyone up to some detailed Computational Fluid Dynamics (CFD) modeling to model and explore these different sources? (There is probably a PhD thesis or two buried in there somewhere.)
Chip’s Mencken quote is one of my favorites. Not a big fan of Mencken, but he got off some astute observations…
This whole Antarctic exercise strikes me as the Warmists looking for that Christmas pony, and not finding one in the pile of horse manure…
The semantics of “statistically insignificant” (or not) has to be unequivocally resolved. It may be the most important sentence in Ryan’s analysis.
Here’s why: The competent scientists and mathematicians among us can read through the analysis and derive the truth of the statement. Not so the lay public, and especially the reporters of various mass media around the planet.
Most of the latter will skim the details for what they believe are catch phrases that summarize what the unintelligible analysis (to them) is saying. They will then take those phrases and weave them into whatever prose-bias they are selling – tailored to market.
Reporters and journalists sell a product; they do not simply report an observation (scientists used to do this, many do not now either). Their talent is wordsmithing, and they make a living at it, so they are good at it. Content is frequently only incidental to the product.
It’s a sad reality that knowledge of a paper’s content will be far more widely disseminated based on the common language in it than on the actual technical analysis. Hence, “the Antarctic is warming!”
While the discussion about “statistically insignificant” may seem arcane to the eigenheads aboard, it is fundamentally important to the broader acceptance of the analysis.
Ayrdale (05:08:23) :
“What exactly does it take to appeal to those with a science background who still feel that our influence on the climate will have catastrophic results ?
What is the state of the play ?”
We are about to see the second brick thrown (the first by Ian Plimer) by none other than Nicolas Sarkozy, President of France, who is on the verge of appointing renowned geophysicist Claude Allegre – one of France’s most celebrated scientists, a socialist leader, and one-time climate alarmist – to a new environmental post. Allegre is an alarmist no more, having recanted his position two years ago:
http://www.nationalpost.com/news/story.html?id=2f4cc62e-5b0d-4b59-8705-fc28f14da388&p=1
It appears as though France, in her choice to adopt nuclear energy on a mass scale some thirty years ago, has trumped most western nations’ goal of energy independence. And she will do so again by recognizing AGW as a thinly veiled money/power grab by faux-green misanthropes.
Kenneth Chang blogging on Volcanoes and Antarctic Warming said:
Hmm. Somehow I don’t think Steig thought through the problem.
Assuming a one-dimensional heat transfer problem, the temperature at the surface will be controlled by the upward geothermal flux, downward solar insolation, outward surface radiation, and conductive and convective heat losses to the air. From that mix, somehow I think that 2 miles will have very little to do with the answer when compared to a mantle thickness of about 1,500 miles. I expect that there will be a detectable temperature rise. The question is how much. That will primarily depend on the geothermal flux distribution relative to convection.
Mea culpa. The mantle has little to do with it. Geothermal flux will be melting the ice and holding that bottom temperature close to 0 deg C.
The peninsula temperature varies from -5 deg C to -25 deg C depending on location and summer/winter.
V, to which videos are you referring? Can you provide a link please.
Leif Svalgaard (22:07:49) :
No, not at all. There is no interpretation when it comes to TSI. There is simply a not-yet-understood measurement error.
Which was corrected by simply subtracting 5 W/m^2 from each measurement after comparing the results with other sources. Perhaps the errors were due to plasmaspheric hiss, or perhaps to careless engineering; we don’t know.
The Steig analogy would be that some temperatures were measured wrong in the first place, perhaps with thermometers that were leaking or such. In such a situation you correct the error once you have decided what it is and how best to correct it. You do not interpret the faulty data once you know they are faulty.
Steig’s methodology was wrong, so I agree on the first assertion. Regarding the bolded paragraph, there are exceptions, especially if an agenda or a deeply held belief is behind it.
Robinson (08:57:20) :
I think you are referring to these in another thread. But to save you the clicking and scrolling to look for it :
I have read the posts above with interest. In my opinion, the discussion of AGW has moved from the scientific to the political. Not that scientific work does not continue, it is just that the damage is being done in the policy sphere where, as Henry Waxman so aptly put it, they really don’t know what’s in the legislation.
I support the good fight with actual data and real observations that are based on systems that are not corrupt. However, in order to be effective in opposing the draconian changes in the pipeline, I believe we must resist effectively in the political sphere. The Heartland series is a great beginning. I have seen numerous mentions of it, if only to denigrate it. But even the denigrations are positive. In my previous life in marketing I knew that you never mention the competition because all you do is bring attention to their product. Such mentions by name are usually an indication that the competition’s efforts are being effective and are undermining the value of your brand.
It is in this light that I cited the ‘framing’ paper. The AGW camp wants to ‘name and shame’ us. By disputing these names we remain on the defensive. By ‘naming and shaming’ them we take the offensive and force them to respond. The importance of the framing paper lies in the authors’ emphasis on the emotional content of the words chosen. It is not adequate, from this standpoint, to simply label AGW’ers ‘poo-poo heads’ or some such, because the meaning evokes the wrong emotions in the listener. Instead, the name must be carefully thought out to provoke a response in the public that matches the AGW’ers’ potential for damage to personal liberty.
I am not saying that Puritans is the perfect word. In my opinion it is a strong one because it provokes an image of intolerance, fanatical devotion to a religious cause, inflexibility, arrogance, and hypocrisy. I apologize here to anyone whose ancestors were Puritans – I believe their strengths led in large part to the success of America and our unique national character. But I believe the current meaning to the average person is a negative one for the reasons I laid out. It is anathema to those in the AGW camp.
That the emotional content fits AGW’ers’ actions is a plus. Al Gore urges us to live a spartan existence, but owns a houseboat to rival the barge of a Pharaoh. He flies everywhere, presumably because his presence is urgently needed – are you telling me the father of the internet can’t figure out a conference call? Barack Obama tells us we will have to be happy with less while spewing carbon out the back of Air Force One on publicity shoots. EU MPs want to limit airfare and then jet by the thousands to vacation spots for confabs where they enjoy lobster bisque and finger sandwiches while deciding how to constrain energy. I ask one simple question: if their goal was to destroy the West and industrialized civilization, how would they behave differently?
These days I am a math teacher in a public school while I finish my doctorate. Gore’s strategy of sending millions of free DVDs to teachers was an act of genius – teachers love new things they can show their kids. But I can tell you that there is hope. The kids are looking for reassurance from adults. I educate them on the falsity of relative-risk studies and they feel better about the avalanche of things they’ve heard will cause them to get cancer, shorten their lives, or God forbid put on weight. There has never been a better time to be a kid in the entire history of the world, and they need to hear that. So carry on the good fight, please. Anthony and Steve (sorry for the first-name basis, but you may as well be family at this point) and others like the late great John Daly give me encouragement, and better yet ammunition, to carry the fight to the forces of darkness and ignorance (and I use these words very carefully because that is exactly what I believe we are fighting here). I just encourage everyone to take action in the public sphere as well, to whatever extent they can. Please.
Everyone,
Leif is correct. My statement was inaccurate. The reconstruction trend using a linear model and OLS IS positive. Not “may be positive”; not “a 75% chance it is positive”; it IS positive. There is no ambiguity there. Whether you calculate it in MatLab, R, Excel, or graph it by hand, using a linear model and OLS, the trend IS positive.
Leif’s point is that my statement of statistical significance does not apply to the trend itself, it applies when comparing the trend to something else. In this case, that point of comparison – or the null hypothesis – is that the population has a zero trend.
The statement about statistical significance means, based on the number of degrees of freedom (as yet uncorrected for serial autocorrelation), the R^2 value from a linear fit to the data, and the assumption that the residuals of that fit are Gaussian, that there is more than a 2.5% chance that the “real” trend is zero. Remember that the trend is calculated from a sample of the population; the sample trend may not equal the population trend.
This means that while the observed trend (calculated from the sample) is not zero, the real trend (the actual trend of the entire population) has some finite chance of being zero or less.
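The distinction between a sample trend and a population trend can be made concrete with a small sketch. The data below are entirely synthetic (the series length, noise level, AR(1) coefficient, and trend value are illustrative assumptions, not the reconstruction itself); it shows an OLS fit, the p-value against the null hypothesis of a zero population trend, and one common correction for serial autocorrelation via an effective sample size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic monthly anomaly series: a small positive trend plus AR(1) noise.
n = 600                      # 50 years of monthly data (illustrative)
t = np.arange(n) / 12.0      # time in years
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.3 * noise[i - 1] + rng.normal(0, 0.5)
y = 0.005 * t + noise        # "true" population trend: 0.005 deg/yr

# OLS fit: the sample trend is a single number; it either is or is not positive.
res = stats.linregress(t, y)
print(f"sample trend = {res.slope:.4f} deg/yr, two-tailed p = {res.pvalue:.3f}")

# Significance tests the null hypothesis that the *population* trend is zero.
# Serial autocorrelation shrinks the effective sample size; a common correction
# scales n by (1 - r1) / (1 + r1), where r1 is the lag-1 autocorrelation of the
# residuals, which widens the confidence interval on the trend.
resid = y - (res.intercept + res.slope * t)
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
print(f"lag-1 autocorrelation = {r1:.2f}, effective n = {n_eff:.0f} (vs {n})")
```

Rerunning with a different seed changes the sample trend but not the population trend, which is the distinction being argued above.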
Is this not unlike the stratospheric cooling “trend” claimed to affirm AGW: the misuse (bastardization?) of statistics to promote an agenda?
Nasif Nahle (08:57:51) :
Perhaps the errors were due to plasmaspheric hiss, or perhaps to careless engineering; we don’t know.
No careless stuff there. This is just a difficult measurement, but we are getting better at it with time. Now the precision is measured in parts per million and we have to worry about such things as calculating the Sun-Earth distance not when the observation is made but 8 minutes earlier when the photons actually left the sun. [I actually have an argument with the experimenters that they should use half of that, i.e. 4 minutes, but that is another story]
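The light-travel-time correction mentioned above is easy to verify with standard constants (the TSI value and the 0.1% distance error below are illustrative numbers, not from any particular dataset):

```python
# Photons arriving now left the Sun about 8.3 minutes ago, so the Sun-Earth
# distance used to normalize TSI to 1 AU should be evaluated at that earlier
# moment rather than at the time of observation.
AU = 1.495978707e11      # mean Sun-Earth distance, m
c = 2.99792458e8         # speed of light, m/s

travel_time_s = AU / c
print(f"light travel time = {travel_time_s:.0f} s = {travel_time_s/60:.1f} min")

# TSI scales with the inverse square of the distance, so small distance errors
# matter at parts-per-million precision: a 0.1% error in r gives roughly a
# 0.2% (about 2000 ppm) error in the normalized irradiance.
tsi_at_1au = 1361.0      # W/m^2, nominal TSI at 1 AU (illustrative)
r_wrong = AU * 1.001
tsi_wrong = tsi_at_1au * (r_wrong / AU) ** 2
ppm_error = abs(tsi_wrong - tsi_at_1au) / tsi_at_1au * 1e6
print(f"0.1% distance error -> {ppm_error:.0f} ppm TSI error")
```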
Hot Air Hysterics
For a change I managed to follow the statistics and therefore found the article very convincing. But when I tried to find out more about Ryan O – except for earlier papers, the GPS and his possible connection with the University of the West Indies – I was stymied. The same applied for Steven Goddard a couple of weeks ago. It would be helpful if those who produce such excellent work were less shy about their profiles and let you or another site publish them.
The fact that their profiles are so difficult to find permits the deniers’ blogs to query not only their qualifications but even their existence.
[REPLY – It was even rumored that St. God. and I are one and the same. (Also that neither one of us existed.) ~ Evan]
‘…you have people here speaking several languages at once.’
Sounds like a veritable Babel to me.
Leif Svalgaard:
“If you believe AGW is bunk and that the trend SHOULD be zero, then you can only argue that the observed trend is significantly different from zero if you know what the observed spread is, which we may or may not know.”
Sorry, but this is a non sequitur, because there could be GW which is not AGW. The problem is that proponents of AGW first dismissed cooling in Antarctica as an argument against AGW on the basis that the models showed “exactly that”, and then still performed this hockey-stick-like fine tuning to “prove” warming, claiming that the warming now was exactly what the models predicted.
“Another way to obscure the debate is to question how to measure ‘the trend’. Over what interval of time? 30 years, 10 years, whatever? And here people tend to pick a value that fits their need.”
Only if you choose (should I say cherry-pick?) the relatively cold years 1957-1965 do you obtain slight warming. With any other start date after that, and very likely before it as well (because the 1930s and 1940s were likely much warmer than the 1950s in Antarctica), you obtain cooling. So the situation with Antarctica is crystal clear: exactly according to your own criterion that every measured trend is significant, you have a long-term cooling trend, unless you cherry-pick the exceptionally cold years 1957-1965 as your starting point.
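The sensitivity of a trend’s sign to the chosen start year can be illustrated with a toy series. The numbers below are invented for illustration (a cold early segment followed by a slow decline, loosely mimicking the 1957-1965 claim, not the actual Antarctic record):

```python
import numpy as np

# Toy annual anomaly series: cold early years (1957-1965) followed by a
# slow decline. All values are synthetic and purely illustrative.
years = np.arange(1957, 2007)
temps = np.where(years < 1966, -0.6, 0.0) - 0.004 * (years - 1957)

def trend_per_decade(start_year):
    """OLS trend (deg/decade) from start_year to the end of the series."""
    mask = years >= start_year
    slope = np.polyfit(years[mask], temps[mask], 1)[0]
    return 10 * slope

# Starting in the cold 1957-1965 window yields a positive trend (the step up
# out of the cold years dominates); any later start yields cooling.
for start in (1957, 1966, 1975):
    print(f"trend from {start}: {trend_per_decade(start):+.3f} deg/decade")
```

The underlying series is the same in every case; only the start date changes the sign of the fitted trend, which is the cherry-picking point being made above.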
This is probably off topic, but interesting nonetheless:
http://www.wired.com/wiredscience/2009/05/astronauts-spot-mysterious-ice-circles-in-worlds-deepest-lake/
Astronauts aboard the International Space Station noticed two mysterious dark circles in the ice of Russia’s Lake Baikal in April. Though the cause is more likely aqueous than alien, some aspects of the odd blemishes defy explanation.
The two circles are the focal points for ice break-up and may be caused by upwelling of warmer water in the lake. The dark color of the circles is due to thinning of the ice, which usually hangs around into June. Upwelling wouldn’t be strange in some relatively shallow areas of the lake where hydrothermal activity has been detected, such as where the circle near the center of the lake (pictured below) is located. Circles have been seen in that area before in 1985 and 1994, though they weren’t nearly as pronounced. But the location of the circle near the southern tip of the lake (pictured above) where water is relatively deep and cold is puzzling.
The lake itself is an oddity. It is the largest freshwater lake by volume and the deepest (5,370 feet at its deepest point), as well as one of the oldest at around 25 million years. The photo above was taken by an astronaut from the ISS. The photo below was taken by NASA’s MODIS satellite instrument.
Thank you Ryan at (09:49:57). Yours is a very fair statement. Another factor to consider is the inherent limitation of frequentist analysis. A Bayesian analysis would further inflate the uncertainty.