Steig et al – falsified

Smearing around data or paint - the results are similar

Jeff Id of The Air Vent emailed me today inviting me to repost Ryan O’s latest work on statistical evaluation of the Steig et al “Antarctica is warming” paper (Nature, Jan 22, 2009). I thought long and hard about the title, especially after reviewing the previous work from Ryan O we posted on WUWT, where the paper’s “robustness” was dealt a serious blow. After reading this latest statistical analysis, I think it is fair to conclude that the paper’s premise has been falsified.

Ryan O, in his conclusion, is a bit more gracious:

I am perfectly comfortable saying that Steig’s reconstruction is not a faithful representation of Antarctic temperatures over the past 50 years and that ours is closer to the mark.

Not only that, Ryan O did a more complete job of the reconstruction than Steig et al; he mentions this in comments at The Air Vent:

Steig only used 42 stations to perform his reconstruction. I used 98, since I included AWS stations.

The AWS stations have their problems, such as periods of warmer temperatures due to being buried in snow, but even when using this data, Ryan O’s analysis still comes out with less warming than the original Steig et al paper.

Antarctica as a whole is not warming; the Antarctic Peninsula is, and it is significantly removed, climatically, from the main continent.


It is my view that all Steig and Michael Mann have done with their application of RegEM to the station data is to smear the temperature around, much like an artist would smear red and white paint on a palette board to get a new color, “pink”, and then paint the entire continent with it.

It is a lot like the “spin art” you see at the county fair. For example, look (at left) at the different tiles of colored temperature results for Antarctica you can get using Steig’s and Mann’s methodology. The only things that change are the starting parameters; the data remain the same, while the RegEM program smears them around based on those starting parameters. In the Steig et al case, the authors chose a value of 3 for both the number of PCs and regpar. Choosing any different numbers yields an entirely different result.

So the premise of the Steig et al paper boils down to an arbitrary choice of values that “looked good”.

I hope that Ryan O will write a rebuttal letter to Nature, and/or publish a paper. It is the only way the Team will back down on this. – Anthony

UPDATE: To further clarify, Ryan O writes in comments:

“Overall, Antarctica has warmed from 1957-2006. There is no debating that point. (However, other than the Peninsula, the warming is not statistically significant.)

The important difference is the location of the warming and the magnitude of the warming. Steig’s paper has the warming concentrated on the Ross Ice Shelf – which would lead you to entirely different conclusions than having a minimum on the ice shelf. As far as magnitude goes, the warming for the continent is half of what was reported by Steig (0.12 vs. 0.06 Deg C/Decade).

Additionally, Steig shows whole-continent warming from 1967-2006; this analysis shows that most of the continent has cooled from 1967-2006. Given that the 1940’s were significantly warmer in the Antarctic than 1957 (the 1957-1960 period was unusually cold in the Antarctic), focusing on 1957 can give a somewhat slanted picture of the temperature trends in the continent.”

Ryan O adds later: “I should have said that all reconstructions yield a positive trend, though in most cases the trend for the continent is not statistically significant.”


Verification of the Improved High PC Reconstruction

Posted by Jeff Id on May 28, 2009

There is always something going on around here.

Up until now, all the work done on the Antarctic reconstruction has been done without statistical verification. We believed the new reconstructions were better based on the correlation-vs-distance plots, the visual comparison to station trends, and of course the closer approximation to simple area-weighted reconstructions using surface station data.

The authors of Steig et al. have not been queried by me, or anyone else that I’m aware of, regarding the quality of the higher-PC reconstructions, and the team has largely ignored what has been going on over at The Air Vent. This post, however, demonstrates strongly improved verification statistics, which should send chills down their collective spines.

Ryan was generous in giving credit to others with his wording, but he has put together this amazing piece of work himself, using bits of code and knowledge gained from the numerous other posts by himself and others on the subject. He’s done a top-notch job again, through a Herculean effort in code and debugging.

If you didn’t read Ryan’s other post which led to this work the link is:

Antarctic Coup de Grace

——————————————————————————–

Fig. 1: 1957-2006 trends; our reconstruction (left); Steig reconstruction (right)

HOW DO WE CHOOSE?

In order to choose which version of Antarctica is more likely to represent the real 50-year history, we need to calculate statistics with which to compare the reconstructions. For this post, we will examine r, r^2, R^2, RE, and CE for various conditions, including an analysis of the accuracy of the RegEM imputation. While Steig’s paper did provide verification statistics against the satellite data, the only verification statistics that related to ground data were provided by the restricted 15-predictor reconstruction, where the withheld ground stations were the verification target. We will perform a more comprehensive analysis of performance with respect to both RegEM and the ground data. Additionally, we will compare how our reconstruction performs against Steig’s reconstruction using the same methods used by Steig in his paper, along with a few more comprehensive tests.
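For those who want to follow along in R before wading into the full script, here is a minimal sketch of the three core statistics. The function and variable names are mine, not from the Recon.R script, and cal.mean stands for the mean of the observed data over the calibration period:

# Minimal sketch of r, RE, and CE. Here obs is the withheld (verification
# period) data, est is the reconstruction's estimate of it, and cal.mean is
# the calibration-period mean. Names are illustrative, not from Recon.R.
ver.stats <- function(obs, est, cal.mean) {
  ok  <- complete.cases(obs, est)            # keep only overlapping data
  obs <- obs[ok]; est <- est[ok]
  sse <- sum((obs - est)^2)                  # squared error of the reconstruction
  c(r  = cor(obs, est),                      # correlation coefficient
    RE = 1 - sse / sum((obs - cal.mean)^2),  # skill vs. calibration-period mean
    CE = 1 - sse / sum((obs - mean(obs))^2)) # skill vs. verification-period mean
}

Note that CE benchmarks the reconstruction against the verification-period mean, so a negative CE means that simply guessing the average of the observed data beats the reconstruction – a point that matters for the figures below.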

To calculate what I would consider a healthy battery of verification statistics, we need to perform several reconstructions. The reason for this is to evaluate how well the method reproduces known data. Unless we know how well we can reproduce things we know, we cannot determine how likely the method is to estimate things we do not know. This requires that we perform a set of reconstructions by withholding certain information. The reconstructions we will perform are:

1. A 13-PC reconstruction using all manned and AWS stations, with ocean stations and Adelaide excluded. This is the main reconstruction.

2. An early calibration reconstruction using AVHRR data from 1982-1994.5. This will allow us to assess how well the method reproduces the withheld AVHRR data.

3. A late calibration reconstruction using AVHRR data from 1994.5-2006. Coupled with the early calibration, this provides comprehensive coverage of the entire satellite period.

4. A 13-PC reconstruction with the AWS stations withheld. The purpose of this reconstruction is to use the AWS stations as a verification target (i.e., see how well the reconstruction estimates the AWS data, and then compare the estimation against the real AWS data).

5. The same set of four reconstructions as above, but using 21 PCs in order to assess the stability of the reconstruction to included PCs.

6. A 3-PC reconstruction using Steig’s station complement to demonstrate replication of his process.

7. A 3-PC reconstruction using the 13-PC reconstruction model frame as input to demonstrate the inability of Steig’s process to properly resolve the geographical locations of the trends and trend magnitudes.

Using the above set of reconstructions, we will then calculate the following sets of verification statistics:

1. Performance vs. the AVHRR data (early and late calibration reconstructions)

2. Performance vs. the AVHRR data (full reconstruction model frame)

3. Comparison of the spliced and model reconstruction vs. the actual ground station data.

4. Comparison of the restricted (AWS data withheld) reconstruction vs. the actual AWS data.

5. Comparison of the RegEM imputation model frame for the ground stations vs. the actual ground station data.

The provided script performs all of the required reconstructions and makes all of the required verification calculations. I will not present them all here (because there are a lot of them). I will present the ones that I feel are the most telling and important. In fact, I have not yet plotted all the different results myself. So for those of you with R, there are plenty of things to plot.

Without further ado, let’s take a look at a few of those things.

Fig. 2: Split reconstruction verification for Steig reconstruction

You may remember the figure above; it represents the split reconstruction verification statistics for Steig’s reconstruction. Note the significant regions of negative CE values (which indicate that a simple average of observed temperatures explains more variance than the reconstruction). Of particular note, the region where Steig reports the highest trend – West Antarctica and the Ross Ice Shelf – shows the worst performance.

Let’s compare to our reconstruction:

Fig. 3: Split reconstruction verification for 13-PC reconstruction

There are still a few areas of negative RE (too small to see in this panel) and some areas of negative CE. However, unlike the Steig reconstruction, ours performs well in most of West Antarctica, the Peninsula, and the Ross Ice Shelf. All values are significantly higher than in the Steig reconstruction, and the regions with negative values are much smaller.

As an aside, the r^2 plots are not corrected by the Monte Carlo analysis yet. However, as shown in the previous post concerning Steig’s verification statistics, the maximum r^2 values using AR(8) noise were only 0.019, which produces an indistinguishable change from Fig. 3.
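For anyone who wants to reproduce that style of benchmark, here is a rough R sketch of a Monte Carlo r^2 test against AR(8) noise. The AR coefficients, series length, and target series below are invented for illustration; a real test would fit AR(8) models to the actual data:

# Rough sketch: what r^2 can pure AR(8) noise achieve against a fixed target?
set.seed(42)
n.months <- 600                                    # ~50 years of monthly anomalies
ar.coefs <- c(0.3, 0.1, 0.05, 0.05, 0.02, 0.02, 0.01, 0.01)  # illustrative AR(8)
target   <- rnorm(n.months)                        # stand-in for a real series
r2.null  <- replicate(1000, {
  noise <- arima.sim(list(ar = ar.coefs), n = n.months)
  cor(noise, target)^2                             # r^2 from noise alone
})
quantile(r2.null, 0.99)    # observed r^2 values should clear this benchmark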

Now that we know that our method provides a more faithful reproduction of the satellite data, it is time to see how faithfully our method reproduces the ground data. A simple way to compare ours against Steig’s is to look at scatterplots of reconstructed anomalies vs. ground station anomalies:

Fig. 4: 13-PC scatterplot (left); Steig reconstruction (right)

The 13-PC reconstruction shows significantly improved performance in predicting ground temperatures as compared to the Steig reconstruction. This improved performance is also reflected in plots of correlation coefficient:

Fig. 5: Correlation coefficient by geographical location

As noted earlier, the performance in the Peninsula, West Antarctica, and the Ross Ice Shelf is noticeably better for our reconstruction. Examining the plots this way provides a good indication of the geographical performance of the two reconstructions. Another way to look at this – one that allows a bit more precision – is to plot the results as bar plots, sorted by location:

Fig. 6: Correlation coefficients for the 13-PC reconstruction

Fig. 7: Correlation coefficients for the Steig reconstruction

The difference is quite striking.

While a good performance with respect to correlation is nice, this alone does not mean we have a “good” reconstruction. One common problem is over-fitting during the calibration period (where the calibration period is defined as the period over which actual data is present). Over-fitting leads to fantastic calibration statistics but poor performance outside of that period.

This is the purpose of the restricted reconstruction, where we withhold all AWS data. We then compare the reconstruction values against the actual AWS data. If our method resulted in overfitting (or is simply a poor method), our verification performance will be correspondingly poor.

Since Steig did not use AWS stations for performing his TIR reconstruction, this allows us to do an apples-to-apples comparison between the two methods. We can use the AWS stations as a verification target for both reconstructions. We can then compare which reconstruction results in better performance from the standpoint of being able to predict the actual AWS data. This is nice because it prevents us from later being accused of holding the reconstructions to different standards.

Note that since all of the AWS data was withheld, RE is undefined. RE uses the calibration period mean, and there is no calibration period for the AWS stations because we did the reconstruction without including any AWS data. We could run a split test like we did with the satellite data, but that would require additional calculations and is an easier test to pass regardless. Besides, the reason we have to run a split test with the satellite data is that we cannot withhold all of the satellite data and still be able to do the reconstruction. With the AWS stations, however, we are not subject to the same restriction.
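For reference, both statistics compare the reconstruction’s squared error over the verification period against the error of a trivial “always guess the mean” predictor; they differ only in which mean. In the notation below (mine, not Steig’s), x_t is the actual data and \hat{x}_t the reconstruction, with sums over the verification period:

\mathrm{RE} = 1 - \frac{\sum_t (x_t - \hat{x}_t)^2}{\sum_t (x_t - \bar{x}_{\mathrm{cal}})^2}, \qquad \mathrm{CE} = 1 - \frac{\sum_t (x_t - \hat{x}_t)^2}{\sum_t (x_t - \bar{x}_{\mathrm{ver}})^2}

Because the verification-period mean \bar{x}_{\mathrm{ver}} minimizes the denominator of CE, we always have CE ≤ RE, which is both why CE is the harder test to pass and why RE has nothing to plug in here: with every AWS value withheld, there is no calibration mean.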

Fig. 8: Correlation coefficient, verification period, AWS stations withheld

With that, I think we can safely put to bed the possibility that our calibration performance was due to overfitting. The verification performance is quite good, with the exception of one station in West Antarctica (Siple). Some of you may be curious about Siple, so I decided to plot both the original data and the reconstructed data. The problem with Siple is clearly the short record length and strange temperature swings (in excess of 10 degrees), which may indicate problems with the measurements:

Fig. 9: Siple station data

While we should still be curious about Siple, we also would not be unjustified in considering it an outlier given the performance of our reconstruction at the remainder of the station locations.

Leaving Siple for the moment, let’s take a look at how Steig’s reconstruction performs.

Fig. 10: Correlation coefficient, verification period, AWS stations withheld, Steig reconstruction

Not too bad – but not as good as ours. Curiously, Siple does not look like an outlier in Steig’s reconstruction. In its place, however, seems to be the entire Peninsula. Overall, the correlation coefficients for the Steig reconstruction are poorer than ours. This allows us to conclude that our reconstruction more accurately calculated the temperature in the locations where we withheld real data.

Along with correlation coefficient, the other statistic we need to look at is CE. Of the three statistics used by Steig – r, RE, and CE – CE is the most difficult statistic to pass. This is another reason why we are not concerned about lack of RE in this case: RE is an easier test to pass.

Fig. 11: CE, verification period, AWS stations withheld

Fig. 12: CE, verification period, AWS stations withheld, Steig reconstruction

The difference in performance between the two reconstructions is more apparent in the CE statistic. Steig’s reconstruction demonstrates negligible skill in the Peninsula, while our skill in the Peninsula is much higher. With the exception of Siple, our West Antarctic stations perform comparably. For the rest of the continent, our CE statistics are significantly higher than Steig’s – and we have no negative CE values.

So in a test of which method best reproduces withheld ground station data, our reconstruction shows significantly more skill than Steig’s.

The final set of statistics we will look at is the performance of RegEM. This is important because it will show us how faithful RegEM was to the original data. Steig did not perform any verification similar to this because PTTLS does not return the model frame. Unlike PTTLS, however, our version of RegEM (IPCA) does return the model frame. Since the model frame is accessible, it is incumbent upon us to look at it.

Note: In order to have a comparison, we will run a Steig-type reconstruction using RegEM IPCA.

There are two key statistics for this: r and R^2. R^2 is called “average explained variance”. It is a similar statistic to RE and CE with the difference being that the original data comes from the calibration period instead of the verification period. In the case of RegEM, all of the original data is technically “calibration period”, which is why we do not calculate RE and CE. Those are verification period statistics.
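In the same notation as the RE and CE formulas above, the only change is that everything is evaluated over the calibration period:

R^2 = 1 - \frac{\sum_{t \in \mathrm{cal}} (x_t - \hat{x}_t)^2}{\sum_{t \in \mathrm{cal}} (x_t - \bar{x}_{\mathrm{cal}})^2}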

Let’s look at how RegEM IPCA performed for our reconstruction vs. Steig’s.

Fig. 13: Correlation coefficient between RegEM model frame and actual ground data

As you can see, RegEM performed quite faithfully with respect to the original data. This is a double-edged sword; if RegEM performs too faithfully, you end up with overfitting problems. However, we already checked for overfitting using our restricted reconstruction (with the AWS stations as the verification target).

While we used regpar settings of 9 (main reconstruction) and 6 (restricted reconstruction), Steig used a regpar setting of only 3. This leads us to question whether that setting was sufficient for RegEM to faithfully represent the original data. The only way to tell is to look, and the next frame shows us that Steig’s performance was significantly worse than ours.

Fig. 14: Correlation coefficient between RegEM model frame and actual ground data, Steig reconstruction

The performance using a regpar setting of 3 is noticeably worse, especially in East Antarctica. This would indicate that a setting of 3 does not provide enough degrees of freedom for the imputation to accurately represent the existing data. And if the imputation cannot accurately represent the existing data, then its representation of missing data is correspondingly suspect.
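To build intuition for why the truncation parameter matters, here is a toy R sketch – not RegEM itself and not anyone’s reconstruction code – that approximates a data matrix using only its leading k singular vectors, which is loosely what a low regpar does to the imputation:

# Toy illustration: rank-k approximation of a time-by-station anomaly matrix.
# A low k (like regpar = 3) forces every station onto k shared patterns;
# any structure outside those patterns is simply unrepresentable.
rank.k.approx <- function(X, k) {
  s <- svd(X)
  s$u[, 1:k, drop = FALSE] %*% diag(s$d[1:k], k, k) %*% t(s$v[, 1:k, drop = FALSE])
}
set.seed(1)
X <- matrix(rnorm(600 * 42), 600, 42)   # fake 50-yr monthly x 42-station matrix
for (k in c(3, 9)) {
  rmse <- sqrt(mean((X - rank.k.approx(X, k))^2))
  cat("k =", k, " RMSE =", round(rmse, 3), "\n")
}

With real station data the leading modes carry shared climate signal, so the trade-off is between losing real structure (k too low) and fitting noise (k too high); the toy only shows how few degrees of freedom k = 3 leaves.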

Another point I would like to note is the heavy weighting of Peninsula and open-ocean stations. Steig’s reconstruction relied on a total of 5 stations in West Antarctica, 4 of which are located on the eastern and southern edges of the continent at the Ross Ice Shelf. The resolution of West Antarctic trends based on the ground stations alone is rather poor.

Now that we’ve looked at correlation coefficients, let’s look at a more stringent statistic: average explained variance, or R^2.

Fig. 15: R^2 between RegEM model frame and actual ground data

Using a regpar setting of 9 also provides good R^2 statistics. The Peninsula is still a bit wanting. I checked the R^2 for the 21-PC reconstruction and the numbers were nearly identical. Without increasing the regpar setting and running the risk of overfitting, this seems to be about the limit of the imputation accuracy.

Fig. 16: R^2 between RegEM model frame and actual ground data, Steig reconstruction

Steig’s reconstruction, on the other hand, shows some fairly low values for R^2. The Peninsula is an odd mix of high and low values, West Antarctica and Ross are middling, while East Antarctica is poor overall. This fits with the qualitative observation that the Steig method seemed to spread the Peninsula warming all over the continent, including into East Antarctica – which by most other accounts is cooling slightly, not warming.

CONCLUSION

With the exception of the RegEM verification, all of the verification statistics listed above were performed exactly (split reconstruction) or analogously (restricted 15-predictor reconstruction) by Steig in the Nature paper. In all cases, our reconstruction shows significantly more skill than the Steig reconstruction. So if these are the metrics by which we are to judge this type of reconstruction, ours is objectively superior.

As before, I would qualify this by saying that not all of the errors and uncertainties have been quantified yet, so I’m not comfortable putting a ton of stock into any of these reconstructions. However, I am perfectly comfortable saying that Steig’s reconstruction is not a faithful representation of Antarctic temperatures over the past 50 years and that ours is closer to the mark.

NOTE ON THE SCRIPT

If you want to duplicate all of the figures above, I would recommend letting the entire script run. Be patient; it takes about 20 minutes. While this may seem long, remember that it is performing 11 different reconstructions and calculating a metric butt-ton of verification statistics.

There is a plotting section at the end that has examples of all of the above plots (to make it easier for you to understand how the custom plotting functions work) and it also contains indices and explanations for the reconstructions, variables, and statistics. As always, though, if you have any questions or find a feature that doesn’t work, let me know and I’ll do my best to help.

Lastly, once you get comfortable with the script, you can probably avoid running all the reconstructions. They take up a lot of memory, and if you let all of them run, you’ll have enough room for maybe 2 or 3 more before R refuses to comply. So if you want to play around with the different RegEM variants, numbers of included PCs, and regpar settings, I would recommend getting comfortable with the script and then loading up just the functions. That will give you plenty of memory for 15 or so reconstructions.

As a bonus, I included the reconstruction that takes the output of our reconstruction, uses it for input to the Steig method, and spits out this result:

Fig. 17: Steig reconstruction using the 13-PC reconstruction as input.

The name for the list containing all the information and trends is “r.3.test”.
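If you only want to poke at the results rather than regenerate the figures, the pattern is simply the following (r.3.test is the one object name given above; see the plotting section at the end of the script for the authoritative names of everything else):

source("Recon.R")             # runs all the reconstructions; expect ~20 minutes
str(r.3.test, max.level = 1)  # inspect the top-level contents of that list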

—————————————————————-

Code is here: Recon.R

225 Comments
jorgekafkazar
May 30, 2009 12:57 pm

David L. Hagen (07:56:02) : “‘…We found that the heat flux underneath the ice sheet varies from 40 to 185 megawatts per square meter and that areas of high heat flux coincide with known current volcanism and some areas known to have ice streams.'”
[from the abstract of: Cathrine Fox Maule, Michael E. Purucker, Nils Olsen, and Klaus Mosegaard, “Heat Flux Anomalies in Antarctica Revealed by Satellite Magnetic Data,” Science 309(5733), 464–467, 15 July 2005. DOI: 10.1126/science.1106888]
Obviously a misprint. Let’s see the direct quote from the article itself.
REPLY: It is probably “square kilometer” rather than “square meter” – Anthony

Nasif Nahle
May 30, 2009 1:02 pm

Leif Svalgaard (10:04:24) :
…Sun-Earth distance not when the observation is made but 8 minutes earlier when the photons actually left the sun. [I actually have an argument with the experimenters that they should use half of that, i.e. 4 minutes, but that is another story]

I’d like to know what your argument is. It sounds interesting, at least to me, because I’ve been digging a little into the solar photon stream and some kinds of interactions between photons and molecules.

Paul Coppin
May 30, 2009 1:13 pm

Solomon Green (10:27:11) :
[…]
The fact that their profiles are so difficult to find permits the deniers blogs to query not only their qualifications but even their existence.
And your point would be? While ascribing blame or credit is appropriate, it’s the content that matters, not the authorship. It should be about the science (or math, or stats, as the case may be). We already have the problem where the authorship is taking precedence over the content. Currently, it’s called “anthropogenic global warming”, and it’s promulgated by such authors as Al Gore, James Hansen, Michael Mann, the IPCC, ad nauseam, and even, it appears, Dr. Steig and the journal Nature. We probably won’t ever know who the first person to discover the earth was round (more or less…) actually was, but the important thing is that we don’t fall off the edge, isn’t it? As I recall, the IPCC of the day insisted it was flat…

DaveE
May 30, 2009 1:18 pm

“Nasif Nahle (13:02:52) :
Leif Svalgaard (10:04:24) :
…Sun-Earth distance not when the observation is made but 8 minutes earlier when the photons actually left the sun. [I actually have an argument with the experimenters that they should use half of that, i.e. 4 minutes, but that is another story]
I’d like to know what your argument is. It sounds interesting; at least for me because I’ve been digging a little on the solar photon stream and some kind of interactions between photons and molecules.”
I’m equally interested as to why not the point at which the stream is measured as surely that is the actual distance travelled.
DaveE

May 30, 2009 1:19 pm

David L. Hagen (07:56:02) :
We found that the heat flux underneath the ice sheet varies from 40 to 185 megawatts per square meter and that areas of high heat flux coincide with known current volcanism and some areas known to have ice streams.

Nataf and Ricard found it was 43 mW/m^2. The evaluation made by Maule et al is correct; it’s just that it’s wider than Nataf-Ricard’s evaluation. The average of geothermal heat flux over the whole Earth is ≈85 mW/m^2.
The units megaWatts/square meter are correct for geothermal flux of energy underneath the ice crust. It’s a hilly amount of heat.

May 30, 2009 1:22 pm

Hahaha… I made the same mistake!
The units megaWatts/square meter are correct for geothermal flux of energy underneath the ice crust. It’s a hilly amount of heat.
Wrong! It should have said:
The units milliWatts/square meter are correct for geothermal flux of energy underneath the ice crust. It’s a hilly amount of heat.
I’m so sorry! 🙂

jorgekafkazar
May 30, 2009 1:49 pm

chip (09:20:43) : “…in order to be effective in opposing the draconian changes in the pipeline, I believe we must resist effectively in the political sphere.
“By ‘naming and shaming’ them we take the offensive and force them to respond…In my opinion [Puritans] is a strong [term] because it provokes an image of intolerance, fanatical devotion to a religious cause, inflexibility, arrogance, and hypocrisy….
“Al Gore urges us to live a spartan existence, but owns a houseboat to rival the barge of a Pharaoh…I ask one simple question – if their goal was to destroy the west and industrialized civilization, how would they behave differently?
“…I just encourage everyone to take action in the public sphere as well to whatever extent they can. Please.”
Well, you’re right, of course. The power of using loaded words for propaganda can’t be dismissed. The term “Denialist” was chosen with deliberate malice by the Warmist Weather Willies. I’ve captured all sixty of the suggested dysphemisms for “AGW proponent.” Perhaps a prize would be in order for the most popular? Or is that too much like a consensus? Maybe I should just pick the one(s) I like. Let me think about it and await any input from Anthony.

Steve (Paris)
May 30, 2009 2:30 pm

I’m wondering whether the potential identification of a new type of cloud formation doesn’t confirm the solar story. If we are indeed entering a lull in solar activity a hundred years or so since cloud formation records began, is it so surprising that we would see types of cloud that have ‘never been seen before’?
http://www.telegraph.co.uk/scienceandtechnology/science/sciencenews/5411412/New-type-of-cloud-found.html

Just Want Truth...
May 30, 2009 2:46 pm

Ivan (11:50:25) :
We could make 1000 years ago a starting point. Starting points are an issue.
But I think everyone can, or at least should, agree that manmade CO2 is rising rapidly, faster than was predicted, yet global temperatures are in a cooling trend.

Shane
May 30, 2009 3:08 pm

I like gullible warmers too.
I guess that means that the Caitlin expedition can be called Gullible’s Travels.
S

Leif Svalgaard
May 30, 2009 4:30 pm

Ivan (11:50:25) :
Problem is that proponents of AGW first dismissed cooling on Antarctica as an argument against AGW on the basis that
Who cares what they think or dismiss? Reasonable analysis [as Ryan’s] should be predicated on what some people think. The data speak for themselves.
DaveE (13:18:27) :
“Nasif Nahle (13:02:52) :
I’m equally interested as to why not the point at which the stream is measured as surely that is the actual distance travelled.
Our intuitive notion of distance travelled becomes a little fuzzy when it comes to things travelling at the speed of light. The problem is this: It is reasonably thought that intensity of light falls off with the distance squared. But which distance? For a photon the distance is zero [relativistic contraction]. For us [the observer] the photon leaves the Sun at a certain time interval before we see the photon, which we calculate to be the distance to the Sun divided by the speed of light = ~500 seconds [if the orbit was circular]. As the photon speeds toward us, we are also changing our distance to the Sun [because the orbit is an ellipse], either moving towards the photon or away from it. Say towards the Sun, so after 500 seconds the photon will have zoomed past us already as we have now moved closer. I admit that my argument about the 4 minutes is not well expressed, simply because I can’t figure out precisely what it should be as these things are tricky and counterintuitive. Perhaps some reader can think this through and educate us. I feel in my bones that the 8 minutes is not right, but can’t really explain why. My point was that our measurements of TSI are now so precise that we need to worry about such exotic details.

Leif Svalgaard
May 30, 2009 4:32 pm

Leif Svalgaard (16:30:29) : Your comment is awaiting moderation
Ivan (11:50:25) :
Problem is that proponents of AGW first dismissed cooling on Antarctica as an argument against AGW on the basis that
Who cares what they think or dismiss? Reasonable analysis [as Ryan’s] should NOT be predicated on what some people think. The data speak for themselves.
Perhaps a moderator can catch this….

Leif Svalgaard
May 30, 2009 5:09 pm

Leif Svalgaard (16:30:29) :
DaveE (13:18:27) :
“Nasif Nahle (13:02:52) :
I’m equally interested as to why not the point at which the stream is measured as surely that is the actual distance travelled.
More on this: When we observe the photon at Earth [or to be precise at the satellite – because we also must take into account the varying distance due to the satellite’s orbit around whatever body it is orbiting] let the Earth [and the photon] be at position X1, Y1, Z1 [one can now begin to discuss in which coordinate system…]. When the photon left the Sun [8 minutes ago], the Sun [and the photon] was at position X0, Y0, Z0 [in the same coordinate system… calculated for a time 8 minutes earlier than the time when the photon was observed]. One could then calculate the distance travelled as D = SQRT((X1-X0)^2+(Y1-Y0)^2+(Z1-Z0)^2), and calculate ‘real’ TSI as ‘real’ TSI = ‘observed’ TSI * D^2, if D is expressed in Astronomical Units [very, very nearly the semi-major axis of the Earth’s orbit – but not quite – other story]. This is the argument the TSI-folks use. But why does TSI fall off with the square of D? The quick argument goes like this: imagine two spheres with different radii, both with their center at the center of the Sun; then the number of photons through the outer sphere should be equal to the number of photons through the inner sphere, hence the fall-off as D squared [if we put the inner sphere at radius = 1]. Except that D [as calculated above] does not seem to me to be the radius of an outer sphere centered on the Sun defined by the position of all photons that left the Sun [forget for the moment about the complication that the Sun is not a point source] at time D/c [with the origin of time being when the photon was observed]. At this point my head begins to spin and I lose the train of thought. Suffice it to say, perhaps, that the very fact that our precision of TSI measurements is so high that we need even to worry about this shows us what tremendous strides we have made in the last thirty years.
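In code form, that prescription is just this (an R sketch; the positions and the TSI value are invented for illustration – only the distance formula and the D^2 scaling come from the description above):

# Correct observed TSI to 1 AU: D is the straight-line distance (in AU)
# between the Sun's position when the photon left and the observer's
# position when it arrived. Positions below are made-up examples.
tsi.corrected <- function(tsi.obs, sun0, earth1) {
  D <- sqrt(sum((earth1 - sun0)^2))   # distance travelled, in AU
  tsi.obs * D^2                       # inverse-square scaling to 1 AU
}
tsi.corrected(1361, sun0 = c(0, 0, 0), earth1 = c(0.9833, 0, 0))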

Leif Svalgaard
May 30, 2009 5:22 pm

Leif Svalgaard (17:09:17) :
Leif Svalgaard (16:30:29) :
DaveE (13:18:27) :
“Nasif Nahle (13:02:52) :
I’m equally interested as to why not the point at which the stream is measured as surely that is the actual distance travelled.
More on this: …

I’m still just thinking out loud here, so bear with me. I’m trying to understand this myself, and have the hubris to believe that somebody else might be interested in my thoughts… [or spot the error(s), if any].
As I read the description of how the TSI-experts calculate D I think they say: “We calculate the distance, D, between the Sun and the Earth at time T-8 minutes [the ‘8’ varies slightly through the year] and multiply the TSI measured at time T by D^2 to correct TSI for the distance to the Sun”. This seems fishy, but I can’t put my finger on the flaw [if any]. It seems to me that, since the Earth has moved in the meantime, D must be different from the value calculated as described. Or perhaps I misunderstood what they said or meant. It may come down to looking at the actual code. [I might want to do that while at the SPD-meeting in Boulder next month].

Dave Wendt
May 30, 2009 7:50 pm

Leif Svalgaard (17:22:13) :
Your musings brought to mind a question I have been struggling to wrap my brain around for some time, and as it is somewhat related, if only very tangentially, to what you were discussing, I was hoping you might help me find the hole in my logic. Every time I see another story about how the Hubble has discovered another celestial body Xteen billion lightyears distant, and the light it is capturing originated shortly after the Big Bang, I have a hard time resolving the paradox. From what I understand of the Big Bang, which is obviously precious little, all the “stuff” that makes up our cozy little solar system was also back in the vicinity of the Big Bang and beginning its long journey to the vicinity we now occupy, traveling at a speed that is extremely fast, but nowhere near the speed of light. It seems to me that if we and the light both left from nearly the same place, the light would have whistled through here long ago and be well on its way out of the universe, but the Hubble is obviously observing something, and as I said, I can’t get my mind around what it is. Probably the main reason theoretical physics was never a viable vocational path for me.

May 30, 2009 8:04 pm

Shane (15:08:40) :
I guess that means that the Caitlin expedition can be called Gullible’s Travels.
Excellent!

Leif Svalgaard
May 30, 2009 8:06 pm

Dave Wendt (19:50:42) :
occupy traveling at a speed that is extremely fast, but nowhere near the speed of light. It seems to me that if we and the light both left from nearly the same place, the light would have whistled through here long ago and be well on its way out of the universe, but the Hubble is obviously observing something

The light speed limit is for travelling through space, but does not apply to the speed with which space itself expands. There is no limit on that. The light from where we came from is indeed far from us [like 13 billion light years], but we are just seeing light from other places.

Bill P
May 30, 2009 8:48 pm

Just Want Truth… (09:16:36) :
RE: Henrik Svensmark on Global Warming, Part I
Is there a Part II? Maybe it is there that Svensmark attributes some of his thinking to predecessors? I always wonder what part of his work merits the claims of “new theory”, since (I believe) others have referred to a sun / cosmic ray / climate connection.

Dave Wendt
May 30, 2009 9:01 pm

Leif, thanks for the response, that’s kind of what I thought, but I guess I should just admit that when it comes to the Big Bang I’m probably never going to be able to really grok the concept.

a jones
May 30, 2009 9:02 pm

Oh dear.
I may have misunderstood the problem but if not lets start from the beginning.
The earth travels in an elliptical orbit around the sun.
If we treat the sun as a point source, which it is not, we could construct a two dimensional model or plot which would show, based on the change in the distance from the sun to the power of two, the incident e/m radiation we would receive if the solar output of e/m radiation was constant.
Of course the speed of light, which is a bit less than the speed of light, is enormous compared to the speeds of motion of the heavenly bodies.
Even so it takes around eight minutes for the light from the sun to reach us.
Thus the TSI we receive is eight minutes late, but that makes no difference to us. What we see is what we get, as it were.
However suppose the sun’s output of e/m radiation varies very quickly over a very short period: seconds rather than minutes.
Can we measure this?
Yes of course.
We know where we are now in the orbit and what the TSI is at the moment.
We know where we were eight minutes ago and what the TSI was as we recorded it then.
To measure the variation we only need to take the TSI as we see it now and adjust it by the change in the orbit by distance to the second power and then compare that result to what we measured back then: to know what the change in solar output was in those eight minutes.
And a fat lot of good it will do you: IMHO.
The errors in the calculation are far greater than the measurement, the sun is not a point source and so on.
I do not say you cannot calculate all this but why would you want to?
The astronomers who deal with real point sources and vast distances worked out all this years ago.
Without any need for computers or statistical analysis.
So I am bemused, baffled and bewildered.
Kindest Regards

Leif Svalgaard
May 30, 2009 9:47 pm

a jones (21:02:03) :
To measure the variation we only need to take the TSI as we see it now and adjust it by the change in the orbit by distance to the second power and then compare that result to what we measured back then: to know what the change in solar output was in those eight minutes.
It is not the change in solar output that is important [because it is probably too small], but the fact that in eight minutes the distance changes so much that the change in TSI just due to the change in distance is greater than the measurement error. And you slipped in the word ‘distance’ there, but when that distance is computed is the issue. Since the 8 minutes is not really 8 minutes all the time, but varies between 8:10.6 min and 8:27.4 min, not taking it into account introduces an annual variation of TSI which is actually observed. So, the ‘oh dear’ is a bit misplaced. There is a real effect, and a real problem.
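A back-of-envelope check of the size of that effect (all numbers mine and rounded, purely illustrative):

# How much does the Sun-Earth distance change during the ~500 s of light
# travel, and what does that do to TSI via the inverse-square law?
au    <- 1.496e8                 # km
v.rad <- 0.0167 * 29.8           # max radial velocity ~ eccentricity * orbital speed, km/s
dD    <- v.rad * 500             # distance change over the light-travel time, km
dTSI  <- 2 * (dD / au) * 1361    # dTSI/TSI ~ 2*dD/D, in W/m^2 for TSI ~ 1361
c(dD.km = dD, dTSI.Wm2 = dTSI)   # ~250 km and ~0.005 W/m^2, a few ppm

A few parts per million of TSI is consistent with the point above that the change due to distance exceeds the measurement error.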

Nasif Nahle
May 31, 2009 9:21 am

Leif Svalgaard (17:09:17) :
DaveE (13:18:27) :
“Nasif Nahle (13:02:52) :
When we observe the photon at Earth [or to be precise at the satellite – because we also must take into account the varying distance due to the satellite’s orbit around what ever body it is orbiting] let the Earth [and the photon] be at position X1, Y1, Z1 [one can now begin to discuss in which coordinate system…]. When the photon left the Sun [8 minutes ago], the Sun [and the photon] was at position X0, Y0, Z0 [in the same coordinate system…calculated for a time 8 minutes earlier than the time when the photon was observed]. One could then calculate the distance travelled as D = SQRT((X1-X0)^2+(Y1-Y0)^2+(Z1-Z0)^2), and calculate ‘real’ TSI as ‘real’ TSI = ‘observed’ TSI * D^2, if D is expressed in Astronomical Units [very, very nearly the semi-major axis of the Earth’s orbit – but not quite – other story]…

I think I got the idea. The change of position of a photon, if it follows a linear trajectory, is determined by D = √(N·l²). This assumes the trajectory of the photons is straight and that it follows the Earth’s movement, i.e. the photons are entangled with the Earth. However, in the next 240 seconds the Earth reaches a new position and finds the stream of photons which were already emitted from the surface of the Sun, so the time is reduced to one half, because those photons had been released from the solar surface 8 minutes before the Earth reached its new position.
I apologize for the delayed answer; I have spent six days waiting for the technician to come and fix the problem with my connection. As a good friend of mine from England said, Mexico is the land of “tomorrow”.

Nasif Nahle
May 31, 2009 9:27 am

a jones (21:02:03) :
Of course the speed of light, which is a bit less than the speed of light, is enormous compared to the speeds of motion of the heavenly bodies.

Perhaps you meant “the speed of a photon is a bit less than the speed of light in vacuum”? We can express it also as “the speed of light is a bit less than c”.

a jones
May 31, 2009 11:46 am

LS
OK I misunderstood you the first time. Let me see if I have now understood you correctly. So:
1] That the variation of TSI due to variation in solar output is too small to measure.
2] By [1] above I presume you mean the variation in solar output over the very short time that it takes for light from the sun to reach the earth, since I imagine you do observe and measure variations of TSI due to solar activity over much longer times such as years.
3] That there is a very much larger variation in TSI observed from the earth due to the elliptical orbit of the earth, and the consequent change in the distance between the earth and the sun.
4] That any change in distance between the earth and the sun due to the orbit of the earth also changes the time it takes, albeit by a small amount, for the light from the sun to reach the earth.
5] That the degree of precision of your methods of measuring TSI is so high that this difference in time becomes a problem.
6] Which means you cannot use a simple back plot as I suggested but need to take account of both the change in distance and the change in the time taken for the light to reach the earth at the different positions in the earth’s orbit, even though these positions are very close together.
7] I assume from your original comments that you are using a simple numerical computer-based approximation to do this, but are not entirely happy with it.
I am sorry to be so pedantic, but if you don’t understand the problem you can’t find an answer. So please correct me if the above is wrong.
NN.
No I meant that the group velocity of light is necessarily always less than the phase velocity.
Ever since I first came across de Broglie and the duality of nature at age sixteen, I was entranced, and I remain his number one fan.
The idea that a wave might be a particle, or conversely a particle a wave, appealed to a much younger me. And indeed, although today, in an age of particles and powerful computers and their numerical analyses, it is largely viewed as a fascinating anachronism, it is also very useful, especially in some of the odder corners of physics.
I may be long retired but I try to keep my hand in.
Mind when I did my first degree examiners were fond of adding a comic question to the paper to add a little levity to the proceedings.
In physics back then it was often the musical ornithologist proceeding on his bicycle at X feet per second, observing a cuckoo some distance away which cucked, if that is the right word, as I recall in the ratio of a lower third, which was specified for those of us unmusical folk. Some event, in my case a small nuclear explosion nearby, changed the air temperature and caused the next cuck to be reversed into the ratio of an upper third. Please calculate etc.
But around then the Institute of Chemists bowled a googly by asking candidates to work out the wavelength of a cricket ball of such a weight and at such a speed etc.
By the way these questions carried full marks but were subtler than they appeared since beside calculation they often included something which could not be calculated without explaining that it could not, so the candidate was expected to spot the fallacy without prompting, and say so: as well as providing any other observations that might occur.
All a very long time ago before pocket calculators. Oh how I miss my slipstick.
Kindest Regards

Leif Svalgaard
May 31, 2009 12:21 pm

a jones (11:46:46) :
7] I assume from your original comments that you are using a simple numerical computer based approximation to do this: but are not entirely happy with it.
I am sorry to be so pedantic but if you don’t understand the problem you can’t find an answer. So please correct me if the above is wrong.

All of the above is a correct interpretation, with the possible exception of [7]: we use the highest-precision, most exhaustive, most sophisticated, etc., means available for this. You have to when working at the ppm level. The process is described here: http://www.leif.org/research/TSI-SORCE%20Friday%20Effect.pdf which also shows what happens when one does not strive for the utmost precision ALL THE TIME. There is no Friday effect, of course; it is the result of ‘simple analysis’, so the first part shows just that: you cannot back down from complete rigor. The second part describes some of the problems touched upon in my posts here. And I’m not happy with my lack of understanding of this.
In the end, after understanding has been achieved, one can revert to ‘slipstick’ calculations, because one now knows what is important and what is fluff and can make simple back-of-the-envelope calculations again [this is what understanding brings: being able to extract the essence, successfully ignore everything else, and know why].
