The Search for a Short Term Marker of Long Term Climate Sensitivity
By Dr. Roy Spencer. October 4th, 2009
[This is an update on research progress we have made into determining just how sensitive the climate system is to increasing atmospheric greenhouse gas concentrations.]

While published studies are beginning to suggest that net feedbacks in the climate system could be negative for year-to-year variations (e.g., our 2007 paper, and the new study by Lindzen and Choi, 2009), there remains the question of whether the same can be said of long-term climate sensitivity (and therefore, of the strength of future global warming).
Even if we find observational evidence of an insensitive climate system for year-to-year fluctuations, it could be that the system’s long-term response to more carbon dioxide is very sensitive. I’m not saying I believe that is the case – I don’t – but it is possible. This question of a potentially large difference between the short-term and long-term responses of the climate system has been bothering me for many months.
Significantly, as far as I know, the climate modelers have not yet demonstrated that there is any short-term behavior in their models which is also a good predictor of how much global warming those models project for our future. It needs to be something we can measure, something we can test with real observations. Just because all of the models behave more-or-less like the real climate system does not mean the range of warming they produce encompasses the truth.
For instance, computing feedback parameters (a measure of how much the radiative balance of the Earth changes in response to a temperature change) would be the most obvious test. But I’ve diagnosed feedback parameters from 7- to 10-year subsets of the models’ long-term global warming simulations, and they have virtually no correlation with those models’ known long-term feedbacks. (I am quite sure I know the reason for this…which is the subject of our JGR paper now being revised…I just don’t know a good way around it).
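(For readers unfamiliar with the term, here is a minimal sketch of how a feedback parameter is commonly diagnosed: as the regression slope of top-of-atmosphere net radiative flux anomalies on surface temperature anomalies. The synthetic data and the simple regression are assumptions for illustration only, not the exact calculation used in the papers mentioned here.)

```python
# Minimal sketch, not the exact diagnostic used in the papers discussed here:
# a feedback parameter is commonly estimated as the regression slope of
# top-of-atmosphere net radiative flux anomalies (W/m^2) on surface
# temperature anomalies (deg C).
import numpy as np

def diagnose_feedback(temp_anom, flux_anom):
    """Regression slope in W/m^2 per deg C; larger values imply lower sensitivity."""
    slope, _intercept = np.polyfit(temp_anom, flux_anom, 1)
    return slope

# Synthetic stand-in data only (not real model or satellite output).
rng = np.random.default_rng(0)
t = rng.normal(0.0, 0.3, size=120)                  # monthly temperature anomalies
flux = 2.0 * t + rng.normal(0.0, 1.0, size=120)     # flux response plus radiative noise
print(round(diagnose_feedback(t, flux), 2))         # noisy estimate near 2.0
```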
But I refuse to give up searching. This is because the most important feedbacks in the climate system – clouds and water vapor – have inherently short time scales…minutes for individual clouds, to days or weeks for large regional cloud systems and changes in free-tropospheric water vapor. So, I still believe that there MUST be one or more short term “markers” of long term climate sensitivity.
Well, this past week I think I finally found one. I’m going to be a little evasive about exactly what that marker is because, in this case, the finding is too important to give away to another researcher who will beat me to publishing it (insert smiley here).
What I will say is that the marker ‘index’ is related to how the climate models behave during sudden warming events and the cooling that follows them. In the IPCC climate models, these warming/cooling events typically have time scales of several months, and are self-generated as ‘natural variability’ within the models. (I’m not concerned that I’ve given it away, since the marker is not obvious…as my associate Danny Braswell asked, “What made you think of that?”)
The following plot shows how this ‘mystery index’ is related to the net feedback parameters diagnosed in those 18 climate models by Forster and Taylor (2006). As can be seen, it explains 50% of the variance among the different models. The best I have been able to do up to this point is less than 10% explained variance, which for a sample size of 18 models might as well be zero.
Also plotted is the range of values of this index from 9 years of CERES satellite measurements computed in the same manner as with the models’ output. As can be seen, the satellite data support lower climate sensitivity (larger feedback parameter) than any of the climate models…but not nearly as low as the 6 Watts per sq. meter per degree found for tropical climate variations by us and others.
For a doubling of atmospheric carbon dioxide, the satellite measurements would correspond to about 1.6 to 2.0 deg. C of warming, compared to the 18 IPCC models’ range shown, which corresponds to warming of about 2.0 to 4.2 deg. C.
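(The arithmetic behind those numbers is simply forcing divided by feedback; the sketch below assumes the commonly cited forcing of about 3.7 Watts per sq. meter for a doubling of CO2, a standard value that is not stated in the post itself.)

```python
# Rough arithmetic only: equilibrium warming = forcing / net feedback parameter.
# The 3.7 W/m^2 forcing for doubled CO2 is an assumed standard value.
F_2XCO2 = 3.7  # W/m^2

def warming_for_doubling(feedback_param_w_m2_k):
    return F_2XCO2 / feedback_param_w_m2_k

for lam in (0.9, 1.85, 2.3, 6.0):
    print(f"lambda = {lam:.2f} W/m^2/K -> {warming_for_doubling(lam):.1f} deg C")
# ~0.9 gives ~4.1 C (sensitive end of the models), ~1.85 gives 2.0 C,
# ~2.3 gives ~1.6 C (the satellite range quoted above), 6.0 gives ~0.6 C.
```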
The relatively short length of record of our best satellite data (9 years) appears to be the limiting factor in this analysis. The model results shown in the above figure come from 50 years of output from each of the 18 models, while the satellite range of results comes from only 9 years of CERES data (March 2000 through December 2008). The index needs to be computed from as many strong warming events as can be found, because the marker only emerges when a number of them are averaged together.
Despite this drawback, the finding of this short-term marker of long-term climate sensitivity is at least a step in the right direction. I will post progress on this issue as the evidence unfolds. Hopefully, more robust markers can be found that show even a stronger relationship to long-term warming in the models, and which will produce greater confidence when tested with relatively short periods of satellite data.

RR Kampen (07:54:29) :
Re: philincalifornia (07:36:28) :
“I asked you to post numbers that support your theory that Arctic sea ice volume is declining.
I think we can agree on the fact that you can’t.”
‘Think’? Oh well, I think we can agree on the fact that you cannot produce numbers to show the Arctic ice has increased over this century – see what I mean!
I guess you can find the database containing e.g. the Soviet measurements as from about 1930 somewhere…
Well, I posted a more serious reply above.
As for the relation between thickness and age of Arctic sea ice, it is enough to corroborate my statement. Literature on this abounds.
————————-
So, your argument has now become:
RR Kampen can show that Arctic ice volume is decreasing because some guy he’s arguing with on a blog can’t prove that it’s increasing ??
You should see if you can get that published in Nature.
Re: philincalifornia (11:42:46) :
As for the relation between thickness and age of Arctic sea ice, it is enough to corroborate my statement. Literature on this abounds.
So, your argument has now become:
RR Kampen can show that Arctic ice volume is decreasing because some guy he’s arguing with on a blog can’t prove that it’s increasing ??
You should see if you can get that published in Nature.
If you find a contradiction, you should check your hypotheses. You will invariably find at least one of them to be wrong.
The argument was: As for the relation between thickness and age of Arctic sea ice, it is enough to corroborate my statement.
If you lack the knowledge (or pretend to be lacking it), you cannot discuss. The situation is like this:
It is known that Pi (3.141592…) is an irrational number. Now:
– People who don’t know what ‘Pi’ means, cannot argue this theorem.
– People who don’t know what ‘irrational’ means, cannot argue this theorem.
– Most people have never seen the mathematical proof of this theorem. These people cannot argue the theorem.
Around the opening of the twentieth century, ignorant people like this tried to define Pi by law to be a certain rational number in the US.
And you?
This is different from the statement that a global temperature does not make sense. It does make sense even if we don’t have a good measure of it.
This is a statement of faith that you cannot support with actual logic or science. As I said, the planet could have infinitely different temperature conditions that affect life but still show up as having an unchanged average temperature using the integration that you suggest. The simple fact is that you can’t come up with anything meaningful on the basis of a made-up temperature construction using any particular algorithm.
You seem to imply that a construct created by averaging temperature samples that come from a non-equilibrium field can be called the temperature of the whole and that we can simplify the picture by thinking of the earth as having one temperature. But many scientists have pointed out that the earth is not in global thermodynamic equilibrium with its surroundings or within itself so it cannot have a single number that we call temperature.
A long time ago, there was great debate about the distance to the Sun and the estimates [measurements] varied greatly. Yet, the real distance did not, of course. So the concept of a global temperature makes sense.
It does not because it makes no sense thermodynamically. I hate to rely on the old cliché but it makes about as much sense as an average global telephone number.
That we don’t have a good measure of it is another matter and is not even worth discussing.
Of course it is. The field of climate study is treated like the hard sciences, but in the hard sciences we know what it is to really know something well. In the physical sciences we set up clear experiments that control various factors and make very accurate measurements that tell us if our hypothesis makes sense. Little of this takes place in climate science; most of the people who do the research are looking at a non-linear dynamic system that is never in equilibrium and taking snapshots in ever-changing conditions. That puts those climate scientists much closer to the social or political scientists than to chemists or physicists.
And if you can’t come up with an accurate temperature because you don’t have the data, you are simply guessing about what might have been and cannot make claims of certainty as the AGW proponents do. To be a scientist you have to do real science, and that requires accuracy and precision that is missing in this case. We just saw the dendro people get killed because their statistical manipulation of the data created a temperature profile that could not withstand external scrutiny. We also saw the keepers and gatherers of the data miss the fact that around 90% of the stations that provided the data were biased 2C or more, about three times the claimed warming since the end of the LIA. We saw them miss monthly data errors that were easily discovered by outsiders who happened to be paying attention and curious about why the readings were so far off. We saw them keep record high readings even though they came from a faulty sensor that had to be replaced because it was off by 2C. We saw a divergence between the surface data and the satellite and radiosonde measurements.
But what we have had a hard time seeing is the data that is used to come up with the reported average. First the data was unavailable due to copyright claimed by the people handling it. Next, it was unavailable due to country agreements that were not found or provided. Finally, it was unavailable because it was ‘lost’ during an office move. By the rules of science, we have to reject the constructs not supported by actual ‘unadjusted’ data and go back to the fundamentals. If we accept a construction as valid we have to use a transparent method that is accessible to all and list all of the assumptions and data issues. Only after all that has been done can we have a temperature profile that can be examined and evaluated for meaning.
If you want to point out to people that we don’t have a good measure of the global temperature, the wrong way of doing that is to deny that the concept makes sense.
First, you agreed that we don’t have a good measure. Second, you have yet to make an argument that it makes sense. Below is a paper that argues that it does not.
http://www.uoguelph.ca/~rmckitri/research/globaltemp/globaltemp.html
That tunes them out right there and you are labeled a crank.
I think that cranks are people who pretend that they know far more than they actually do and accept aggregate results as fact when they know that the data used to create them was deficient and incomplete. Those people certainly are not practising science because they accept something as fact that cannot be shown to be true.
Vangel says:
That paper is hogwash! First of all, the world consists of more things than have a rigorous thermodynamic definition. (Technically speaking, by the way, no real system is ever in complete thermodynamic equilibrium and hence temperature is never a completely precise thermodynamically-defined concept, and yet we still find it a useful thing to measure.) The global temperature is a metric and while, like most metrics, it doesn’t tell you everything about the system, it does give one a rough idea of how the climate of the earth is changing.
Really the only substance of the paper is where they try to show that their argument has some practical application to measuring global temperature trends with an example involving monthly temperature records from twelve weather stations. They do this by defining averages based on taking an arithmetic average of different moments r of the temperature (and then taking the appropriate rth root). This is reasonable enough if you stick to moments that one could make some vague physical justification for, like r = 1 (normal average), r = 2 (root-mean-square), r = 4 (average radiance of the temperature of the planet). However, what they do is plot their result for the trend found from r = -125 to r = 125. The large negative values of r correspond to an average that basically just gives all the weight to the lowest temperature for each month while the large positive values of r correspond to an average that basically just gives all the weight to the highest temperature for each month…Clearly, a very silly definition of an average!
If you stick to reasonable values of r, the dependence of the trend on r is very small and the only possible practical result of their philosophical musings goes away. And, my conjecture is that for a denser network of weather stations than the twelve stations that they used, the dependence of the trend on r would be even weaker! (This is because the effect of large positive or negative values of r, as I noted, is essentially to put all of the weight on one station value for each month while ignoring the rest. And, as you get a denser network of stations, the difference in temperature between, say, the coldest station and several other stations will tend to be less, which means you will need a larger magnitude for r in order to be essentially including only one…or very few…stations in the average with any significant weight. This is particularly true since they appear to have chosen twelve stations spanning a very broad range of climate from Antarctica to the tropics, which means the effect of adding more stations would be to fill in the temperature range more densely without significantly increasing the standard deviation of the temperatures. [In fact, they chose such a broad range of climates that a denser station network may very well have a lower standard deviation of temperatures.])
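(As a concrete illustration of the r-moment average being described, here is a small sketch using made-up station values; the station temperatures and the log-space trick are assumptions for the example, not numbers or code from the paper.)

```python
# Sketch of the r-moment average described above: average T^r over the stations
# (in kelvins), then take the r-th root. Station values are made up for illustration.
import numpy as np

def r_average(temps_kelvin, r):
    """Generalized mean of order r; r = 1 is the ordinary arithmetic mean."""
    t = np.asarray(temps_kelvin, dtype=float)
    if r == 0:
        return float(np.exp(np.mean(np.log(t))))      # geometric mean (r -> 0 limit)
    logs = r * np.log(t)                               # work in log space so that
    m = logs.max()                                     # huge |r| does not overflow
    return float(np.exp((m + np.log(np.mean(np.exp(logs - m)))) / r))

stations = [248.0, 265.0, 278.0, 288.0, 295.0, 300.0]  # hypothetical monthly means, K
for r in (-125, 1, 2, 4, 125):
    print(r, round(r_average(stations, r), 1))
# Large negative r collapses toward the coldest station, large positive r toward
# the warmest, while r = 1, 2, 4 stay close together (about 279 to 281 K here).
```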
Oh, it is also worth noting that what scientists measure are temperature anomalies rather than absolute temperatures. The reason is that the anomaly field has nicer properties than temperature itself…In particular, temperature anomalies tend to be correlated over fairly large distances whereas for surface temperatures themselves this isn’t so true. (As an extreme example, think of the temperature on top of Mt. Washington vs the temperature in a nearby valley.) See discussion here on anomalies vs absolute temperatures http://data.giss.nasa.gov/gistemp/ for more details.
danappaloupe (00:59:28) :
Also, must I explain why surface area of ice is a weak indicator of total volume of ice?
——————-
Go for it – with sentences that contain numbers please.
—–
Why do you need numbers?
Volume is three dimensional (x, y, z), area is two dimensional (x,y).
Have fun retaking geometry.
As for the multiple claims that no one (or not many, or not the majority, etc.) tries to use one year of weather data to examine climate…
The publisher of this blog alluded to this fallacy with the post of the 28.7% increase “article”, which was then followed by a bunch of people saying climate theories are BS because of one year’s worth of data.
That was all you guys. None of you, who speak up now, spoke up then, nor did anyone else for that matter. It’s called “Group Think”.
You should try to participate in real science, I think you all would really enjoy it. I was at a conference this weekend that evaluated publications prior to peer review, it is an intense and fruitful endeavor, all reasonable comments are taken into consideration in due time and it is an entirely apolitical affair.
You are not very convincing. First, rigorous thermodynamic definitions matter and cannot be replaced with ambiguous statements that lack the necessary precision to deal with the issue. Second, you claim that the global temperature is a metric but you cannot explain how one would come up with that metric and why that particular method is any more meaningful than methods that would yield a different value. As I said, one can come up with an average global telephone number but it won’t have any particular meaning. There are limitless material changes that can happen in heavily populated areas of the world without changing the ‘average global temperature’ that is created by any of the possible methods. That means that many people can experience major changes in climate without changing the average temperature. And if that can happen, the constructed average temperature is not really meaningful.
Then we have the practical factors in this debate. The bottom line is that we do not have accurate surface records to permit an accurate reconstruction of the average temperature even if we could agree on one method that should be used. As Anthony showed, the USHCN stations overwhelmingly show a warming bias that is larger than the claimed warming since the end of the LIA. He has also shown that those stations do not have continuous records and have been moved without proper documentation that would allow one to guess about the effects of the moves. (Given the fact that a move of 100 meters can mean a difference of 0.5C or more, the lack of proper documentation is very harmful to any reconstruction.) Then we have the global data set, which Phil Jones has claimed to have lost during an office move. The global set is even worse than the American data. Unlike the US, most of the world has gone through several wars that have impacted the data gathering process. That means that most of the original stations have been destroyed or moved and that records are very incomplete. Add to that poor funding and instruments and data gathering methods that were never designed to do the job that we are trying to get them to do and there is a serious credibility problem for anything that the reconstructions can produce.
The bottom line is that even if we could decide on a meaningful definition of a global temperature average, agree on its meaning, and come up with a method that is defensible, we don’t have the data to come up with anything accurate. This means that the data keepers and crunchers have a major influence on the final numbers and temperature profiles, which makes the claim of scientific rigour and certainty a joke. As I said above, GISS/NASA have admitted that for the US the 1930s were warmer than the 1990s. Given the fact that the US had the best data and that the global data set is not available for an independent assessment I cannot see how any objective observer can claim that there is a major warming problem.
It makes no difference if you take an absolute reading or come up with an anomaly by subtracting it from an average if you have trouble getting accurate readings and don’t have complete data sets. When your surface station readings are biased by artificial sources of heat and you can’t account for the UHI it is hard to come up with a meaningful picture of what is going on.
And let me restate that the US does not show the temperature profile reported for the globe. According to Hansen, Ruedy and Sato, all of whom are cited on the link that you provided, “The U.S. has warmed during the past century, but the warming hardly exceeds year-to-year variability. Indeed, in the U.S. the warmest decade was the 1930s and the warmest year was 1934.” (http://www.giss.nasa.gov/research/briefs/hansen_07/) It is clear that the US data set does not show any dangerous warming trend so all we have is the global data set. But the global data set is not available for review because Phil Jones claims that he lost it. Sadly, Dr. Jones refused to allow anyone to review the data set when it was not lost so we cannot accept his reconstructed global temperature profile as being scientifically valid.
Joel Shore (15:02:42) :
Really the only substance of the paper is where they try to show that their argument has some practical application to measuring global temperature trends with an example involving monthly temperature records from twelve weather stations. They do this by defining averages based on taking an arithmetic average of different moments r of the temperature (and then taking the appropriate rth root). This is reasonable enough if you stick to moments that one could make some vague physical justification for, like r = 1 (normal average), r = 2 (root-mean-square), r = 4 (average radiance of the temperature of the planet).
Well, the difference is not that small if one uses r = 4, which is the connection of temperature to radiant energy.
From Wikipedia:
“Stefan–Boltzmann law
This law states that the amount of thermal radiation emitted per second per unit area of the surface of a black body is directly proportional to the fourth power of its absolute temperature. That is
j* = σ T^4,
where j* is the total energy radiated per unit area per unit time, T is the temperature in kelvins, and σ = 5.67×10^−8 W m^−2 K^−4 is the Stefan–Boltzmann constant.”
Some numbers.
Take a desert. It is 50C in the day and 0C at night, that is 273K at night and 323K in the day. The linear average between maximum and minimum will be 25C.
The average weighted by T^4 will be about 28C, because the hotter the surface, the more it radiates.
So a difference of roughly 3 degrees in calculating an average is not trivial.
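(A quick numerical check of this example, written out as a short script; the two temperatures are the ones given above and nothing else is assumed.)

```python
# Check of the desert example: compare the ordinary mean of the day and night
# temperatures with the "radiative" mean obtained by averaging T^4 in kelvins
# and taking the fourth root.
day_k = 50.0 + 273.15
night_k = 0.0 + 273.15

linear_mean_c = (day_k + night_k) / 2 - 273.15
radiative_mean_c = ((day_k ** 4 + night_k ** 4) / 2) ** 0.25 - 273.15

print(round(linear_mean_c, 1))      # 25.0
print(round(radiative_mean_c, 1))   # about 28.1
```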
Consider the winter hemisphere versus the summer hemisphere, or the poles versus the tropics: there will be similarly large differences.
It is disingenuous to talk of large powers in averaging in order to discredit a valid peer reviewed observation.
I have said before that I think the T^4 weighted average should be used for a global averaging to have a chance of being rational.
We are told that September was hot from anomalies calculated linearly. The anomaly came from Siberia and regions where the temperatures were below zero anyway. If the T^4 averaging were used there would be no warming, which is what most of the people of the northern hemisphere have experienced (that is what I mean by rational).
anna v says:
But, you have failed to show that any of this is relevant to computing temperature trends. It is irrelevant for our purposes whether we can accurately define an “average global temperature” if the trends of all the various reasonable physical averages are about the same. (Hansen et al have already emphasized the reasons why they look at temperature anomalies rather than absolute temperatures. And, there is no real reason to need to know what the global average temperature is. It is just a good metric to detect how the climate is CHANGING.)
Right…It is disingenuous of them to show a plot that looks dramatic because they are plotting from r = -125 to 125 when it would not look very dramatic at all if we limited r to values for which we could give some physical justification. That is exactly my point. I am not the one who chose to show this large range of values; they are…and the reason they chose to do so is simply because their point about this being relevant disappears if you stick to a reasonable range.
Sorry, I screwed up the formatting in the above. The final two paragraphs are my response, not part of anna v’s comment.
Joel Shore (13:26:21) :
So really, I don’t see what is new that you have learned here other than the fact that more information gives you more information than less information does.
When I focus on the fact that radiation has a T^4 dependence I learn that averaging energy linearly gives exponential weight to anomalies in cold areas for PR purposes. It is not temperature that is important, it is radiation.
A cloud cover in the tropics has an exponentially larger effect on the radiation balance than a cloud cover at the poles. Global temperature is a red herring as far as the radiation balance goes and is used for PR purposes. There should be a ±1C systematic error on all these global curves, which would make them meaningless.
anna v: You can use any metric you want; it’s not going to change the underlying physics. Global temperature trends will be a little bit different depending on which metric you use, but in actual practice those differences are likely to be small. (I invented a quite extreme case above where all the warming occurred in very cold parts of the planet and none over the rest. This is a worst-case scenario for seeing a difference in trends between the T-average metric and the T^4-average metric.)
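(A crude two-region version of that kind of extreme case, with made-up temperatures rather than the numbers actually used above: all of the warming is placed in a cold region while a warm region stays fixed, and the change in the plain mean is compared with the change in the T^4-weighted mean.)

```python
# Two-region illustration with made-up numbers: a 2 K warming confined to a cold
# region, a warm region unchanged, and equal weight for the two regions.
cold_before, cold_after = 250.0, 252.0   # K
warm = 295.0                             # K, unchanged

def plain_mean(a, b):
    return (a + b) / 2

def radiative_mean(a, b):
    return ((a ** 4 + b ** 4) / 2) ** 0.25

d_plain = plain_mean(cold_after, warm) - plain_mean(cold_before, warm)
d_radiative = radiative_mean(cold_after, warm) - radiative_mean(cold_before, warm)
print(round(d_plain, 2), round(d_radiative, 2))   # ~1.0 K vs ~0.76 K: same sign, modestly different size
```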
I agree that the radiation emitted will be proportional to the T^4 average. However, that doesn’t automatically mean it is the better metric to use for everything. Like I said, the best thing to do is to look at the temperature anomaly computed across the globe, but people like to have one number that serves as a useful metric to summarize things…and the global temperature anomaly is such a metric.
“but people like to have one number that serves as a useful metric to summarize things…”
And will accept any number given without giving any consideration to error bars, and hence meaning of the number.
RR Kampen (01:44:35) :
Re: philincalifornia (11:42:46) :
The argument was: As for the relation between thickness and age of Arctic sea ice, it is enough to corroborate my statement.
——————
That wasn’t my argument. My argument was that in the simple mathematical equation a – b = c, if you don’t know a and b, then you don’t know c. That’s all. So stop acting like you do.
Wave your hands around as much as you want about Arctic ice volume, but the data is not in yet, nor will it be for years. When it does come in (if at all) in five years, I suspect that you will not like the answer, which is a rather strange phenomenon that I have observed among warmists. You seem to want desperately for the Arctic ice to melt.
danappaloupe (00:05:54) :
danappaloupe (00:59:28) :
Why do you need numbers?
Volume is three dimensional (x, y, z), area is two dimensional (x,y).
Have fun retaking geometry.
You should try to participate in real science, I think you all would really enjoy it.
—————————
For the first question, see my response to RR. If you are sure that one number is bigger than another number, it usually helps to know what the numbers are.
When you have published 200 peer-reviewed papers and reviewed probably another 200 by other authors, get back to me, and we’ll compare notes.
Sandy says:
True enough, which is why it is so silly to see “skeptics” talking about negative or near-zero temperature trends over (cherrypicked) short time periods, whereas a proper computation with error bars shows that the data over such time periods is compatible with a significant positive trend in addition to zero trend.
However, the trend for global temperature over, say, the 20th century is given with error bars. For example, the IPCC gives the 100-year global surface temperature trend to 2006 as 0.74 +/- 0.18 deg. C with a 90% confidence interval.
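(For readers who want to see what a trend-with-error-bars calculation looks like, here is a minimal ordinary-least-squares sketch on synthetic annual anomalies; the data, the noise level, and the neglect of autocorrelation are all simplifying assumptions, so the numbers are illustrative only.)

```python
# Minimal OLS trend with a 90% confidence interval on synthetic annual anomalies.
# A real analysis would also account for autocorrelation in the residuals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1907, 2007)
anoms = 0.0074 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

res = stats.linregress(years, anoms)
t_crit = stats.t.ppf(0.95, years.size - 2)      # two-sided 90% interval
half_width = t_crit * res.stderr
print(f"trend = {100 * res.slope:.2f} +/- {100 * half_width:.2f} deg C per century (90% CI)")
```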
By the way:
danappaloupe (00:59:28) :
Why do you need numbers?
———————-
If I suggested this as quote of the week, could I be accused of cherrypicking warmists ??
Some of us skeptics think it’s “silly” to apply linear regressions to nonlinear functions… like temperature anomaly series.
If I apply a linear regression to a partial sine wave… I can get a heck of a positive trend… Partial Sin Wave
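(A tiny demonstration of that point, fitting a straight line to the rising half of a pure sine wave; the wave is synthetic, not any of the series linked here.)

```python
# Fit a straight line to half a cycle of a pure sine wave: the function has no
# long-run trend, yet the regression reports a strongly positive slope.
import numpy as np

x = np.linspace(0.0, np.pi, 200)
y = np.sin(x - np.pi / 2)              # rises from -1 to +1 over this half cycle
slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 2))                 # about 0.77, despite zero trend over full cycles
```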
If I break the UAH temperature data down into band-pass filtered components, it essentially has no “trend”… just a series of nested harmonic components…
UAH Filtered
If I perform a band pass filter on the HadCRUT3 series (high pass 2, low pass 8) and compare it to a low pass 3rd harmonic of the JISAO PDO, I get an astoundingly strong “wavelet” correlation…
HadCRUT3 v PDO
I used the Wood For Trees data and Fourier analysis tools to make the above graphs. The “tails” at the ends of the low frequency time series are not “real” because the time series don’t start and end at zero. But the frequency components do “add up” to yield the original data.
When applying a linear regression to a nonlinear function, cherry picking is in the eyes of the beholder. What is the trend of Moberg’s 2,000-year climate reconstruction?
Moberg
Why is the linear up-trend for the late 20th century more significant than the linear down-trends from 1942-1976 and 2001-2009?
Where does statistical significance reside? A linear trend line from the Cretaceous to the present would be strongly negative. A linear trend line from the Wisconsin glaciation would be strongly positive.
philincalifornia (11:02:17) :
Re: philincalifornia (11:42:46) :
——————
That wasn’t my argument. My argument was that in the simple mathematical equation a – b = c, if you don’t know a and b, then you don’t know c. That’s all. So stop acting like you do.
Wave your hands around as much as you want about Arctic ice volume, but the data is not in yet, nor will it be for years. When it does come in (if at all) in five years, I suspect that you will not like the answer, which is a rather strange phenomenon that I have observed among warmists. You seem to want desperately for the Arctic ice to melt.
Your argument is correct but irrelevant. The argument was: As for the relation between thickness and age of Arctic sea ice, it is enough to corroborate my statement. By the way, did you know the new expanses of open sea in the autumn Arctic were preceded by a strong thinning of the ice?
I would desperately have preferred cooling because I hate any temperature above +15° C. So I hate the melting of Arctic sea ice but unfortunately I have to accept the facts.
philincalifornia (14:55:53) :
By the way:
danappaloupe (00:59:28) :
Why do you need numbers?
———————-
If I suggested this as quote of the week, could I be accused of cherrypicking warmists ??
What are you getting at… Volume is more important than area; no intelligent person needs numbers to figure out why that is a true statement.
PS I just saw a Republican Senator say that ‘yes some glaciers are melting, but the last couple of summers have been the warmest on record’… as his reason for not believing in climate change. How can you expect someone who says that to respect science and research…
philincalifornia (07:36:28) :
I asked you to post numbers that support your theory that Arctic sea ice volume is declining.
I think we can agree on the fact that you can’t.
In other words, you have reached the conclusion you wanted to reach with zero experimental data.
OK, here
http://www.agu.org/sci_soc/prrl/2009-19.html
In case you don’t like satellites, the second graph from the bottom is data collected by military submarines that gathered data for reasons much different than climate.
http://nsidc.org/sotc/sea_ice.html
I asked you to post numbers that support your theory that Arctic sea ice volume is declining.
First, I hate to bring this little fact up, but the extent of the ice cover is determined by a number of factors, with wind and current being the main ones. Second, when the Polar 5 survey measured the ice thickness, it was found to be 100% thicker than what the satellite people were saying.
But neither of the above are as important as the fact that the global ice cover is not all that far above the satellite era average. That means that this debate is primarily about normal variation where the argument is about noise rather than signal. The bottom line is that the global ice cover data is not cooperating with the AGW proponents and things are not what they claim them to be. Given the fact that this debate is about climate change and the ice cover and thickness data cannot conclusively move the needle in either direction I don’t see the point in arguing about it.
I half agree. Many predictions call for an increase in ice caps due to increased precipitation. The same goes for glaciers in certain areas as well.
The only reason I bring it up is that I have heard many points from AGW skeptics, on this site, even the author, about how an increase in ice area is reason to refute AGW.
Honestly it is really hard to keep track of the factions of AGW skeptics… some deny there is any warming at all, some say it is within historical trends, others contest the focus on CO2, whether it can in fact cause warming, or how closely it follows changes in temp…
I tend to stick with NASA research….
REPLY: Well at least Dr. Spencer has the courage to put his actual name to his words. Many of the AGW crowd are cowards in that regard, preferring to snipe from behind the bushes of anonymity. – Anthony
Vangel (17:20:50) :
Given the fact that this debate is about climate change and the ice cover and thickness data cannot conclusively move the needle in either direction I don’t see the point in arguing about it.
danappaloupe (16:48:54) :
—————
As I’m sure you are aware, Vangel, I wasn’t just arguing about sea ice volume. I was trying to point out that if someone makes a statement of quantitation, it is a necessity to have numbers. Parroting two links that also do not have quantitation of the subject under discussion (Arctic sea ice volume) doesn’t cut it either at this level.
This argument could be about the number of coins in danappaloupe’s pocket yesterday versus today.
I think we will see what I am talking about when the results of the clownish Catlin expedition hit the news tomorrow.
I’m hoping, and somewhat confident, based on what I’ve read, that the Wegener Institute Polar 5 results can be used to set a baseline, at 2009, for Arctic sea ice volume.
danappaloupe:
I half agree. Many predictions call for an increase in ice caps due to increased precipitation. The same goes for glaciers in certain areas as well.
The only reason I bring it up is that I have heard many points from AGW skeptics, on this site, even the author, about how an increase in ice area is reason to refute AGW.
I think that you may misunderstand the intent. What the sceptics are pointing out is that the AGW proponents have misused the ice data. They did not look at the global ice cover, and when numbers from various reporting groups were posted, other measurements did not agree with those numbers. Whether we like it or not, the Polar 5 survey showed that the satellites did not come up with very accurate results. The real-world data, using more accurate measurement methods and equipment, showed that the satellites reported half the thickness that was actually measured. That is a serious issue that we cannot gloss over.
Frankly, I have a problem with the field because I do not see how it measures up to the level of the physical sciences. Making all kinds of assumptions that are not necessarily valid, using computers to fill in missing data, adjusting raw data without proper justification, and creating algorithms that are not always reliable is not real science and cannot really tell us enough to know the subject as well as we need to in order to come up with meaningful decisions.
Honestly it is really hard to keep track of the factions of AGW skeptics… some deny there is any warming at all, some say it is within historical trends, others contest the focus on CO2, whether it can in fact cause warming, or how closely it follows changes in temp…
I tend to stick with NASA research….
But the NASA research shows that the 1930s were the warmest decade for the US. It clearly shows no massive warming problem even though the raw data is ‘adjusted’ to make the present warmer than the measurements suggest. NASA does not use the studies that show the true impact of the UHI effect on temperature measurements over the years but sticks with the claim made by Phil Jones (yes, the same guy who lost the CRU global data set) on the basis of Chinese and Russian stations that were supposedly in good shape but were found to have been moved a number of times and to be lacking complete data. And let me note that when the people at the National Aeronautics and Space Administration refrain from using space based satellite systems, which provide continuous and complete data, and go with unreliable surface measurements that come from a network in which 89% of stations are biased by more than 2C, we need to examine both motives and competence.
The competence of NASA is a serious issue because its well-funded program failed to discover what Anthony and the people that helped him found when they audited the climate network. How NASA can claim to be making valid ‘adjustments’ to the raw data when it did not know that there was a 2C bias to the upside is something that needs to be investigated. We also need to have access to both the raw data and the algorithms that make all of the adjustments to it to see if they make sense. And we also need an independent audit of NASA’s quality control system. If they missed the station network problems and missed when the global data set added measurements from the wrong month that created a major false warming signal, what else is being missed that we should know about?
As for motivation, the simplest answer is the obvious. Scientists are human beings and subject to the same desires and wants as the rest of us. When governments offer money to show a warming problem caused by humans and the media makes stars of those that can show that the problem is real it makes sense that some would be drawn to the money and recognition that comes with offering up scare stories and adjusting incomplete and inaccurate data sets to paint a picture to support that view. They can always claim that their adjustments make sense because other data show a similar profile, even though the other data is also invalid because it is cherry picked or adjusted to provide a false profile.
The bottom line is that the real data does not support the AGW claims. As I pointed out above, even NASA admits that the data it has gathered for the US shows that the 1930s were the warmest decade. How you can take that to mean that we have an AGW problem is something that you have to look into.