UPDATE: 8/18 10:30AM I spoke with Dr. Judith Curry by telephone today, and she graciously offered the link to the full paper here, and has added this graphic to help clarify the discussion. I have reformatted it to fit this presentation format (side by side rather than top-bottom). While this is a controversial issue, I ask that you please treat Dr. Curry with respect in discussions, since she is bending over backwards to be accommodating. – Anthony
===========================================================
[Update] My thanks to Dr. Curry for showing the graphic above, as well as for her comment below and her general honesty and willingness to engage on these and other issues. She should be a role model for AGW supporters. I agree totally with Anthony’s call for respect and politeness in our dealings with her (as well as with all other honest scientists who are brave enough to debate their ideas in the blogosphere). I also commend the other author of the study, Jiping Liu, for his comments below.
However, as my Figure 2 below clearly shows, any analysis of the HadISST data going back to 1950 is meaningless for the higher Southern latitudes. The HadISST data before about 1980 is nonexistent or badly corrupted for all latitude bands from 40°S to 70°S. As a result, although the HadISST graphic above looks authoritative, it is just a pretty picture. There are five decades in the study (1950-1999). The first three of those decades contain badly corrupted or nonexistent data. You can’t make claims about overall trends and present authoritative-looking graphics when the first three-fifths of your data is missing or useless. – Willis
===========================================================
Guest Post by Willis Eschenbach
Anthony has posted here on a new paper co-authored by Judith Curry of Georgia Tech, entitled “Accelerated warming of the Southern Ocean and its impacts on the hydrological cycle and sea ice”. The Georgia Tech press release is here. Having obtained the paper courtesy of my undersea conduit (h/t to WS once again), I can now comment on the study. My first comment is, “show us the data”. Instead of data, here’s what they start with:
Kinda looks like temperature data, doesn’t it? But it is not. It is the first Empirical Orthogonal Function of the temperature data … the original caption from the paper says:
Figure 1. Spatial patterns of the first EOF mode of the area-weighted annual mean SST south of 40 °S. Observations: (A) HadISST and (B) ERSST for the period 1950–1999. Simulations of CCSM3 (Left) and GFDL-CM2.1 (Right): (C, D) 50-year PIcntrl experiment (natural forcing only) …
Given the title of “Accelerated warming”, one would be forgiven for assuming that (A) represents an actual measurement of a warming Southern Ocean. I mean, most of (A) is in colors of pink, orange, or red. What’s not to like?
When I look at something like this, I first look at the data itself. Not the first EOF. The data. The paper says they are using the Hadley Centre Sea Ice and Sea Surface Temperature (HadISST) data. Here’s what that data looks like, by 5° latitude band:
Figure 2. HadISST temperature record for the Southern Ocean, by 5° latitude band. Data Source.
My first conclusion after looking at that data is that it is mostly useless prior to about 1978. Before that, the data simply doesn’t exist for much of the Southern Ocean; it has just been filled in as a single representative value.
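For readers who want to poke at this themselves, here is a minimal sketch of the band-averaging behind Figure 2, assuming the HadISST field has already been read into a masked (time, lat, lon) NumPy array. The function and variable names are mine for illustration; this is not the script actually used for the figure.

import numpy as np

def band_means(sst, lats, band_width=5.0):
    # sst:  (ntime, nlat, nlon) masked array of SST in deg C, missing cells masked
    # lats: (nlat,) array of grid-cell center latitudes
    edges = np.arange(-70.0, -40.0 + 0.1, band_width)
    results = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        rows = (lats >= lo) & (lats < hi)
        band = sst[:, rows, :]
        # area-weight each row by cos(latitude), ignoring missing cells
        w = np.cos(np.radians(lats[rows]))
        w = np.broadcast_to(w[None, :, None], band.shape)
        w = np.ma.array(w, mask=np.ma.getmaskarray(band))
        num = (band * w).reshape(len(band), -1).sum(axis=1)
        den = w.reshape(len(band), -1).sum(axis=1)
        results[f"{lo:g} to {hi:g}"] = num / den
    return results

Where a band has no real observations, the mask leaves the result undefined rather than quietly averaging infilled values, which is exactly the pre-1978 problem.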
So if I had been a referee on the paper my first question would be, why do the authors think that any analysis based on that HadISST data from 1950 to 1999 has any meaning at all?
Next, where is the advertised “Accelerated warming of the Southern Ocean”? If we look at the period from 1978 onwards (the only time period with reasonable data over the entire Southern Ocean), there is a slight cooling trend nearest Antarctica, and no trend in the rest of the Southern Ocean. In other words, no warming, accelerated or otherwise.
Finally, I haven’t even touched on the other part of the equation, the precipitation. If you think temperature data is lacking over the Southern Ocean, precipitation data is much worse. The various satellite products (TRMM, SSM/I, GPCC) give widely varying numbers for precipitation in that region, with no significant correlation between any pair (maximum pairwise r^2 is 0.06).
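The pairwise r^2 check itself is simple to reproduce. The sketch below runs on random stand-ins, since the actual gridded products are not attached here; the names are just labels:

import numpy as np
from itertools import combinations

def pairwise_r2(series):
    # series: dict mapping product name -> equal-length 1-D time series
    return {(a, b): np.corrcoef(series[a], series[b])[0, 1] ** 2
            for a, b in combinations(series, 2)}

# demo with uncorrelated random stand-ins for the real products
rng = np.random.default_rng(0)
fake = {name: rng.standard_normal(120) for name in ("TRMM", "SSM/I", "GPCC")}
print(pairwise_r2(fake))  # every pair comes out near zero, as with the real data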
My conclusion? There is nowhere near enough Southern Ocean data on either side of the temperature/precipitation equation to draw any conclusions. In particular, we can say nothing about the period pre-1978, and various precipitation datasets are very contradictory after 1978. Garbage in, you know what comes out …



[no large cutting and pasting ~ ctm]
Oakden Wolf says:
August 18, 2010 at 9:28 pm
… An article on this theme: How can annual average temperatures be so precise?
_________________________________________________________
So if I follow this article correctly, when the number of reporting stations was drastically cut, so was the precision; therefore the global temperature data we see now is less precise than it was prior to 1910. http://diggingintheclay.files.wordpress.com/2010/04/canadadt.png
Of course, it is still garbage in, garbage out when the measurements are not done according to specification and the data is progressively adjusted:
http://surfacestations.org/
http://jonova.s3.amazonaws.com/graphs/giss/hansen-giss-1940-1980.gif
“Judith Curry says:
August 19, 2010 at 11:56 am
Steven Wilde, the term accelerated hydrologic cycle is generally associated with an increase in precipitation. The scale of this can be referred to as global, or regional.”
Of course, but more precipitation requires more evaporation to maintain global humidity (as demonstrated by that fixed optical depth of 1.87 for over 60 years), and the net effect is faster upward energy transport, so more CO2 giving more downward IR may be negated altogether rather than warming the ocean.
You have recognised that process regionally. If you recognise it globally, then you need to do some explaining to support AGW and your models at all.
You need to demonstrate that after the extra downward IR has hit the water surface there is a net surplus of additional energy in non latent form after the enhanced rate of evaporation has taken place.
I don’t believe that anyone can demonstrate such a net surplus because evaporation is a net cooling process.
Your paper and specifically your reference to a variable speed for the hydrological cycle potentially puts a boot into AGW theory.
As pointed out above the paper contains this remark:
“the decrease of upward ocean heat transport as a result of the enhanced hydrological cycle”
If one has an enhanced hydrological cycle, how does that cause upward heat transport to decrease? It must by definition cause it to increase. Faster evaporation always sucks energy out of the water faster and carries it towards the tropopause.
On the face of it that sounds bizarre pending Judith’s clarification.
Stephen Wilde says:
August 19, 2010 at 1:53 pm
“As pointed out above the paper contains this remark:
“the decrease of upward ocean heat transport as a result of the enhanced hydrological cycle”
If one has an enhanced hydrological cycle, how does that cause upward heat transport to decrease? It must by definition cause it to increase. Faster evaporation always sucks energy out of the water faster and carries it towards the tropopause.
On the face of it that sounds bizarre pending Judith’s clarification.”
Very good point Stephen. In fact the whole premise of this piece of work seems bizarre…
The paper uses modelled temperature data to create another model showing the effects on Antarctic sea ice extent, then claims that the hypothesis is consistent with observations, which actually show that the Antarctic has cooled!
As one would expect, a cooling Antarctic produces more sea ice, which is a far simpler explanation of what happens, and no paradox exists. No surprise that sceptics feel that climatology has lost touch with reality and has become just a cargo cult science, working to reinforce a political agenda.
From a book (“Godforsaken Sea”) about the Vendee Globe sailboat race, through the Southern Ocean.
I quote:
“Below forty degrees south there is no law;
below fifty degrees south there is no God.
— Old sailors’ saying
@Judith Curry
‘check this out, the UK Met Office is pondering the challenge of surface temperature data sets for the 21st century’
I’m sure they are, and they probably should be.
However, why are so few pondering the apparent challenge of why surface temperature is so important, since most observed temperature readings are land based, but the planet consists of about two-thirds of, well, let’s face it, no land? Put it another way: why are so few climatologists, climate researchers, et al., interested in what’s going on in the oceans, at least if the frakktard called the Sol is not enough on its own?
The only thing about oceans that seems important to climate-whatever people is the precipitation falling on land. The only time you guys care about it falling in the lakes is when the water in the lakes makes a dash for freedom, like over the brim and far, far away.
“”” Michael Larkin says:
August 19, 2010 at 1:58 am
George E. Smith says:
August 18, 2010 at 2:20 pm
“Well let me cast my self adrift on the thin ice (Southern Ocean style) where only fools (like me) don’t fear to tread.
For those non mathematicians who have no idea what Orthogonal Functions are; here’s my stick on a sandy beach explanation.”
Many thanks for this, George. It advances my understanding that little bit further. “””
Michael, I hope the rather sketchy image I created was still intelligible. For simple repetitive cyclic functions, such as square waves or triangle waves, the synthesis as a sum of sinusoidal, harmonically related functions is quite simple to implement, but just a tad laborious, so it is easiest to look up the result in a book. When the signal to be synthesized is a transient or non-repetitive function, then things get a bit more complicated, but still not too difficult to grasp; one has to use things like Fourier integrals rather than Fourier series, or else use other types of functions. But it is somewhat imperative that the chosen functions be “Orthogonal” in the sense that I described, in order to simplify the extraction of the coefficients for each of the terms in the series.
Often the geometry of the function dictates the kind of functions to use for the synthesis. For example, if one has a cylindrical geometry with co-ordinates z, r, theta, then one would choose one sort of functions; but if the geometry were spherical, with an r, theta, phi set of co-ordinates, one would choose a different expansion set of functions. My gray matter is a bit grey these days, so I would have to look up the preferred functions; but as a wild guess, I would say that the cylindrical geometry calls for a set of Bessel functions, and I believe it is the Legendre polynomials that would be used in the spherical case; but I would have to look back in the books to verify that.
But if for example one wanted to synthesize say a square wave which we would define as:-
For 0<x<pi, F(x) = +1; For pi<x<2pi, F(x) = -1; For x = 0 or pi, F(x) = 0
We could define F(x) = A1 sin(x) + A2 sin(2x) + A3 sin(3x) + …
But we don't know what any of the Ai values are.
If we wanted to find A2, we would multiply everything by sin(2x), and then integrate it all for 0<x<2pi.
Now sin(2x) is going to go through one full cycle for x = 0 to pi, and another full cycle for x = pi to 2pi; but our square wave flips sign at x = pi, so the product of the square wave and sin(2x) doesn’t have any net area, so the integral is zero, and hence A2 must also be zero. Well, common sense tells us that the same would be true for any sin(ix) if i is an even number; so A2 = A4 = A6 … = 0, and there are no even coefficients.
Well, you can then go on and figure out the odd coefficients by multiplying by each odd sin(ix) and integrating from 0 to 2pi, and the answer is that Ai is proportional to 1/i; for a unit square wave the exact value is Ai = 4/(i pi).
So our square wave is: F(x) = (4/pi) [sin(x) + (1/3) sin(3x) + (1/5) sin(5x) + …]
Well if I screwed it up, I am sure Phil can straighten me out; but you get the idea.
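For anyone who would rather check that numerically than trust memory, here is a small sketch of exactly the projection George describes; the grid resolution is arbitrary and the code is illustrative only:

import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 200001)
dx = x[1] - x[0]
square = np.where(x < np.pi, 1.0, -1.0)  # +1 on (0, pi), -1 on (pi, 2pi)

for i in range(1, 8):
    # project onto sin(i*x): integrate F(x)*sin(i*x) over one period, divide by pi
    a_i = np.sum(square * np.sin(i * x)) * dx / np.pi
    print(i, round(a_i, 4))

The even coefficients come out essentially zero, and the odd ones come out as 4/(i pi): the 1/i pattern with the overall 4/pi factor restored.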
Now the Empirical Orthogonal Functions (EOFs) that Dr Curry mentions in the paper are more complex, because the function is now a function of two variables, space and time; but sadly I have no idea just what the “empirical” part means.
Well I guess you are supposed to learn more stuff to get a PhD; and I didn't learn that part I guess.
The orthogonal part can be thought of in a geometrical sense if one thinks of the normal axes of Euclidean geometry as being three orthogonal directions, in the sense that they are at right angles to each other. Well, our orthogonal functions are somewhat akin to that, but in many more dimensions than three; and that fact of each being at right angles to all of the others, even with 123 different dimensions (terms in the series), is what lets you isolate them one at a time to evaluate the coefficients. Well, it is something like that; but EOFs are over my head.
George
RE: cal: Old Thread (August 17, 2010 at 1:43 am) “The temperature of the tropopause is 220K everywhere e.g. over the poles and over the equator or tropics. So the radiation at these wavelengths will always be on the 220K line as the charts show.”
Given that this statement is approximately true, I would think it should be possible to estimate the effective heat-sinking capacity of the tropopause by calculating the tropopause altitude change as a function of the latitude-dependent average energy that is being convected up to that level.
While the Arctic and Antarctic regions may each be independent and driven by their own regional dynamics (they are almost negative images of each other as regards land-sea configuration), the reported near-constant net sum of ice extent for the two regions over the last thirty years leads me to suspect that there might be some sort of ‘Trans-Arctic’ linkage between these two polar areas.
Coalsoffire says:
August 19, 2010 at 11:23 am
“Yabbut if you don’t have any data for real places can’t you just use made up data from GCM’s and get the result you want? It might seem silly to the rest of us, but (apparently)… that’s climate science!”
Dilbert of course has something about accurate and made up numbers…
(Sadly this is my concept of what many climate scientists are up to):
http://dilbert.com/strips/comic/2008-05-08/
Bob Tisdale says:
August 19, 2010 at 4:01 am
I got up this morning, and I was so not looking forward to doing a full analysis of the various Southern Ocean temperature datasets, and voila! Bob Tisdale has done a stunning job of it. Many thanks.
My takeaway message from his analysis is that for the area relevant to the Antarctic sea ice, the area south of 60°S, all of the datasets say the same thing post 1980. They all show cooling for the last thirty years. Cooling from 1980 to 1990. Cooling from 1990 to 2000. And cooling from 2000 to the present. Thirty years of cooling, does that qualify it as climate?
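The trend arithmetic itself is trivial to reproduce. A minimal sketch follows, run on a made-up series since Bob’s actual data are not attached here; the cooling rate is invented purely for the demo:

import numpy as np

def decadal_trend(series, years):
    # least-squares slope, converted to degrees C per decade
    slope, _ = np.polyfit(years, series, 1)
    return slope * 10.0

# demo on a fake series cooling at 0.05 C per decade plus noise
years = 1980 + np.arange(31)
fake_sst = -0.005 * (years - 1980) + 0.05 * np.random.default_rng(3).standard_normal(31)
print(round(decadal_trend(fake_sst, years), 3))  # comes out near -0.05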
I’d like to return to something Judith Curry said before, which I misunderstood completely. She said losing the pre-1980 data didn’t invalidate their hypothesis.
Per the paper, their hypothesis is:
She is correct that losing that data doesn’t invalidate their hypothesis. However, it means that the analysis they have done (taking the EOF of the 1950-1999 data) is not valid. The pre-1980 data is imaginary, so that analysis is specious.
In addition, there does not appear to be enough data to support their claim of “strong warming in the middle latitudes of the Southern Ocean with weak cooling in the high latitudes”. It certainly has not been true for the last thirty years. And before that, we simply don’t know. While this does not invalidate the hypothesis, the fact that there is no strong warming in the middle latitudes of the Southern Ocean casts doubt on the existence of the underlying phenomenon.
It would be interesting to see their method applied solely to the valid post-1980-to-present data. It would also be interesting to see which if any of the models showed the 1980-1999 cooling, since that trend continues through 2010 and thus appears to be climate rather than weather.
Regarding the thirty year cooling trend 1980-present, this does invalidate a part of their claims. They say:
Since the area has been cooling for 30 years, as verified by all sea surface temperature datasets and by the UAH MSU satellite lower atmosphere temperatures for the area, more ice is not a paradox requiring explanation. This invalidates that particular line of argument (but as Judith says, does not invalidate the underlying hypothesis).
C&L’s hypothesis can stand IFF it is hypothetical. That is, if there had been warming, then … etc. That there has been no warming is thus irrelevant, because the consequences might well have occurred if there had been. We’ll just have to wait until some occurs, I guess, to find out!
LOL.
I still don’t think albedo has much of an effect at all. If it did, a winter like last year where much of the NH was covered in snow and ice would cause a runaway effect from which we’d never recover. All it took was a little heat released from the oceans, or a slight change in the angle of attack from the sun and it all went away.
It shows me that slight orbital/rotational changes CAN drastically impact climate globally.
Getting back to the Empirical Orthogonal Function issue: I appreciate the comments people have contributed.
I do think that these types of modeling contribute to understanding phenomena, but we have to recognize how they can let us get ahead of ourselves. I could be wrong, and I welcome correction, but the glorified PCA has two limits worth mentioning. First, the “orthogonality” issue: the equation strives to maximally explain observed variance by maximally weighting each of the various data sources entered (“the data sources entered” may account for the “empirical” term, but I am just guessing; it could indicate that this is more akin to a path analysis than a latent variable model; I just don’t know). The contrast to “orthogonality” is “obliqueness”: the maximizing function, if allowed to be oblique, can strive to weight inputs while letting the resulting “components” or “factors” be correlated. That is, two inputs such as rain and cloud cover can each predict temperature, but they also influence each other. An “orthogonal” assumption, in my understanding, dictates from the outset, before any data are run, that the solution will strive to maximally explain variance while minimizing the influence of any resulting component, or factor, upon another.
Usually, models run either way do not differ much: the variance between inputs is what it is, and only so much can be made of the similarities and differences between one line of measurement (e.g., rain) and another (e.g., cloud cover). But with these climate forecasting models, as I mentioned before, being off by one percent in a weighting will take you far off-course across a long time. And we know that aspects of weather are all inter-related. So it is possible that an orthogonal model, if that means what I am wondering it means, might not be the most fitting, and the modeling could be improved by allowing the analysis to find a solution with some inter-relation between the factors it finds.
The second issue is this: a factor analysis proceeds by developing a first factor that has the greatest ability to explain the greatest amount of variation among the inputs. It then strives to develop a second factor whose inputs are weighted so as to explain the remaining variance. Across nearly every PCA I have ever seen, or run myself, the resulting factors always follow the same pattern: a first factor explains the lion’s share of variance, and the portion explained by the remaining factors falls off in decreasing order, giving the familiar scree plot. It must be recognized that this is fitting for some data analysis needs, such as estimating what premium to charge someone for life insurance (that is, how few measures, such as age, blood pressure, and smoking status, do you need to maximally predict a person’s life span, in order to estimate how likely the person is to die in the next 10 years for a 10-year policy), but this is not the way that natural phenomena, such as weather or climate, work.
You can run a factor analysis that keeps developing factors until it can no longer make any more with, say, an eigenvalue greater than 0.5 or whatever. Or you can set a pre-determined number of factors and let the maximizing function run that way. But the method itself will not strive to make factors that, overall, maximally explain or predict variance while making each factor relatively equal in its portion of variance accounted for; one factor always “gets there first,” and so gets to claim common variance that would otherwise be shared between factors.
And what about error? All the input measures are correlated, and so will share common variance. The first factor developed accommodates, mathematically, a great deal of the shared variance, including shared error variance! The shared error variance is a big reason why models need to be developed on one set of data, then “replicated” or validated on another parallel data set; this could be split-half data sets, or taking the model for Antarctica and seeing how well it works for the Arctic. In any case, the main issue with the factor-development order is that it is a mathematical approach, not an approach that follows the natural phenomena.
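A quick illustration of that “first factor gets there first” point, on synthetic data; the numbers are invented, and only the ordering of the eigenvalues is the point:

import numpy as np

rng = np.random.default_rng(1)
shared = rng.standard_normal((500, 1))  # one common driver behind every input
X = shared @ rng.standard_normal((1, 8)) + 0.5 * rng.standard_normal((500, 8))
X -= X.mean(axis=0)

# PCA via the covariance matrix; sorted eigenvalues give the familiar scree
cov = X.T @ X / (len(X) - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]
print(np.round(eigvals / eigvals.sum(), 3))  # mode 1 grabs the lion's share

Because the noise is also shared across the inputs, the leading mode absorbs some of it too, which is the error-variance caveat above.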
We all come from different disciplines, where similar techniques may have different terms. I have been wordy to try to get over that barrier.
George Smith explained “cyclic functions”; in my conjecture, neither the inputs nor the resulting factor weightings are cyclic functions. I could be wrong. I am mulling that over.
Bob Tisdale said: “…aren’t you really performing an EOF analysis of the statistical methods NCDC and Hadley used to infill the data. In other words, if you use HADSST2 data, which does not infill all of that missing data, would you get the same results?”
To the degree that the input data are limited, they will have truncated variance compared to reality. If they are used to develop weightings for predictions, projections, alternate scenarios, estimations, and so on, the result will be that the weightings will be over-estimates, and the projections will be too large. This is one of the recognized limits of Mann 1998, where the model was “trained” on limited data, then used to predict the future.
RE: George E. Smith: (August 19, 2010 at 6:46 pm) “… but it is somewhat imperative that the chosen functions be ‘Orthogonal’ in the sense that I described, in order to simplify the extraction of the coefficients for each of the terms in the series.”
As a minor aside, I would like to comment that the *intelligent* use of automatic mathematical optimization applications, such as the Microsoft Excel Solver Add-In utility, which use a hunt-and-seek method to find a best solution to a given mathematical problem, can, in many cases, obviate the requirement to use truly orthogonal function sets for data representation.
The primary issue with this technique is making sure the solution found is not a minor local optimum. With the power of modern computers it is possible to use techniques to solve a problem that would take multiple millennia if done by hand.
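A minimal sketch of Spector’s point: with a numerical least-squares solve, coefficients can be fitted to a deliberately non-orthogonal basis in one call. The basis and target below are invented for illustration. Note that a linear fit like this has a single global optimum; the local-optimum hazard he mentions arises when parameters enter nonlinearly, which is where hunt-and-seek solvers like Excel’s come in:

import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1000)
target = np.where(x < np.pi, 1.0, -1.0)  # the square wave again

# deliberately non-orthogonal (mutually correlated) basis functions
basis = np.column_stack([np.sin(x),
                         np.sin(x) + 0.5 * np.sin(3 * x),
                         x / np.pi - 1.0])

# one linear least-squares solve finds the coefficients; no orthogonality needed
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
print(np.round(coeffs, 3))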
The paper (Curry&Liu) p 8 says:
“the increased precipitation in the high latitudes of the Southern Ocean is mainly in a form of snow.
This would increase the ice albedo, reducing absorbed solar radiation and encouraging ice growth. As a consequence, the reduced upward ocean heat flux and increased snowfall associated with the enhanced hydrological cycle tends to maintain the Antarctic sea ice”
This is a basic assumption in the article and I think it has to be wrong.
Snow on ice will never encourage sea or lake ice growth; it’s the other way around. Snow (especially cold, dry snow) is very effective as a thermal insulator and will drastically reduce the ice growth rate. For someone who has tried to “build” and maintain ice roads, this is basic knowledge.
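A back-of-the-envelope check of that insulation point: heat leaving the freezing interface must conduct through the ice and the snow in series, so the growth rate scales with the combined thermal resistance. The conductivities below are standard textbook values, not from the paper, and the geometry is deliberately crude:

K_ICE, K_SNOW = 2.2, 0.15   # W/(m K); dry snow conducts roughly 15x worse than ice
LATENT = 3.0e8              # J/m^3, volumetric latent heat of freezing water
DT = 20.0                   # K, temperature drop from the water (0 C) to cold air

def growth_mm_per_day(h_ice, h_snow):
    # ice and snow act as thermal resistances in series
    flux = DT / (h_ice / K_ICE + h_snow / K_SNOW)  # W/m^2 conducted upward
    return flux / LATENT * 1000.0 * 86400.0        # mm of new ice per day

print(growth_mm_per_day(0.5, 0.0))  # bare half-metre ice: about 25 mm/day
print(growth_mm_per_day(0.5, 0.1))  # add 10 cm of snow:   about 6 mm/day

On these rough numbers, 10 cm of dry snow cuts the growth rate roughly fourfold, which is the ice-road experience the commenter describes.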
“”” Spector says:
August 20, 2010 at 12:05 pm
RE: George E. Smith: (August 19, 2010 at 6:46 pm) “… but it is somewhat imperative that the chosen functions be ‘Orthogonal’ in the sense that I described, in order to simplify the extraction of the coefficients for each of the terms in the series.”
As a minor aside, I would like to comment that the *intelligent* use of automatic mathematical optimization applications, such as the Microsoft Excel Solver Add-In utility, which use a hunt-and-seek method to find a best solution to a given mathematical problem, can, in many cases, obviate the requirement to use truly orthogonal function sets for data representation.
The primary issue with this technique is making sure the solution found is not a minor local optimum. With the power of modern computers it is possible to use techniques to solve a problem that would take multiple millennia if done by hand. “””
No quarrel from me Spector; I’ve already indicated that I am totally ignorant on EOFs and indeed of much of what you just wrote here.
So my post on orthogonal Functions was simply a feeble attempt to show non-mathematicians what “orthogonal functions” are and how they can be used to synthesize other functions; and why orthogonality is a useful property in such syntheses.
So let me repeat; I’m quite ignorant of EOFs and their use; but you can bet that I am mighty curious to learn about them now.
George
By the way Spector,
I spend most of the working part of my day actually using intelligent optimising software; in my case, in optimising the “merit function” I have defined for some optical system. I have no knowledge of how the program’s optimisation routines work in detail, except that the simplest of them does do the continuous downhill-walk thing, which finds a local optimum.
And they have other more powerful routines that will search for a global optimum; which can take eons; and lead to optical structures that nobody would ever try to turn into “glass and brass”.
But that is simply a tool for me; and I know nothing about the algorithms; except they work very well.
George
“”” TheLastDemocrat says:
August 20, 2010 at 10:36 am
George Smith explained “cyclic functions:” in my conjecture, neither the inputs nor the resulting factor weightings are “cyclic functions.” I could be wrong. I am mulling that over. “””
No need to mull anything, Demo; that post was nothing more than a simple back-of-the-envelope description of what “orthogonal functions” are in one of the simplest such environments, namely repetitive (cyclic) functions that can be represented by a Fourier series, such as a complex sound waveform for example. It was not intended to relate directly to the climate discussion at hand; it was simply for the lay person who did not understand orthogonal functions. And I didn’t want to muddy the water by expanding into Fourier integrals for non-cyclic functions. As I have explained now several times, I’m quite ignorant of EOFs and how they are used, although I am fully cognisant of functions of multiple variables and the sampled-data consequences for such functions. But I do not routinely deal with multivariable data maps such as the global time-varying temperature map, which presumably, if properly sampled in time and space, would in fact yield a true value for a mean global temperature, but for which any available data set fails the Nyquist criterion on both temporal and spatial sampling, rendering any knowledge of the mean global temperature of the earth (over, say, a year) unobtainable. Well, other than a WAG value.
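A toy illustration of that Nyquist point: sample a field more coarsely than its fastest cycle and the estimated mean is biased. The made-up station below has a 10 C peak-to-peak daily swing and is read once per day at a fixed hour:

import numpy as np

hours = np.arange(365 * 24)
# invented site: 15 C mean with a 10 C peak-to-peak daily cycle
temps = 15.0 + 5.0 * np.sin(2.0 * np.pi * (hours % 24) / 24.0)

print(temps.mean())          # the true annual mean: 15.0
print(temps[14::24].mean())  # one reading a day at 14:00: 12.5, biased by 2.5 C

The undersampled estimate is off by 2.5 degrees, and no amount of averaging over more days will fix it, since the aliased error is systematic rather than random.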
So Judith, your protege seems to have explained Antarctic ice build-up and retreat wholly on natural intrinsic oscillations. How does that square with an analysis that seems to be looking more towards AGW forcings or at least speaks of AGW as a wrench in the cogs?
RE: George E. Smith: (August 20, 2010 at 3:43 pm) “I’ve already indicated that I am totally ignorant on EOFs and indeed of much of what you just wrote here.”
My comment was an aside relative to the calculation simplification that can be achieved by using a set of orthogonal functions that have the property that the integral or sum of any possible cross product of two of these functions over a given standard range must be zero. That simplification may not be needed, in some cases, with the modern computational methods now available.
Empirical Orthogonal Functions (EOF) appear to be best-guess estimates of the natural states of a given dynamic system, which are analogous to the vibration modes of a sphere, cylinder, or violin string. I am also relatively ignorant of the technical details of this process.
From the Wikipedia: “In statistics and signal processing, the method of empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from the data. It is the same as performing a principal components analysis on the data, except that the EOF method finds both time series and spatial patterns.”
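To make that Wikipedia definition concrete, here is a toy EOF decomposition via the SVD, on an invented space-time field with one planted mode; nothing here is from the paper:

import numpy as np

rng = np.random.default_rng(2)
ntime, nspace = 100, 50
pattern = np.sin(np.linspace(0.0, np.pi, nspace))         # planted spatial mode
amplitude = np.sin(np.linspace(0.0, 6.0 * np.pi, ntime))  # its behaviour in time
field = np.outer(amplitude, pattern) + 0.1 * rng.standard_normal((ntime, nspace))
field -= field.mean(axis=0)  # remove the time mean at each point

# the EOF decomposition is just an SVD of the (time x space) anomaly matrix
U, s, Vt = np.linalg.svd(field, full_matrices=False)
eof1 = Vt[0]            # first spatial pattern (the kind of map shown in Figure 1)
pc1 = U[:, 0] * s[0]    # its matching time series
print(s[0] ** 2 / (s ** 2).sum())  # fraction of total variance in mode 1

The recovered eof1 is the kind of spatial map shown in the paper’s Figure 1, and pc1 is the time series that goes with it; of course, the decomposition happily produces such maps whether or not the underlying data are any good, which is Willis’s complaint.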