LIVERMORE, California — Using ocean observations and a large suite of climate models, Lawrence Livermore National Laboratory scientists have found that long-term salinity changes have a stronger influence on regional sea level changes than previously thought.
“By using long-term observed estimates of ocean salinity and temperature changes across the globe, and contrasting these with model simulations, we have uncovered the unexpectedly large influence of salinity changes on ocean basin-scale sea level patterns,” said LLNL oceanographer Paul Durack, lead author of a paper appearing in the November issue of the journal Environmental Research Letters.
Sea level change is one of the most pronounced impacts of climate change on the Earth, driven primarily by warming of the global ocean along with water added by melting land-based glaciers and ice sheets. In addition to these effects, changes in ocean salinity can also affect the height of the sea by changing its density structure from the surface to the bottom of the ocean.
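To make the density mechanism concrete, here is a minimal sketch (an illustration, not the paper's method) of how a steric sea-level anomaly can be estimated from temperature and salinity changes over a water column. The expansion and contraction coefficients and the change profiles are assumed values chosen only for plausibility:

```python
import numpy as np

# Illustrative constants -- typical upper-ocean values, not taken from the paper
ALPHA = 2.0e-4  # thermal expansion coefficient (1/K)
BETA = 7.5e-4   # haline contraction coefficient (1/(g/kg))

def steric_anomaly_m(dT, dS, dz):
    """Steric sea-level anomaly (m) from layer temperature changes dT (K),
    salinity changes dS (g/kg) and layer thicknesses dz (m), using a linear
    equation of state: rho'/rho0 = -ALPHA*dT + BETA*dS."""
    drho_over_rho0 = -ALPHA * np.asarray(dT) + BETA * np.asarray(dS)
    return -np.sum(drho_over_rho0 * np.asarray(dz))

# A 0-2000 m column in 20 layers, with made-up surface-intensified changes
dz = np.full(20, 100.0)           # layer thicknesses (m)
dT = np.linspace(0.5, 0.0, 20)    # warming, strongest at the surface (K)
dS = np.linspace(-0.03, 0.0, 20)  # freshening, strongest at the surface (g/kg)

thermo = steric_anomaly_m(dT, np.zeros_like(dS), dz)  # temperature part only
halo = steric_anomaly_m(np.zeros_like(dT), dS, dz)    # salinity part only
print(f"thermosteric: {thermo*1000:+.1f} mm, halosteric: {halo*1000:+.1f} mm, "
      f"total: {(thermo + halo)*1000:+.1f} mm")
```

With these made-up numbers, both warming and freshening reduce density and raise sea level, and the halosteric term works out to roughly a quarter of the thermosteric term, the same order as the basin-scale ratio reported below.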
The team found a long-term (1950–2008) pattern in halosteric (salinity-driven) sea level changes in the global ocean, with sea level increases occurring in the Pacific Ocean and sea level decreases in the Atlantic. These salinity-driven sea level changes had not been thoroughly investigated in previous long-term estimates of sea level change. When the team contrasted these results with climate model simulations, they found that the models reproduce the same basin-scale patterns, and that the magnitude of the changes is surprisingly large: about 25 percent of the corresponding temperature-driven (thermosteric) changes.
“By contrasting two long-term estimates of sea level change with simulations from a large suite of climate models, our results suggest that salinity has a profound effect on regional sea level change,” Durack said. “This conclusion suggests that future sea level change assessments must consider the regional impacts of salinity-driven changes; this effect is too large to continue to ignore.”
Other collaborators include LLNL’s Peter Gleckler, along with Susan Wijffels, an oceanographer from Australia’s Commonwealth Scientific and Industrial Research Organization (CSIRO). The study was conducted as part of the Climate Research Program at Lawrence Livermore National Laboratory through the Program for Climate Model Diagnosis and Intercomparison, which is funded by the Department of Energy’s Regional and Global Climate Modeling Program.
###
Long-term sea-level change revisited: the role of salinity
OPEN ACCESS
Paul J Durack, Susan E Wijffels and Peter J Gleckler
Abstract
Of the many processes contributing to long-term sea-level change, little attention has been paid to the large-scale contributions of salinity-driven halosteric changes. We evaluate observed and simulated estimates of long-term (1950-present) halosteric patterns and compare these to corresponding thermosteric changes. Spatially coherent halosteric patterns are visible in the historical record, and are consistent with estimates of long-term water cycle amplification. Our results suggest that long-term basin-scale halosteric changes in the Pacific and Atlantic are substantially larger than previously assumed, with observed estimates and coupled climate models suggesting magnitudes of ~25% of the corresponding thermosteric changes. In both observations and simulations, Pacific basin-scale freshening leads to a density reduction that augments coincident thermosteric expansion, whereas in the Atlantic halosteric changes partially compensate strong thermosteric expansion via a basin-scale enhanced salinity density increase. Although regional differences are apparent, at basin-scales consistency is found between the observed and simulated partitioning of halosteric and thermosteric changes, and suggests that models are simulating the processes driving observed long-term basin-scale steric changes. Further analysis demonstrates that the observed halosteric changes and their basin partitioning are consistent with CMIP5 simulations that include anthropogenic CO2 forcings (Historical), but are found to be inconsistent with simulations that exclude anthropogenic forcings (HistoricalNat).
Full PDF: http://iopscience.iop.org/1748-9326/9/11/114017/pdf/1748-9326_9_11_114017.pdf
Article: http://iopscience.iop.org/1748-9326/9/11/114017/article
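As a rough numerical illustration of the basin contrast described in the abstract (assumed coefficients and hypothetical column-mean changes, not values from the paper), the same linear equation of state shows how a freshening basin's halosteric term adds to thermosteric expansion while a salinifying basin's term subtracts from it:

```python
ALPHA = 2.0e-4  # thermal expansion coefficient (1/K) -- assumed value
BETA = 7.5e-4   # haline contraction coefficient (1/(g/kg)) -- assumed value
H = 2000.0      # depth of the column considered (m)

def steric_mm(dT, dS):
    """Steric change (mm) for column-mean dT (K) and dS (g/kg) over depth H."""
    return -(-ALPHA * dT + BETA * dS) * H * 1000.0

# Hypothetical column-mean changes chosen only to mimic the signs in the abstract
for basin, dT, dS in [("Pacific (warming + freshening)", 0.25, -0.015),
                      ("Atlantic (warming + salinification)", 0.25, +0.015)]:
    thermo, halo = steric_mm(dT, 0.0), steric_mm(0.0, dS)
    print(f"{basin}: thermosteric {thermo:+.0f} mm, "
          f"halosteric {halo:+.0f} mm, total {thermo + halo:+.0f} mm")
```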
Figure 1. Long-term trends in 0–2000 dbar total steric anomaly (left column; (A1)–(C1)), thermosteric anomaly (middle column; (A2)–(C2)) and halosteric anomaly (right column; (A3)–(C3)). Units are mm yr⁻¹. Observational maps show results from (A) Ishii and Kimoto (2009; 1950–2008), (B) Durack and Wijffels (2010; 1950–2008) and (C) the CMIP5 Historical multi-model mean (MMM; 1950–2004). Stippling marks regions where the two observational estimates do not agree in sign ((A1)–(A3), (B1)–(B3)) and where fewer than 50% of the contributing models agree in sign with the averaged (MMM) map obtained from the ensemble ((C1)–(C3)). Results presented in columns 1–3 (above) relate to equations (1)–(3) presented in section 2.
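Read literally, the model-agreement stippling rule amounts to a sign test across the ensemble. A minimal sketch with hypothetical trend maps (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical trend maps: 26 models on a small lat x lon grid (mm/yr)
trends = rng.normal(loc=0.3, scale=1.0, size=(26, 18, 36))

mmm = trends.mean(axis=0)  # multi-model mean (MMM) map

# Fraction of models whose trend sign matches the MMM sign at each grid point
agree_frac = (np.sign(trends) == np.sign(mmm)).mean(axis=0)

# Stipple (mask) where fewer than 50% of models agree in sign with the MMM
stipple = agree_frac < 0.5
print(f"stippled fraction of the grid: {stipple.mean():.0%}")
```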
Mark Idle,
He’s in astoundingly good company.
Does the paper record salinity at all depths? The ocean is like a layer cake of temperatures, salinities, and currents. In other words, it would seem to me that ocean salinity sampling is a bridge too far.
The premises and conclusions of this study should be taken with more than a grain of salt.
mpainter,
Not only is that phrase not found in the paper itself, it’s not even found in the text of Anthony’s writeup.
As my lead comment suggests, validating models against observation is simply not allowable … though it’s beginning to look like such a thing even defies comprehension.
Brandon Gates:
As per my comment above, the whole study presumed increased salinity via global warming as modeled in the GCMs. The whole concept of higher SST due to AGW is invalid, according to basic principles of radiative physics, that is, the absorption properties of water with respect to IR.
You cannot validate a model that is fundamentally invalid.
Any scientist who is informed of the science will not go any further than Tisdale and the rest of us did.
mpainter,
Your comment above, “Who needs to go any further than this?” implies that you’ve not read the whole study. That “A suite of climate models tell us….” is not found in the paper itself looks to be confirmation. If I’m not wrong, I find it curious that you’d characterize a whole study without having first read it.
Highlighting the importance of checking model output against observation. Surely we both agree that proper science demands doing so rigorously and frequently.
Brandon,
I did not read the study; it was not necessary to do so to evaluate the worth of it. This seems to escape you.
Your response shows that you missed the import of my comment.
davidmhoffer,
Not at all. I’d be happy to answer them if they were actually relevant to the topic.
By all means treat me as your humble student, eager to learn. How are my questions not relevant?
davidmhoffer,
Because they are the kinds of questions which are relevant to a discussion of model output in the form of ensembles as published, say, in IPCC ARs. It should be apparent from the excerpt I posted in my very first comment that the study is about anything BUT that.
In fairness, your first question (which models) is relevant, and is easily answered in Table 1 on p. 3 of the paper: http://iopscience.iop.org/1748-9326/9/11/114017/pdf/1748-9326_9_11_114017.pdf
And from the bottom right on p. 2: The CMIP5 models assessed in this study are a subset of the full suite, as drift correction in the deeper ocean was necessary. Consequently, 26 independent models (rather than 42 in a previous study: Durack et al 2014) were assessed and specific details on the model simulations used in this analysis are contained in table 1.
So that’s the beginnings of answering your second and third questions, which though loaded, are also in fairness somewhat relevant. I think though that someone who is truly eager to learn does not ask others to do such trivial homework for them.
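For what it's worth, the paper doesn't spell out its drift-correction recipe in that passage, but a common approach in CMIP analyses is to fit the corresponding preindustrial control run and subtract the fit from the forced run. A toy sketch with made-up numbers (my illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2005).astype(float)
t = years - years[0]

# Made-up global-mean steric series (mm): a control run containing only
# spurious model drift, and a historical run with the same drift plus signal
DRIFT = 0.2   # mm/yr of spurious drift (assumed)
SIGNAL = 0.5  # mm/yr of forced trend (assumed)
control = DRIFT * t + rng.normal(0.0, 1.0, t.size)
historical = (DRIFT + SIGNAL) * t + rng.normal(0.0, 1.0, t.size)

# Fit a straight line to the control run and subtract it from the historical
# run; what remains is (approximately) the forced signal
slope, intercept = np.polyfit(t, control, 1)
corrected = historical - (slope * t + intercept)

print(f"estimated drift:       {slope:.2f} mm/yr (true: {DRIFT} mm/yr)")
print(f"drift-corrected trend: {np.polyfit(t, corrected, 1)[0]:.2f} mm/yr "
      f"(true: {SIGNAL} mm/yr)")
```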
mpainter,
And likely always will since for the life of me I can’t imagine ever being able to evaluate the worth of something before coming to at least a basic understanding of what the thing itself is first. Kudos to you for having figured out how to do that though.
Goodness gracious Brandon, don’t you see that I do have the understanding?
Are you unaware of the deficiencies of the GCMs? One such deficiency is germane to this study: that AGW increases SST. The study has it thus:
AGW → higher SST → more evaporation → higher salinity
Here is the truth: CO2 has nothing to do with SST. The models are fundamentally wrong.
Any validation that you infer for a GCM from a data set is spurious.
mpainter,
Painfully aware. I learned a few weeks ago that some of them leak energy. That gave me the warm fuzzies I tell ya’.
If I were running things, for sure I’d want to validate GCMs from observations, not the other way ’round. I thought we agreed on that already.
Brandon Gates;
So that’s the beginnings of answering your second and third questions, which though loaded, are also in fairness somewhat relevant.
Sigh. You know very well that the questions were rhetorical, and were phrased to make a point. You’re dodging, twisting, and weaving to avoid having to deal with that point. The models are wrong, by admission of the IPCC itself, so using them as a reference to draw the conclusions in the paper is ridiculous on its face. Using 26 models is a farce. If they were in agreement with one another, there would be no value in using so many, and as they are in disagreement with one another, averaging them is a farce upon a farce. They can’t all be right, at most only one of them can be right, and the most likely scenario is that they are ALL wrong. So, results predicated upon them are, by extension, also most likely to be wrong, and at best just very inaccurate. The scorn is deserved, and your defense of the paper is inadequate. When you’ve got an argument that shows the models to be a valid method of calibration for this type of study, by all means tell us what it is. But quit blowing smoke up our keesters pretending that you’ve got valid reasons for not engaging in a meaningful fashion.
davidmhoffer,
Strange argument in this context.
Your logic is impeccable. The unavoidable conclusion is that someone really ought to step away from the computer for once, go out into the field to collect some data, and quantify how well (or not) their video games mesh with reality.
My defense of the paper is non-existent. I can see why you’d think I’m doing a horrible job of it.
Why would I want to talk to you about that when it would be so much more topical to talk about which methods of observation are best for calibrating models?
erg … fouled up a blockquote tag …
Why would I want to talk to you about that when it would be so much more topical to talk about which methods of observation are best for calibrating models?
The topic, I shall remind you, was in regard to it being reasonable to shun the paper due to its reliance on models. For the reasons discussed in this thread, it is.
davidmhoffer,
I’d remind you of the topic of the paper, but you still clearly have not read it yet. You don’t have to since I quoted the relevant bit in my very first post, but by all means do please continue to ignore all that so as not to get off point.
Brandon Gates November 22, 2014 at 6:02 pm
davidmhoffer,
I’d remind you of the topic of the paper, but you still clearly have not read it yet.
>>>>>>>>>>>>>
If I provided a paper to you which was based on the premise that 2+2 = 5, would you read it?
davidmhoffer, of course not.