Insufficient Forcing Uncertainty

It seems that, depending on who you talk to, climate sensitivity is either underestimated or overestimated. In this case, a model suggests that forcing uncertainty has been underestimated. One thing is clear: science does not yet know for certain what the true climate sensitivity to CO2 forcing is.

There is a new paper from Tanaka et al. (download the PDF here) that describes how forcing uncertainty may be underestimated. Like the story of Sisyphus, an atmospheric system with negative feedbacks will roll heat back down the hill. With positive feedbacks, it gets easier to heat up the further uphill you go. The question is, which is it?

Insufficient Forcing Uncertainty Underestimates the Risk of High Climate Sensitivity


ABSTRACT

Uncertainty in climate sensitivity is a fundamental problem for projections of the future climate. Equilibrium climate sensitivity is defined as the asymptotic response of global-mean surface air temperature to a doubling of the atmospheric CO2 concentration from the preindustrial level (≈ 280 ppm). In spite of various efforts to estimate its value, climate sensitivity is still not well constrained. Here we show that the probability of high climate sensitivity is higher than previously thought because uncertainty in historical radiative forcing has not been sufficiently considered. The greater the uncertainty that is considered for radiative forcing, the more difficult it is to rule out high climate sensitivity, although low climate sensitivity (< 2°C) remains unlikely. We call for further research on how best to represent forcing uncertainty.

CONCLUDING REMARKS

Our ACC2 inversion approach has indicated that by including more uncertainty in radiative forcing, the probability of high climate sensitivity becomes higher, although low climate sensitivity (< 2°C) remains very unlikely. Thus in order to quantify the uncertainty in high climate sensitivity, it is of paramount importance to represent forcing uncertainty correctly, neither as restrictive as in the forcing scaling approach (as in previous studies) nor as free as in the missing forcing approach. Estimating the autocorrelation structure of missing forcing is still an issue in the missing forcing approach. We qualitatively demonstrate the importance of forcing uncertainty in estimating climate sensitivity – however, the question is still open as to how to appropriately represent the forcing uncertainty.

h/t and thanks to Leif Svalgaard

AlexB
July 20, 2009 6:55 pm

Just finished reading the paper. Can I say from the outset, please consider the environment before printing those 20 pages… ooops! Basically, by freeing up the constraints on forcing in the pre-industrial era, before CO2 started changing, you allow CO2 to take a more dominant role in explaining the industrial era, when it started to increase along with temperature. By adding all that uncertainty to the pre-industrial era you are heavily weighting the model toward the industrial period. In a nutshell it is ‘correlation is causation’ dressed in fancy maths. Another distraction from actual science.

Joel Shore
July 20, 2009 7:02 pm

Bill Illis says:

Let’s look at 450 million years ago. CO2 4,500 ppm, 4 doublings – Solar forcing 5% lower than today or 4.5 watts/m^2 or 1.2C – Estimated Temp at the time +2.0C from today or just 0.8C per doubling netting out the CO2 and solar changes.
Let’s look at 35 million years ago. CO2 1,400 ppm, 2.5 doublings – Estimated temp at the time +2.0C from today or 0.8C per doubling again.

Bill, this is ludicrous. You can’t just do a simple comparison with today when you are talking about geological times like that. There are all sorts of factors that vary over those timescales, including the location of the continents and mountain ranges and the output of the sun!! Perhaps you should study the actual serious science that has been done on paleoclimate.

Joel Shore
July 20, 2009 7:11 pm

Sorry, I realized after posting that Chris V. had already made the point I made in my last post.
Bill Illis says:

Those numbers indicate that CO2 was responsible for 1.9C of the 5.0C change in temperatures during the ice ages.
So the majority of the temperature change, 3.1C, is due to other factors.
How do we know that the other factors are not in fact responsible for 4.0C and CO2 was only responsible for 1.0C (or the same 1.5C per doubling).

We know that because we have estimates for the radiative forcing of each component. Don’t think of the LGM as showing us directly that doubling CO2 causes ~3 C rise. The more fundamental point is that it shows us that the climate sensitivity is ~0.75 C per (W/m^2) of forcing. Then, the known value of the forcing due to doubling CO2 implies this translates into 3 C per doubling.
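For the record, the arithmetic in that last step is simple to check; a minimal sketch (the ~3.7 W/m^2 forcing for a CO2 doubling is the commonly cited value, not a number stated in the comment):

```python
# Converting a sensitivity per unit forcing into a sensitivity per CO2 doubling.
sensitivity_per_forcing = 0.75  # deg C per (W/m^2), the value quoted above
forcing_per_doubling = 3.7      # W/m^2 per CO2 doubling (commonly cited value, assumed here)

sensitivity_per_doubling = sensitivity_per_forcing * forcing_per_doubling
print(f"{sensitivity_per_doubling:.1f} C per doubling")  # ~2.8 C, i.e. roughly the 3 C quoted
```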
M. Simon says:

Quite right. There is a lot of confusion between amplification and positive feedback. Climate “science” is full of it. (and you can take that in several ways and for most of those currently involved true).

There is no confusion except to the extent that climate science apparently uses the term “feedback” in a somewhat different sense than some engineering disciplines do. This is a not uncommon occurrence in science (that different fields use terminology somewhat differently). The confusion arises only when these engineers think that they can use their understanding from how the term is used in their field.

Philip_B
July 20, 2009 7:29 pm

JFD, I just checked your calculation and got 2.2 mm per annum, but you probably used a different value for the oceans.
If your groundwater estimate is correct then you have just accounted for 2/3rds of the ocean rise trend.
It makes me wonder what effect dam filling has on the sea level rise. Several large dams are being filled at present such as the 3 Gorges. Maybe that accounts for the recent levelling off.
I always found the sea level rise the most persuasive argument for a warming climate. Now you have provided an alternative explanation.
BTW, I think the residence time and GH effect of water vapour from irrigation is seriously underestimated, because they use values from the humid tropics, whereas irrigation is in arid areas. So it is much less likely to form clouds and then rain.
Where I live in Western Australia, we get days when the humidity is high(er) near the ground, but not higher up in the atmosphere, so no clouds form. These days are 5C to 10C hotter than low(er) humidity days. The effect of humidity near the ground is very large.

Joel Shore
July 20, 2009 7:30 pm

Without having yet read the Tanaka et al. paper in any detail, I think the reason why their approach constrains the lower bound on the climate sensitivity better than the upper bound is as follows. The climate sensitivity (or really a transient climate response) is estimated from the modern temperature record as
(delta T) / (F_GHG + F_aerosols_etc)
where (delta T) is the temperature change, F_GHG is the radiative forcing due to greenhouse gases (which is known quite accurately…+-10%) and F_aerosols_etc is the forcing due to aerosols and other things (which is not known very accurately). F_aerosols_etc is believed to be negative (or at worst very weakly positive), so this gives a pretty good lower bound on the sensitivity; however, it could be so negative that it essentially cancels out the forcing due to GHGs so that the net forcing is very small, which would then imply that the climate sensitivity is very large.
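A minimal numerical sketch of that asymmetry (the forcing and temperature numbers below are purely illustrative, not taken from the comment): as the assumed aerosol forcing becomes more strongly negative, the net forcing shrinks and the implied sensitivity grows without bound, while the low end stays pinned.

```python
# Sensitivity implied by the modern record: S = dT / (F_ghg + F_aerosols_etc).
# Illustrative numbers only; the point is the asymmetry, not the values.
dT = 0.8      # deg C of observed warming (illustrative)
F_ghg = 2.6   # W/m^2 of greenhouse-gas forcing (illustrative, known to ~10%)

for F_aer in [0.0, -0.5, -1.0, -1.5, -2.0]:  # candidate aerosol forcings, W/m^2
    S = dT / (F_ghg + F_aer)
    print(f"F_aer = {F_aer:+.1f} W/m^2  ->  implied S = {S:.2f} C per (W/m^2)")
# The lower bound barely moves, but a strongly negative F_aer sends S toward infinity.
```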
However, I think scientists like James Annan, who believe that the climate sensitivity is close to 3 C and that we can rule out very much higher climate sensitivities (and very much lower ones), would argue that the historical temperature record is not all we have…and in fact is not the best constraint we have on climate sensitivity. He would say that combining other constraints from paleoclimate (like the Last Glacial Maximum) and the climate response to the Mt. Pinatubo eruption allows one to put significantly tighter constraints on the climate sensitivity…and, in particular, rule out values much larger than 3 C.

H.R.
July 20, 2009 7:44 pm

Joel Shore(19:11:05)
“There is no confusion except to the extent that climate science apparently uses the term “feedback” in a somewhat different sense than some engineering disciplines do.”
Then what’s all the fuss about a tipping point if it’s just amplification and not feedback?
The only evidence of climate tipping points and feedback I see, given the locations of the continents for the past couple of million years, is that the earth has a nasty tendency to plunge into 100,000-year periods of glaciation and jump back out of them for a stretch of 10k or so years.

David
July 20, 2009 7:46 pm

Joel Shore (19:11:05) :
“The more fundamental point is that it shows us that the climate sensitivity is ~0.75 C per (W/m^2) of forcing.”
OK, but how is that shown?

Bill Illis
July 20, 2009 9:11 pm

The 0.75C per watt/m^2 is a mythical guess based on the guess-estimated long-term equilibrium CO2 sensitivity. Nobody knows what this number should be.
All the climate models actually use a figure of about 0.3C per watt/m^2 (and 0.15C for volcanoes) rather than the 0.75C commonly quoted.
I meant to post on this too in terms of the uncertainty. There are actually two (or three) components to the uncertainty.
The “forcing” of GHGs and other drivers like aerosols (quoted in Watts/m^2) and then the Temp C response per Watt/m^2 change (and the lag effect of how long this takes before the full temp response occurs).
Temp C = Forcing (watts/m^2) * TempC response/watt/m^2 * Time
The estimates for the Temp response per Watt/m^2 range from 0.1C to 1.0C (a factor of 10), and no one can really figure out the length of time for the lag element yet (it used to be several years, more recently 30 years, and now Hansen is pushing 100 to 1500 years for the lag).
Generally, the climate models over-estimate both the forcing and the TempC response and underestimate the lag effect going by the ocean heat content data which indicates there is really little lag at all.
All of this (including the Paleoclimate data) points to a lower CO2 sensitivity figure.

F. Ross
July 20, 2009 9:28 pm

Peer reviewed no doubt!
/snide off

tallbloke
July 20, 2009 11:28 pm

Philip_B (19:29:00) :
Several large dams are being filled at present such as the 3 Gorges. Maybe that accounts for the recent levelling off.

The Three Gorges dam impounded an extra 30 cubic kilometers up to 2003.
The volume increase due to thermal expansion alone was about 5,400 km^3 over 1993-2003.
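For scale, here is a rough conversion of those volumes to their equivalent global sea-level change (a sketch using the ~335 million km^2 ocean surface area cited by JFD later in this thread):

```python
# Rough conversion of a water volume (km^3) to its sea-level equivalent (mm),
# spread uniformly over the ocean surface.
OCEAN_AREA_KM2 = 335_258_000  # km^2, figure quoted by JFD later in the thread

def sea_level_equiv_mm(volume_km3):
    return volume_km3 / OCEAN_AREA_KM2 * 1e6  # km -> mm

print(f"Three Gorges, ~30 km^3:       {sea_level_equiv_mm(30):.2f} mm")    # ~0.09 mm
print(f"Thermal expansion, 5400 km^3: {sea_level_equiv_mm(5400):.1f} mm")  # ~16 mm (1993-2003)
```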

Darell C. Phillips
July 21, 2009 12:40 am

David (11:50:20) :
Well, I did warn you…

Sandy
July 21, 2009 1:29 am

Can an estimate be made of the ‘wind chill’ factor of the trade winds? Following Willis’s idea, it seems that the main cooling effect of tropical thunderstorms is the evaporation of the sun-warmed ocean into the trade winds which feed the monster cu-nims, and continue well into the night.
Surely a 40 knot wind over sun-warmed ocean will strip away a significant proportion of the incident energy before it has a chance to raise the water temp?
Since the majority of the evaporated water will fall as rain from the cu-nim there may be a detectable salinity difference between the cu-nim zone and 20 deg North and South?
Could this give us a macro-estimate of W/m^2 of the ocean cooling effect?

Allan M
July 21, 2009 2:23 am

Joel Shore
“There is no confusion except to the extent that climate science apparently uses the term “feedback” in a somewhat different sense than some engineering disciplines do. This is a not uncommon occurrence in science (that different fields use terminology somewhat differently). The confusion arises only when these engineers think that they can use their understanding from how the term is used in their field.”
The misunderstanding comes from the fact that engineers and their colleagues have to work with the real world. Some of us do not qualify as climate pseudo-scientists because we lack the correct jewellery. We have to rely on a long working knowledge of amplification and feedback, not on GIGO computer models with the sort of dubious maths of the likes of M. Mann & co.

Geoff Sherrington
July 21, 2009 3:00 am

Sisypuss. As revealed at the source blog for the image,
http://pictureisunrelated.com/2009/03/03/this-cat-defies-logic/#comments
Here, we call greenies watermelons because they are green on the outside and red underneath. Like you NH folk we also have other names for wet cats.
Sisyphus was a Greek mythical man doomed forever to roll a large stone up a hill, but he never made it; it always rolled down and he had to start again. Bad dude, insulted the Gods and thought he was better than Zeus. The symbolism in the picture is that one cannot go lower than the base of the hill, but one can, with various degrees of ingenuity, go further up the hill, i.e. greater sensitivity is possible at upper levels but one cannot go lower. To further the simile, Sisyphus was doomed to roll forever, the analogy plausibly being that one cannot cherry-pick dates. Like these authors did.

Highlander
July 21, 2009 3:07 am

The question was:
—————-
Sandy (01:29:49) :
Can an estimate be made of the ‘wind chill’ factor of the trade winds? Following Willis’s idea, it seems that the main cooling effect of tropical thunderstorms is the evaporation of the sun-warmed ocean into the trade winds which feed the monster cu-nims, and continue well into the night.
—————-
I do not believe that one could rightly refer to the ‘wind chill factor’ for the simple reason that it is nought but a reference to human physiology.
.
In the main, it refers to the amount of perceived cooling as a result of a wind blowing upon exposed flesh, in direct relation to the cooling caused by insensible perspiration.
.
From the American Heritage English Dictionary:
——-
wind-chill factor
n.
The temperature of windless air that would have the same effect on exposed human skin as a given combination of wind speed and air temperature.
——-

Steve Keohane
July 21, 2009 6:55 am

Thirty years of inexorable GHG increase, UAH anomaly for the past two years is .13°C, a .43°C/century trend. No need to fantasize about increasing the sensitivity in the climate video games.

Robert Austin
July 21, 2009 7:03 am

George E. Smith (13:08:22) :
“”” Robert Austin (11:23:07) :
David (08:56:20) :
The role of water in earths climate is certainly the most central and least understood factor. Water exists on earth in three phases, it has a complex absorption spectrum, it stores and releases vast quantities of heat energy and it has great effects on albedo in the form of clouds , snow and ice. Without the assumption of a positive feedback from water, the vaunted computer models show no frightening scenarios. CO2 is a simple gas and an climatological open book by comparison. Conclusion: nobody is close to pinning down the role of water in climate regulation. “””
“Least Understood factor ? ” By whom ?
I would say the water factor is well understood; so I suggest you reduce the number of people you choose to include in your nobodys who aren’t close to pinning the role of water; well you need to reduce it by at least one. And I’m not responsible for anybody else’s lack of understanding.
George
________________________________________________________________
George, thanks for pointing out my gross ignorance in stating that the role of water in climate regulation is poorly understood. I did not realize that great strides have been made in this area of climate science and that at least one person (you) has this understanding. I will follow your future posts assiduously so that I may learn from the master.

Joel Shore
July 21, 2009 7:38 am

David says:

“The more fundamental point is that it shows us that the climate sensitivity is ~0.75 C per (W/m^2) of forcing.”
OK, but how is that shown?

The estimate obtained from the last glacial maximum was outlined by Hansen in this Scientific American piece: http://www.met.tamu.edu/class/old_atmo629/hansen.pdf (see Fig. 3).
Bill Illis says:

The 0.75C per watt/m^2 is a mythical guess based on the guess-estimated long-term equilibrium CO2 sensitivity. Nobody knows what this number should be.

It is not a “mythical guess” and it is not just based on response to CO2. It is an estimate based on the climate response to different events in the past.

All the climate models actually use a figure of about 0.3C per watt/m^2 (and 0.15C for volcanoes) rather than the 0.75C commonly quoted.

You are confusing the sensitivity before feedbacks are included (which is about 0.3 C per W/m^2) and the one after feedbacks (which is estimated to be about 0.75 C per W/m^2). I am not sure where your volcanoes number comes from.
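Those two figures can be related through one standard way of writing a linear feedback, lambda = lambda0 / (1 - f); a minimal sketch (the feedback fraction f is solved for here, it is not a number given in the comment):

```python
# No-feedback vs with-feedback sensitivity, using lambda = lambda0 / (1 - f).
lambda_0 = 0.3    # C per (W/m^2) before feedbacks (from the comment)
lambda_fb = 0.75  # C per (W/m^2) after feedbacks (from the comment)

f = 1 - lambda_0 / lambda_fb                              # implied feedback fraction
print(f"implied feedback fraction f = {f:.2f}")           # 0.60
print(f"check: {lambda_0 / (1 - f):.2f} C per (W/m^2)")   # recovers 0.75
```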

Temp C = Forcing (watts/m^2) * TempC response/watt/m^2 * Time

This equation makes no sense as even the units don’t work out. What you call “Time” should probably be a dimensionless function that ranges from 0 in the limit of very short time to 1 in the limit of long time. Note that for comparing the Last Glacial Maximum to now, the time lag issue isn’t really relevant because the timescales are long enough. The time lag issue is more relevant when estimating the sensitivity from the instrumental temperature record but this doesn’t give us very strong constraints anyway, mainly because of the uncertainty in the aerosol forcing.
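One common way to write such a dimensionless lag function is an exponential relaxation toward equilibrium; a sketch under that assumption (the response time tau below is illustrative, not a value from the comment):

```python
import math

# dT(t) = S * F * (1 - exp(-t/tau)); the bracketed factor is the dimensionless
# function described above, running from 0 at short times to 1 at long times.
S = 0.75     # C per (W/m^2), the equilibrium sensitivity discussed above
F = 3.7      # W/m^2, forcing from a CO2 doubling (assumed value)
tau = 30.0   # years, illustrative response time only

for t in [1, 10, 30, 100, 1000]:
    frac = 1 - math.exp(-t / tau)
    print(f"t = {t:4d} yr: {frac:4.0%} of equilibrium, dT = {S * F * frac:.2f} C")
```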

Joel Shore
July 21, 2009 7:44 am

Steve Keohane says:

Thirty years of inexorable GHG increase, UAH anomaly for the past two years is .13°C, a .43°C/century trend.

There are several errors here:
(1) The base-point for the anomaly isn’t 1979. I think it is the mean over 1979 to 1998 or something like that.
(2) When you look at the past 2 years, you are cherry-picking a particularly cold period because of the La Nina.
(3) Furthermore, the La Nina / El Nino oscillations tend to show up more dramatically in the satellite temperatures than the surface record further exaggerating the effect of this cherry-pick. (By choosing the UAH analysis, you are also cherry-picking the data set with the lowest trend for the satellite record.)

July 21, 2009 10:06 am

M. Simon says:
Quite right. There is a lot of confusion between amplification and positive feedback. Climate “science” is full of it. (and you can take that in several ways and for most of those currently involved true).
Joel Shore replied:
There is no confusion except to the extent that climate science apparently uses the term “feedback” in a somewhat different sense than some engineering disciplines do. This is a not uncommon occurrence in science (that different fields use terminology somewhat differently). The confusion arises only when these engineers think that they can use their understanding from how the term is used in their field.
CO2 cause the atmosphere to get warmer – this causes a higher concentration of water vapor in the atmosphere – which causes the atmosphere to get warmer – which causes more evolution of CO2 from the oceans – this causes a higher concentration of water vapor in the atmosphere – which causes the atmosphere to get warmer – this causes a higher concentration of water vapor in the atmosphere – which causes the atmosphere to get warmer – this causes a higher concentration of water vapor in the atmosphere – which causes the atmosphere to get warmer – which causes more evolution of CO2 from the oceans – this causes a higher concentration of water vapor in the atmosphere –
Since WV is more effective than CO2 it looks like the atmosphere will saturate with WV from all that positive feedback.
OTOH if WV (due to clouds) is a negative feedback (with lags) the whole series settles nicely – after a lag.
But perhaps you would explain to me the difference between electrical positive and negative feedback and the use of the same terms in climate.
BTW if the climate model folks are not using electrical analogs for their models they are way behind the times.
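For what it's worth, the convergence point behind the "amplification vs runaway" argument can be illustrated with a toy geometric series: a positive feedback whose per-pass gain is below one amplifies the initial push but settles, while a gain of one or more never does. This is a sketch of that arithmetic only, not a claim about what the climate actually does:

```python
# Toy positive feedback: each pass adds gain * (previous increment).
# Total response is the geometric series 1 + g + g^2 + ... = 1/(1-g) for |g| < 1.
def total_amplification(g, passes=10_000):
    total, increment = 0.0, 1.0
    for _ in range(passes):
        total += increment
        increment *= g
    return total

for g in [0.3, 0.6, 0.9]:
    print(f"gain {g}: settles at ~{total_amplification(g):.2f} "
          f"(closed form {1 / (1 - g):.2f})")
# With g >= 1 the increments never shrink and the sum grows without bound: the runaway case.
```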

David
July 21, 2009 10:09 am

Why would climate sensitivity remain static?

July 21, 2009 10:16 am

Joel Shore(19:11:05),
Let me put it to you direct. Explain the difference between positive and negative feedback in climate science and electronic design.

George E. Smith
July 21, 2009 10:17 am

“”” Robert Austin (07:03:32) :
George E. Smith (13:08:22) :
“”” Robert Austin (11:23:07) :
David (08:56:20) :
The role of water in earths climate is certainly the most central and least understood factor. Water exists on earth in three phases, it has a complex absorption spectrum, it stores and releases vast quantities of heat energy and it has great effects on albedo in the form of clouds , snow and ice. Without the assumption of a positive feedback from water, the vaunted computer models show no frightening scenarios. CO2 is a simple gas and an climatological open book by comparison. Conclusion: nobody is close to pinning down the role of water in climate regulation. “””
“Least Understood factor ? ” By whom ?
I would say the water factor is well understood; so I suggest you reduce the number of people you choose to include in your nobodys who aren’t close to pinning the role of water; well you need to reduce it by at least one. And I’m not responsible for anybody else’s lack of understanding.
George
________________________________________________________________
George, thanks for pointing out my gross ignorance in stating that the role of water in climate regulation is poorly understood. I did not realize that great strides have been made in this area of climate science and that at least one person (you) has this understanding. I will follow your future posts assiduously so that I may learn from the master. """
I don’t recall saying anything about any ignorance on your part. I agree with your assertion that a lot of people don’t understand the water connection; that’s the whole problem. But I believe if you read “How Much more Rain will Global Warming Bring ?” by Wentz et al, in Science for July 7 2007, you will find the clues to all that anyone needs to know about the water linkage.
And the paper makes it clear that at least the modellers don’t understand it, since their models are off by a factor of as much as seven. Couple that with John Christy’s paper in the January 2001 Geophysical Research Letters, showing that oceanic water and air temperatures are not correlated, and therefore the true lower troposphere history cannot be recovered from the meaningless ocean water measurements, where it was blindly assumed that the air and water temperatures were the same. Then it becomes clear that before the Argo buoys and the polar orbit satellites, much of the global “climate data” was worthless junk; so our real knowledge of earth climate doesn’t go back much before about 1980.
And following me would not be the right thing to do; after all I have to depend on others to gather and present the data. Unfortunately, I spend most of the time trying to find out under what sort of controlled conditions that data was gathered.
As you point out, water exists in the atmosphere in three phases; you’ll probably find at least 12 locations in Anthony’s archives where I have stressed that simple fact and pointed out that in the vapor form, water produces positive feedback warming, to keep us from being a frozen ball; but in the other two phases of liquid and solid, where water forms clouds, water generates negative feedback cooling. And the Wentz paper shows the extent of that effect, although they didn’t specifically cite that; but it is inescapable if you believe their data.
I am sure that there is a lot of fill in detail to obtain; but you don’t have to resort to Quantum Chromodynamics to see how water controls the earth’s mean surface temperatures.
But those who are dead set in convincing people that carbon is the devil incarnate; since it leads to their political agenda, will never see the water role.
George

July 21, 2009 11:02 am

(3) Furthermore, the La Nina / El Nino oscillations tend to show up more dramatically in the satellite temperatures than the surface record further exaggerating the effect of this cherry-pick. (By choosing the UAH analysis, you are also cherry-picking the data set with the lowest trend for the satellite record.)
But the surface record is not spatially or chronologically (min max) well distributed.
I’m told models use 15 minute time chunks. Fine. The temp records only give you min-max without an actual time when those temps were recorded closer than 24 hours. Now if you are going to initialize your model properly with the temp record you need simultaneous records (i.e. all the starting temps should be at say 0000 GMT)
Otherwise Lorenz chaos diverges your model from reality.
==
http://powerandcontrol.blogspot.com/2009/01/strange-attractors.html
Complex deterministic systems suffer not only from sensitive dependence on initial conditions but also from possible sensitive dependence on the differences between Nature and the models employed in representing it. The apparent linear response of the current generation of climate models to radiative forcing is likely caused by inadvertent shortcomings in the parameterization schemes employed. Karl Popper wrote (see my essay on his views):
“The method of science depends on our attempts to describe the world with simple models. Theories that are complex may become untestable, even if they happen to be true. Science may be described as the art of systematic oversimplification, the art of discerning what we may with advantage omit.”
If Popper had known of the predictability problems caused by the Lorenz paradigm, he could easily have expanded on this statement. He might have added that simple models are unlikely to represent adequately the nonlinear details of the response of the system, and are therefore unlikely to show a realistic response to threshold crossings hidden in its microstructure. Popper knew, of course, that complex models (such as General Circulation Models) face another dilemma.
I quote him again: “The question arises: how good does the model have to be in order to allow us to calculate the approximation required by accountability? (…) The complexity of the system can be assessed only if an approximate model is at hand.”
From this perspective, those that advocate the idea that the response of the real climate to radiative forcing is adequately represented in climate models have an obligation to prove that they have not overlooked a single nonlinear, possibly chaotic feedback mechanism that Nature itself employs.

Clouds are non-linear and the modelers admit they don’t do those well. And how do you handle an upwelling cell 1 km on a side in a model with 100 km grid squares? Well, you parameterize them. Are the parameters correct to sufficient accuracy? We do know that the models did not predict the cooling from 2000 until 2020. Why 2020? That is the latest IPCC estimate. So some parameter in the models is wrong. Maybe many.
So at least for now the models can’t be trusted because their output has not matched the data – i.e. the CO2 sensitivity is too high, because some forcing is now overriding CO2. Or, to put it another way, some other forcing (internal variability?) has been aliased as CO2 forcing.

JFD
July 21, 2009 11:58 am

Philip_B (19:29:00) :
Here is the complete background and calculations that I put in my comments to EPA’s carbon dioxide endangerment finding;
There are two basic ways to add new water to the atmosphere:
1. Production of ground water from slow to recharge aquifers. This water is not in equilibrium with the atmosphere so it will take one evaporation-condensation cycle, about 10 days, to come into near equilibrium with the hydrological cycle. The amount of new water added will be the production volume minus the recharge volume (which will be very small). The added new water will slightly increase the depth and surface area of the oceans each day. Due to the impact of tides, winds and gravity the increase in sea level will not be observable on a daily basis, but over time the average sea level will rise measurably. Let’s calculate the increase in ocean levels due to new groundwater added to the atmosphere.
a. Global ground water production estimates from technical literature sources range from 670 up to 830 cubic kilometers per year in 1995; for one example go to: http://www.iwmi.cgiar.org/EWMA/files/papers/Energyprice_GW.pdf
b. Available data indicates that ground water production basically follows the increase in world population times a factor about 1.20 to account for a higher rate of increase in industrial use. At some point the cost of energy and consequence loss of productivity will limit the higher growth in industrial use. Use of a groundwater production growth rate of 2.5% per year to take into account population and industrial growth since 1995, results in a current groundwater production of 1090 cubic kilometers per year based on the average of the range. I will use 900 cubic kilometers per year of groundwater production, so as to not overstate the conclusion.
c. Technical literature aquifer recharge rate estimates range from 5 to 11% of groundwater production. As noted above, recharge rates are highly variable depending on things such as geologic history, rainfall, soils and average temperature. I will use 8% as the average recharge rate.
d. Groundwater is used 9% for domestic, 21% industry and 70% agriculture, see, “Ecosystems and human well being: current state & trends” by Millennium Ecosystem Assessment.
e. The surface area of the world’s oceans is 335,258,000 square kilometers. Using one year as the measuring time, the 900 cubic kilometers per year of new water would increase the level of the oceans on average by: 900 x 10 to the 9th power m3/year divided by 335.258 x 10 to the 12th power m2 = 0.0026845 m/year rise on a 100% return basis (see below for recharge discount). Converting to mm/year = 1000 mm/m x 0.0026845 m/year = 2.6845 mm per year rise in average ocean level on a 100% return basis.
f. The calculated value of 2.7 mm per year for ocean rise on a 100% return basis must be discounted as follows: 70% of the condensed new water falls into the oceans as rainfall after having completed one evaporation-condensation cycle; 8% of land rainfall results in recharge of aquifers. Therefore the percent of precipitated groundwater that winds up in the oceans = 70% + (30% – 8%) = 92%. The actual rise in ocean level per year from groundwater production is then 0.92 x 2.6845 = 2.47 mm per year.
g. Finally, we can compare the 2.5 mm per year ocean level rise from new water added from groundwater production using various literature sources relative to ocean level rise: Sea level rise from January 1870 was 1 to 2 mm per year; Sea level rise from 1900 to 2000 was 1.7 mm plus or minus .3 mm per year; Sea level rise since 1993 was 3.1 plus or minus .7 mm per year.
2. New water is also chemically produced during the burning of fossil fuels using the hydrogen from the fuel and oxygen from the combustion air. My calculated estimate is that this new water is about 7% of the volume of the produced groundwater and would also result in sea level rise. Thus, the two basic ways of introducing new water into the atmosphere would bring the anthropogenic caused total sea level rise to 1.07 x 2.47 mm per year = 2.6 mm per year. This amount agrees completely with the observed ocean level rise.
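The arithmetic above can be checked in a few lines; a sketch that simply reproduces the inputs as stated in the comment (it does not independently verify them):

```python
# Reproducing JFD's groundwater -> sea-level arithmetic.
OCEAN_AREA_M2 = 335.258e6 * 1e6  # 335,258,000 km^2 expressed in m^2
groundwater_km3_per_yr = 900     # conservative production figure from the comment

rise_mm = groundwater_km3_per_yr * 1e9 / OCEAN_AREA_M2 * 1000  # ~2.68 mm/yr, 100% return
return_fraction = 0.70 + (0.30 - 0.08)                          # 92% reaches the ocean
net_mm = rise_mm * return_fraction                              # ~2.47 mm/yr
with_combustion = net_mm * 1.07                                 # +7% water from burning fossil fuels

print(f"100% return: {rise_mm:.2f} mm/yr; net: {net_mm:.2f} mm/yr; "
      f"with combustion water: {with_combustion:.2f} mm/yr")
```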
————-
I think produced groundwater from slow-to-recharge aquifers is a very important point in the global warming discussions. The input data can be verified and the calculation is straightforward. I drew several other definitive conclusions for EPA and will try to add them on topic as the opportunities arise.
JFD