Richard Betts heads the Climate Impacts area of the UK Met Office. The first bullet point on his webpage under areas of expertise describes his work as a climate modeler. He was one of the lead authors of the IPCC’s 5th Assessment Report (WG2). On a recent thread at Andrew Montford’s BishopHill blog, Dr. Betts left a remarkable comment that downplayed the importance of climate models.
Dr. Betts originally left the Aug 22, 2014 at 5:38 PM comment on the It’s the Atlantic wot dunnit thread. Andrew found the comment so noteworthy he wrote a post about it. See the BishopHill post GCMs and public policy. In response to Andrew’s statement, “Once again this brings us back to the thorny question of whether a GCM is a suitable tool to inform public policy,” Richard Betts wrote:
Bish, as always I am slightly bemused over why you think GCMs are so central to climate policy.
Everyone* agrees that the greenhouse effect is real, and that CO2 is a greenhouse gas. Everyone* agrees that CO2 rise is anthropogenic. Everyone** agrees that we can’t predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don’t know. The old-style energy balance models got us this far. We can’t be certain of large changes in future, but can’t rule them out either. So climate mitigation policy is a political judgement based on what policymakers think carries the greater risk in the future – decarbonising or not decarbonising.
A primary aim of developing GCMs these days is to improve forecasts of regional climate on nearer-term timescales (seasons, year and a couple of decades) in order to inform contingency planning and adaptation (and also simply to increase understanding of the climate system by seeing how well forecasts based on current understanding stack up against observations, and then further refining the models). Clearly, contingency planning and adaptation need to be done in the face of large uncertainty.
*OK so not quite everyone, but everyone who has thought about it to any reasonable extent
**Apart from a few who think that observations of a decade or three of small forcing can be extrapolated to indicate the response to long-term larger forcing with confidence
As noted earlier, it appears extremely odd that a climate modeler is downplaying the role of—the need for—his products.
“…WE CAN’T PREDICT LONG-TERM RESPONSE OF THE CLIMATE TO ONGOING CO2 RISE WITH GREAT ACCURACY”
Unfortunately, policy decisions by politicians around the globe have been and are being based on the predictions of assumed future catastrophes generated within the number-crunched worlds of climate models. Without those climate models, there are no foundations for policy decisions.
“…CLIMATE MITIGATION POLICY IS A POLITICAL JUDGEMENT BASED ON WHAT POLICYMAKERS THINK CARRIES THE GREATER RISK IN THE FUTURE – DECARBONISING OR NOT DECARBONISING”
But policymakers—and more importantly the public who elect the policymakers—have not been made truly aware that there is great uncertainty in the computer-created assumptions of future risk. Remarkably, we now find a lead author of the IPCC stating (my boldface):
… we can’t predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don’t know.
I don’t recall seeing the simple statement “We don’t know” anywhere in any IPCC report. Should “we don’t know” become the new theme of climate science, their mantra?
“THE OLD-STYLE ENERGY BALANCE MODELS GOT US THIS FAR”
Yet the latest and greatest climate models used by the IPCC for their 5th Assessment Report show no skill at simulating past climate…even during the recent warming period since the mid-1970s. So the policymakers—and, more importantly, the public—have been misled or misinformed about the capabilities of climate models.
For much of the year 2013, we presented those model failings in dozens of blog posts, including as examples:
- Will their Failure to Properly Simulate Multidecadal Variations In Surface Temperatures Be the Downfall of the IPCC?
- Models Fail: Land versus Sea Surface Warming Rates
- Polar Amplification: Observations versus IPCC Climate Models
- Model-Data Comparison: Hemispheric Sea Ice Area
- Model-Data Precipitation Comparison: CMIP5 (IPCC AR5) Model Simulations versus Satellite-Era Observations
- Model-Data Comparison with Trend Maps: CMIP5 (IPCC AR5) Models vs New GISS Land-Ocean Temperature Index
In other words, the climate models presented in the IPCC’s 5th Assessment Report cannot simulate what many people would consider the basics: surface temperatures, sea ice area and precipitation.
Shameless Plug: These and other model failings were presented in my ebook Climate Models Fail.
“APART FROM A FEW WHO THINK THAT OBSERVATIONS OF A DECADE OR THREE OF SMALL FORCING CAN BE EXTRAPOLATED TO INDICATE THE RESPONSE TO LONG-TERM LARGER FORCING WITH CONFIDENCE”
A few? In effect, that’s all the climate models used by the IPCC do with respect to surface temperatures. Figure 1 shows the annual GISS Land-Ocean Temperature Index data and its linear trend (warming rate) for the Northern Hemisphere from 1975 to 2000, a period to which climate models are tuned. The linear trend of the data has also been extrapolated out to 2100. Also shown in the graph is the multi-model ensemble-member mean (the average of all of the individual climate model runs) of the simulated Northern Hemisphere surface temperature anomalies for the climate models stored in the CMIP5 archive, which was used by the IPCC for their 5th Assessment Report.
Figure 1
The model simulations of 21st Century surface temperature anomalies and their trends have been broken down into thirds to show that, despite the constantly increasing forcings, there is little increase in the expected warming rate through the first two-thirds of the 21st Century. In other words, the models simply follow the extrapolated data trend through about 2066 in response to the increased forcings. See Figure 2 for the forcings.
Figure 2
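For readers who want to reproduce the gist of this comparison, here is a minimal sketch (my own illustration, not the workflow used to build Figures 1 and 2). It fits a linear trend to the 1975-2000 tuning period, extrapolates it to 2100, and computes the model-mean trend over thirds of the 21st century. The arrays are placeholders; the real series would be the GISS LOTI data and the CMIP5 multi-model mean from the KNMI Climate Explorer.

```python
# A minimal sketch (my own illustration, not the author's actual workflow):
# fit a linear trend to the 1975-2000 tuning period, extrapolate it to 2100,
# and compare with a model-mean series broken into thirds of the 21st century.
import numpy as np

rng = np.random.default_rng(0)
years_obs = np.arange(1975, 2001)                      # model-tuning period
nh_anom = rng.normal(0.02 * (years_obs - 1975), 0.1)   # placeholder NH anomalies (deg C)

# Least-squares linear trend over the tuning period, then extrapolate to 2100
slope, intercept = np.polyfit(years_obs, nh_anom, 1)
print(f"Placeholder 1975-2000 trend: {slope * 10:.3f} deg C/decade")
years_proj = np.arange(2001, 2101)
extrapolated = slope * years_proj + intercept

# Placeholder multi-model mean (would be the CMIP5 ensemble mean in Figure 1)
model_mean = 0.025 * (years_proj - 1975)

# Trend of the model mean over thirds of the 21st century, as in Figure 1
for start, end in [(2001, 2033), (2034, 2066), (2067, 2100)]:
    m = (years_proj >= start) & (years_proj <= end)
    seg_slope = np.polyfit(years_proj[m], model_mean[m], 1)[0]
    print(f"Model-mean trend {start}-{end}: {seg_slope * 10:.3f} deg C/decade")
```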
So, Dr. Betts’s “a few” appears, in reality, to be the consensus of the climate science community…the central tendency of mainstream thinking about climate dynamics…the groupthink.
And the problem with the groupthink was that the climate science community tuned their models to a naturally occurring upswing in surface temperatures. See Figure 3.
Figure 3
Should the modelers have anticipated another cycle or two when making their pre-programmed prognostications of the future? Of course they should have. The models are out of phase with reality.
But why didn’t they tune their models to the long-term trend? If they had, there would be nothing alarming about a 0.07 deg C warming rate in Northern Hemisphere surface temperatures. Nothing alarming at all.
A NOTE
You may be wondering why I focused on Northern Hemisphere surface temperatures. It is well known that climate models cannot simulate the warming that took place in the Southern Hemisphere during the recent warming period. See Figure 4. The models show almost double the warming that actually took place there since 1975.
Figure 4
CLOSING
Dr. Betts noted:
A primary aim of developing GCMs these days is to improve forecasts of regional climate on nearer-term timescales (seasons, year and a couple of decades) in order to inform contingency planning and adaptation (and also simply to increase understanding of the climate system by seeing how well forecasts based on current understanding stack up against observations, and then further refining the models).
In order for the climate science community to create forecasts of regional climate on decadal timescales, the models will first have to be able to simulate coupled ocean-atmosphere processes. Unfortunately, with their politically driven focus on CO2, they are no closer now to being able to simulate those processes than they were two decades ago.
SOURCE
The GISS LOTI data and the climate model outputs are available through the KNMI Climate Explorer.
Could it also be that we are experiencing the 700-1000 year overturning circulation that has brought up CO2 from the Medieval Warming Period (there was a LOT of vegetation back then) that sank to the bottom and slowly meandered along the bottom pathway way back then until it was brought to the surface once again? It seems like such an incredibly steady outgassing that is usually not seen in noisy nature or predicted on the up and down industrialization of the world. Things got pretty cold after the MWP and I would imagine we had a lot of CO2 enriched surface water sinking to the abyssal “snail’s pace” bottom current. The timing is about right for that stuff to reappear.
I’ve been saying that for years.
It sounds right.
But I can’t prove it.
That is a plausible mechanism which fits all the data.
I especially liked this paragraph from Betts’ comments:
‘A primary aim of developing GCMs these days is to improve forecasts of regional climate on nearer-term timescales (seasons, year and a couple of decades) in order to inform contingency planning and adaptation (and also simply to increase understanding of the climate system by seeing how well forecasts based on current understanding stack up against observations, and then further refining the models). Clearly, contingency planning and adaptation need to be done in the face of large uncertainty.’
Sounds really great (in theory). Part of what I do for a living is surface hydrography and catchment hydrologic modeling. For example, critical to that work is a clear understanding of the levels of underlying PET (Potential Evapotranspiration) and actual ET (Evapotranspiration) that apply. Yet I can easily prove that here in Australia, in the age of satellites and sophisticated sensing technology, there are (still) veritable gulfs in agreement on the details of how these are most accurately measured or estimated (regionally) between the two major bodies which are actively engaged in the development and application of GCMs (BOM and CSIRO). Furthermore, there are also groups of scientists associated with various universities, who may also be associated with BOM and/or CSIRO, who differ widely between themselves or their groupings on how this may best be done. When I was younger and more naive I used to think such irritating gulfs didn’t really matter in science. As the years have gone by and I have periodically weathered the mutual admiration/masturbation group dynamics and prejudices that can so boringly often pass under the guise of ‘peer review’, I have come to the reluctant conclusion that science – in particular climate science – is a deeply flawed field populated by groups of tribally arranged, often fanatically prejudiced, individuals. These groups can display about as high a level of mature agreement and consensus on major issues as a bunch of squabbling central African states…
‘…improve forecasts of regional climate on nearer-term timescales (seasons, year and a couple of decades) in order to inform contingency planning and adaptation (and also simply to increase understanding of the climate system by seeing how well forecasts based on current understanding stack up against observations, and then further refining the models).
Phooey I say! In your dreams!
Committed Warming called, he says now he’s not so sure.
I am reminded of one of my very, very, very favorite articles from The Onion. Methinks Richard Betts has given up, but doesn’t quite know how to say so.
http://www.theonion.com/articles/amish-give-up,19587/
The models are bound to have wrong results, because the model INPUT is incomplete: there are always 5 climate macrodrivers missing in the input. As long as those 5 macrodrivers are not taken into account [http://www.knowledgeminer.eu/climate_papers.html], all warmist papers are straight for the dustbin…
What an amazing thing to say. Perhaps he is trying to say that he is a good guy, but it is other people who are saying bad things which are hurting his reputation as a living human being.
The hide of the human to say there is no on-flow from his work and his work can be ignored because of this and that.
Reading my local news source, which has bought into the green politics of this issue, it is easy to prove that his research does matter:
http://www.smh.com.au/environment/climate-change/climate-change-may-disrupt-global-food-system-within-a-decade-world-bank-says-20140827-108w8x.html
“Unless we chart a new course, we will find ourselves staring volatility and disruption in the food system in the face, not in 2050, not in 2040, but potentially within the next decade,” she said, according to her prepared speech. [snip] A two-degree warmer world – which may occur by the 2030s on current emissions trajectories – could cut cereal yields by one-fifth globally and by one-half in Africa, she said.
All based on his work! So to deny that his work is bringing on doom-and-gloom articles guilt-tripping us into changing our values to the current flavour-of-the-day diets, changes based on his work’s predictions, speaks volumes…
Scute
August 26, 2014 at 4:30 pm
Could you explain why it is that, despite referring to the temperature rise matching the models so faithfully “for the last 50 years”, Oreskes uses a graph that stops in 1994, fully 20 years before the date of her lecture (May 2014)?
Propaganda disguised as science. Richard Betts seems to imply that it’s merely a skeptics’ idea that climate models are central to policy making.
If that were true, then ‘gigantic communication failure’ would probably be the understatement of the century.
Someone posted a link to Oreskes’ TED talk in the comments to Bob’s posts about the Rigbey paper; if that was you, then thanks for bringing it up.
Bob,
Take care with Richard Betts’ statements. Just recently he has been involved in a conference in Oxford promoting a 4 Deg. C rise in global temperatures by 2100. So, on one hand he attempts to engage skeptics at Bishop Hill, and on the other he promotes warming way above most estimates of climate sensitivity. He also justifies it all with the usual “we all believe CO2 is a GHG” etc. That is fine but the feedbacks are what might make it dangerous and the data suggest that is not happening.
Ferdinand Engelbeen
Your post at August 26, 2014 at 1:33 pm says in total
Ferdinand, you have repeatedly made that false assertion to me and I have repeatedly told you it is wrong over the years. This time I don’t need to explain why Henry’s Law is not applicable because others have already done it in this thread.
I make especial reference to the post of Jeff Glassman at August 26, 2014 at 4:30 pm because it provides his explanation of issues I have been saying for years. This link jumps to his post. And Bart makes a similar point at August 26, 2014 at 5:59 pm.
You and I have argued these issues for many years. For much of that time I was ‘a voice crying in the wilderness’ when I pointed out that the recent rise in atmospheric CO2 may have an anthropogenic cause, a natural cause, or some combination of anthropogenic and natural causes: almost everybody ‘KNEW’ the cause is anthropogenic.
I am pleased that things have moved so many people other than me are now willing to assess the assumptions underlying the attribution of the recent rise in atmospheric CO2 concentration to human activities. The human emissions may be the cause but other effects could be the cause and recovery from the Little Ice Age (LIA) is the most likely cause.
This matter goes to the crux of this thread.
The climate models downplayed by Richard Betts use atmospheric CO2 concentration as the ‘control knob’ of climate temperature; refs.
Courtney RS ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’ Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999
and
Kiehl JT, ‘Twentieth century climate model response and climate sensitivity’. GRL, vol. 34, L22710, doi:10.1029/2007GL031383, (2007).
If the cause of the rise in atmospheric CO2 is not anthropogenic then the models’ predictions and projections of anthropogenic warming cannot be correct.
Richard
PS tonyb, As this post demonstrates, I stand by my view that Matt’s post is the most important in the thread but have decided not to immediately leave WUWT (I will wait to see if it is our host or my heart failure or my lung failure which is first to remove me).
Are computer models reliable?
Yes. Computer models are an essential tool in understanding how the climate will respond to changes in greenhouse gas concentrations, and other external effects, such as solar output and volcanoes. Computer models are the only reliable way to predict changes in climate. Their reliability is tested by seeing if they are able to reproduce the past climate, which gives scientists confidence that they can also predict the future.
Met Office publication, dated 2011
Martin A:
Thanks for that hilarious quote.
I especially enjoyed this piece of nonsense concerning the climate models
No! It only gives pseudoscientists “confidence that they can also predict the future”.
Scientists know that an ability to model the past indicates nothing about ability to predict the future.
There are an infinite number of ways to model the one past that happened
and
there are an infinite number of possible futures that may happen
but
there is only one future that will happen.
A prediction of the future is a forecast. And forecast skill can only be determined by comparing a series of predictions to outcomes which subsequently happened. No climate model has existed for sufficient time for it to have demonstrated forecast skill.
No climate model has any demonstrated forecast skill; none, zilch, nada.
Richard
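To make the point concrete, here is a minimal sketch (my own, with hypothetical numbers) of how forecast skill is scored in practice: a set of forecasts is verified against outcomes that occurred after the forecasts were issued, and compared with a naive reference such as climatology.

```python
# A minimal sketch with hypothetical numbers: a mean-squared-error skill score
# compares forecasts against verifying observations, relative to a naive
# reference forecast (here, climatology). Skill > 0 means the forecasts beat
# the reference; a single hindcast of the past proves nothing of the sort.
import numpy as np

outcomes = np.array([0.12, 0.08, 0.15, 0.05, 0.20])    # hypothetical verifying observations
forecasts = np.array([0.30, 0.28, 0.33, 0.31, 0.35])   # hypothetical model forecasts
climatology = np.full_like(outcomes, outcomes.mean())  # naive reference forecast

mse_model = np.mean((forecasts - outcomes) ** 2)
mse_ref = np.mean((climatology - outcomes) ** 2)
skill = 1.0 - mse_model / mse_ref                      # positive only if better than reference
print(f"MSE skill score vs climatology: {skill:.2f}")
```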
Counterexample: FIO-ESM shows skill, but only globally. It gets regions completely wrong.
Scott Basinger
You say
The global variation is fixed as an input of arbitrary cooling attributed to aerosols. Failure to get regional variation right indicates no ability to forecast.
I repeat that no climate model has existed for sufficient time for it to have provided a series of forecasts so no model has demonstrated forecast skill.
Richard
Their reliability is tested by seeing if they are able to reproduce the past climate.
That is a necessary, but far from sufficient, condition for their reliability… It is very easy to reproduce the past if you have a lot of control knobs which allow you to fit the curve. Just by varying the (human) aerosols/CO2 sensitivity tandem you can have the same fit of the past, while the future warming may be half the middle estimate of the IPCC.
See: http://www.ferdinand-engelbeen.be/klimaat/oxford.html
Yes. I have a Lotus 1-2-3 spreadsheet that reproduces the past precisely (by reading from a table). But it cannot predict anything at all.
Seeing if the models can reproduce the past is the fallacy known as “testing on the training data” (since the past was also used in the parameterisation of the models).
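A toy illustration of that fallacy (my own sketch, not anyone’s climate model): a heavily parameterised fit reproduces its training period almost perfectly yet says nothing useful about the period that follows.

```python
# A toy demonstration of "testing on the training data" (my own sketch, not a
# climate model): a heavily parameterised polynomial reproduces the period it
# was fitted to almost perfectly, yet extrapolates wildly outside it.
import numpy as np

rng = np.random.default_rng(0)
t_train = np.linspace(0.0, 1.0, 50)                    # normalised "past" time axis
past = 0.5 * t_train + 0.3 * np.sin(8 * np.pi * t_train) + rng.normal(0, 0.05, t_train.size)

coeffs = np.polyfit(t_train, past, 9)                  # many tunable "knobs"
fit = np.polyval(coeffs, t_train)
rms = np.sqrt(np.mean((fit - past) ** 2))
print(f"RMS error on the training period: {rms:.3f}")  # small: the past is "reproduced"

t_future = np.linspace(1.0, 1.4, 20)                   # outside the training period
forecast = np.polyval(coeffs, t_future)
print(f"Forecast range on unseen times: {forecast.min():.1f} to {forecast.max():.1f}")
```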
jorgekafkazar
August 26, 2014 at 4:17 pm
Comparing a laboratory beaker to the entire ocean is an extrapolation.
Confirmed by over 2 million direct measurements of oceanic waters…
Basing your argument on a single volcano also lowers its credibility.
That single volcano emitted 100 times as much debris (and CO2) as any other eruption of the past century. If that doesn’t give some extra CO2, you can forget the rest. Volcanic venting is at a maximum just after an eruption and declines over time.
Anyway, it would be a hell of a coincidence if all volcanoes over the world were suddenly emitting more CO2 completely in parallel with human emissions over the past 160 years, while there is not the slightest indication that they did anything similar over the past 800,000 years…
Jeff Glassman
August 26, 2014 at 4:30 pm
Jeff, we have been there before. You are misinterpreting a lot of points:
(3) …IPCC puts the flux of CO2 from burning fossil fuels at about 6 GtC/yr into the air, but only 3 GtC/yr back to — where, all of the above? Why is the fate of natural CO2 emissions different than the fate of manmade emissions?
The IPCC doesn’t say or imply that the fate of the human emissions is different from natural emissions; they only say that the natural cycle is negative: more sink than source, and therefore the human contribution is (nearly) the only cause of the increase.
Same for (4).
(5) IPCC says the rate, net or not, of anthropogenic emissions matches the rate of increase in the atmosphere, therefore the former is the Cause and the latter the Effect.
Not only do the rates match, the anthro emissions are about double the increase in the atmosphere. It seems quite clear to me which causes what. I don’t think that the increase in the atmosphere causes the human emissions…
(6) Agreed, but the IPCC also agrees with that.
(7) IPCC experts say the surface of the ocean is a bottleneck to the absorption of CO2
Which is proven by a few longer series of measurements: the increase in DIC (dissolved inorganic carbon) in the ocean surface at Bermuda and Hawaii is about 10% of the CO2 increase in the atmosphere. That is the Revelle/buffer factor. It doesn’t matter if the increase is by nCO2 or aCO2. From the isotopic changes one can see that the isotopic composition of the ocean surface simply follows the isotopic composition of the atmosphere:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/sponges.jpg
(8) Atmospheric CO2 is the Effect of temperature changes in the surface layer, not the Cause. This is demonstrated in the paleo record from Vostok.
The long-term effect of Henry’s law on the dynamic deep ocean – atmosphere system is 17 ppmv/K. Including the opposite effect on vegetation, the average over the past 800,000 years is 8 ppmv/K, as seen in ice cores. The warming since the LIA is not more than 1 K, thus the maximum increase of CO2 in the atmosphere caused by temperature is not more than 8 ppmv. Not the 100+ ppmv increase which is measured…
(9) The diagram showing the T dependency of the buffer factor was omitted now in order not to confuse the reader.
It doesn’t matter if the buffer factor changes with temperature; a factor of 8 or 12 makes no difference to the fact that the oceans are net absorbers of CO2, not net emitters.
(10) Conclusions: atmospheric ACO2 is no more a Long Lived GHG than is nCO2. Neither ACO2 nor nCO2 is “well-mixed” in the atmosphere.
The IPCC does fully agree with the first sentence. aCO2 and nCO2 are well mixed: a difference of 2% of full scale across 95% of the atmosphere, while the seasonal flux cycles 20% of all CO2 in and out, is in my opinion well mixed.
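As a back-of-envelope check of the figures quoted above (the sensitivities and the roughly 1 K warming since the LIA are the comment’s numbers, not independently derived), the temperature-only contribution works out to a few ppmv:

```python
# A back-of-envelope check using the figures quoted in the comment above.
OCEAN_SENSITIVITY_PPMV_PER_K = 17.0     # quoted deep ocean-atmosphere equilibrium response
COMBINED_SENSITIVITY_PPMV_PER_K = 8.0   # quoted ice-core (ocean plus vegetation) response
WARMING_SINCE_LIA_K = 1.0               # upper bound used in the comment
OBSERVED_RISE_PPMV = 100.0              # approximate rise above pre-industrial

temperature_driven = COMBINED_SENSITIVITY_PPMV_PER_K * WARMING_SINCE_LIA_K
print(f"Temperature-driven rise: ~{temperature_driven:.0f} ppmv")
print(f"Left unexplained by temperature alone: ~{OBSERVED_RISE_PPMV - temperature_driven:.0f} ppmv")
```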
Ferdinand Engelbeen
August 26, 2014 at 1:33 pm
It is applicable for the static conditions for which it is intended. Not for the dynamic flows of the ocean, however.
It is applicable for all static and dynamic conditions. The dynamic fluxes in and out of the oceans are directly proportional to the partial pressure differences of CO2 between seawater and the atmosphere. The partial pressure of CO2 in seawater is directly influenced by temperature, as per Henry’s law (all other influences being the same). Thus the pressure difference, and hence the fluxes, respond directly to temperature. That makes some 3% difference at the main sink/source places for a 1 K change in temperature:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/upwelling_temp.jpg
which levels out at a 17 ppmv change in the atmosphere, the same for the dynamic as for the static equilibrium.
Reasserting it does not make it so. Note Pamela’s post above.
The CO2 levels during the MWP were ~285 ppmv. During the LIA, ~280 ppmv. A difference of ~6 ppmv for ~0.8°C cooling. That is all the difference you can get back from 500-1500 years ago, if the waters didn’t mix with the rest of the deep oceans. Seems quite unlikely to cause the current 100+ ppmv extra…
BTW, how do you think the oceans react to increased CO2 levels in the atmosphere? Not at all?
Nobody really knows the CO2 levels during the MWP. And, even if we did, we do not know the dynamics globally which prevailed. The CO2 level in the atmosphere is a complex interplay of sinks and sources.
Bart, increased pressure of CO2 in the atmosphere increases the uptake of CO2 by the oceans and decreases the releases. Nothing complex, simple solubility and the direct application of Henry’s law. And I haven’t heard of any plants which are growing less fast with more CO2…
If the ocean content is homogeneous, of which there is no guarantee. You have a number of unstated, and unsupported, assumptions built into your model.
The negative feedback of the oceans (and vegetation) to increases of CO2 in the atmosphere is independent of the source of the extra CO2. If CO2 in the atmosphere increases, the oceans will increase their uptake and decrease their releases. If the extra CO2 is from more upwelling (either concentration or ocean flux or both) or from increased temperatures, the oceans (and vegetation) counteract that. That is because both uptake and release are directly proportional to the pCO2 difference between the atmosphere and the ocean waters.
That means that for any disturbance in the balance, the CO2 level in the atmosphere will integrate to a new level that compensates for the disturbance: in the case of increased upwelling, the change in influx is halved in effect (compensated by increased outflux); in the case of increased temperature, it settles at a 17 ppmv/K increase in the atmosphere.
If the MWP/LIA CO2 levels were the cause, they should have been 200+ ppmv higher than pre-industrial level, but there is not the slightest indication that the upwelling increased over the past decade(s) to give the current 100+ ppmv increase…
There is.
curve fitting, not based on any real world process…
Oh, come on Ferdinand. That is some incredibly detailed correlation. The odds of it just happening randomly are vanishingly small.
Bart, you can have exactly the same curve by simply adding the CO2 increase caused by human emissions to the response of CO2 to any temperature variability.
The response of CO2 in the atmosphere to temperature changes is a simple integration to a new setpoint, due to the pCO2 response of the oceans (or vegetation) to the changed temperature. That means the CO2 response always shows a 90 deg. shift after a temperature shift, regardless of frequency.
You misinterpreted that as a Bode response for a gain/feedback system, but there is no practical feedback of CO2 on temperature to make that true. The response of CO2 to temperature is simply straightforward, with a negative feedback on its own change.
I have bought Matlab (before WUWT announced the free course on R…) and I am experimenting with it; I will show the first results of the above tomorrow…
I will be out of town for the weekend, and probably will not check back before Monday at the earliest.
You can get a 90 deg phase shift with a simple lag response, but not across all frequencies. Basically, what that means is, the system is acting as an integrator over intervals shorter than the dominant time constant. To get a 90 deg phase shift for very low frequency, that time constant has to become very long. If the dominant time constant is very long, then over any relatively short interval, the outcome is indistinguishable from a pure integration.
That is what we are dealing with, and what you will need to demonstrate to yourself. You will not be able to get a 90 deg phase shift across the entire frequency spread unless you effectively have an integrator for the given span of data. And, what we see in the data is a 90 deg phase shift even down to the lowest observable frequencies.
And, no, you cannot get the same curve – it will not have the appropriately scaled and shifted variations.
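A quick numerical sketch of that phase argument (my own, with an assumed time constant): a pure integrator lags by 90 degrees at every frequency, while a first-order lag only approaches 90 degrees at periods much shorter than its time constant.

```python
# A numerical check of the phase argument (my own sketch, assumed time constant):
# a pure integrator 1/(jw) lags the input by 90 degrees at every frequency, while
# a first-order lag 1/(1 + jw*tau) only approaches 90 degrees at periods much
# shorter than tau and lags noticeably less at long periods.
import numpy as np

tau = 50.0                                          # assumed dominant time constant, years
periods = np.array([3.0, 10.0, 30.0, 100.0, 300.0])  # years
w = 2 * np.pi / periods                             # angular frequency, rad/year

phase_integrator = np.degrees(np.angle(1 / (1j * w)))        # -90 deg everywhere
phase_first_order = np.degrees(np.angle(1 / (1 + 1j * w * tau)))

for p, a, b in zip(periods, phase_integrator, phase_first_order):
    print(f"period {p:5.1f} yr: integrator {a:6.1f} deg, first-order lag {b:6.1f} deg")
```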
The time constant to reach a new equilibrium is decades; most of the temperature/CO2 variability is within 2-3 years. Thus there is no problem maintaining the 90 deg. shift for the variability (mainly caused by tropical vegetation), while the increasing trend is (proven to be) from a different process.
Nonsense. The CO2 variability has prominent components in the range of 3-30 years and longer.
Steve Oregon
August 26, 2014 at 6:07 pm
I hate to keep repeating myself but if it is true and accurate that the “Anthropogenic (man-made) CO2 contributions cause only about 0.117% of Earth’s greenhouse effect, (factoring in water vapor)”
A full answer needs a lot of pages, but let’s focus on the reference, which contains a lot of problems:
Table 1 shows a column of “natural additions”, but there is no column for “natural removals”, which are higher than the natural additions for all GHGs mentioned. Thus (near) 100% of the additions since the pre-industrial era are man-made…
Which makes that the 0.117% is wrong too…
‘Thus (near) 100% of the additions since the pre-industrial era are man-made…’
It never takes long for the long ago discredited pseudo-mass balance argument to rear its head again.
Bart, if the natural uptake of CO2/CH4/etc. is larger than the natural release and the authors only look at the releases but “forget” to mention the uptake, who is fooling whom? But anyway, a mass balance still is valid, unless you have proof that the only alternative explanation (a fourfold increase in the circulation of CO2/CH4, etc., completely paralleling human emissions) did take place…
We do not know that “natural removals” would be higher than “natural additions” if anthropogenic inputs were not causing expansion of the natural sinks. This is the basic fallacy of the “mass balance” argument. I do not know why you continually fail to see it.
Over the past 800,000 years, there was a remarkably linear natural equilibrium (with a variable lag) between temperature and atmospheric CO2 levels. Thus over long(er) time frames the natural sinks were in equilibrium with natural sources. Natural sinks expand with total CO2 in the atmosphere above the (temperature-driven) equilibrium, regardless of the cause (anthro or natural). The current ~100 ppmv (~210 GtC) above the pre-industrial equilibrium causes some 2 ppmv/year (4.5 GtC/year) extra uptake. That is an e-folding removal time of slightly over 50 years, or a half-life of ~40 years. The removed quantity is less than half the yearly human emissions. Which means that the full increase is from human emissions…
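For reference, the removal timescales quoted above follow from simple division, using the comment’s own figures and assuming the uptake stays proportional to the excess:

```python
# Removal-timescale check using the figures quoted in the comment above,
# assuming the net uptake stays proportional to the excess.
import math

excess_ppmv = 100.0        # excess above the temperature-dictated equilibrium
uptake_ppmv_per_yr = 2.0   # current net natural uptake of that excess

e_fold_years = excess_ppmv / uptake_ppmv_per_yr   # ~50 years
half_life_years = e_fold_years * math.log(2)      # ~35 years, i.e. roughly the ~40 quoted
print(f"e-folding time ~{e_fold_years:.0f} yr, half-life ~{half_life_years:.0f} yr")
```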
So, you have imposed a constraint that the natural system cannot be responsible for the rise, and concluded that the rise is therefore not due to nature. Circular reasoning at its finest.
It cannot be from natural causes, as that would add to the human releases, with as a result an increase higher than the human emissions alone. Which isn’t what is observed. There is also not the slightest indication of increased releases from the oceans, and the biosphere is a proven sink.
Incorrect. Even now, the rise isn’t as much as the virtual accumulation of human emissions, which says that sinks are active. If they are very active, and to all indications, they are, then they can take out everything that we add, and leave behind only the remnant of the much more powerful natural forcings.
Bart, the response of the natural cycle to the increase in the atmosphere (currently over 100 ppmv above the temperature-dictated equilibrium) is between 10% and 90% of the human emissions. Thus while the sinks are quite variable, they didn’t take away all human emissions (in mass) over the past 55 years…
You don’t know that, Ferdinand. You are making unsupported assertions.
george e. smith
August 26, 2014 at 6:14 pm
My answer to Steve was based on vapor only, all other things being equal, but I fully agree that clouds are the most important problem in climate models: they are mainly seen as a positive feedback, while they are mainly negative feedbacks, as Willis has often repeated…
Pamela Gray
August 26, 2014 at 6:54 pm
Could it also be that we are experiencing the 700-1000 year overturning circulation that has brought up CO2 from the Medieval Warming Period
Hardly of influence: seawater temperatures might have been 1°C warmer during the MWP, but CO2 levels ~6 ppmv higher, so compensating each other in part. The same for LIA waters: colder, but less CO2 in the atmosphere. Even if that isn’t mixed with the rest of the deep waters, when it reaches the surface again, the difference may be at maximum 17 μatm at the upwelling places. At equilibrium that will give ~8 ppmv increase/decrease in the atmosphere…
All based on dodgy ice core data which cannot be verified, and assuming that no other processes occur in the deep oceans to modulate the concentration during their journey.
Not only based on ice core data (which are, btw, quite accurate, though averaged over some period): the carbon content of the deep ocean currents is enhanced by the fallout of carbonate shells from shell-bearing plankton (coccoliths). But more important: there is no increase in pCO2 at the upwelling zones other than the increase caused by the increase in the atmosphere:
http://www.umeoce.maine.edu/docu/Fujii-JO-2009.pdf
Moreover, to dwarf the human input, the increase in pCO2 difference should be at least fourfold, to give a fourfold outflux, for which there is not the slightest indication. Not in the pCO2(oceans), not in the residence time, not in the δ13C trends…
“But more important: there is no increase in pCO2 at the upwelling zones other than the increase caused by the increase in the atmosphere:”
Which came first, the chicken or the egg? You are begging the question.
If the difference between the pCO2 of the oceans and the pCO2 in the atmosphere didn’t change over time, then the CO2 influx didn’t change. Thus no extra CO2 influx to increase the CO2 level in the atmosphere…
Non sequitur. The pCO2 on both sides is actively regulated to zero – that is what Henry’s Law is all about – so no change between the two is virtually tautological.
The pCO2 on both sides is actively regulated to drive the difference to zero.
[Snip. Invalid email address. To continue posting, use a valid email address, per site policy. ~ Mod.]
Please… Run along, Edward. Ferdinand and I are having a discussion.
[Snip. Invalid email address. To continue posting, use a valid email address. Simple. ~ Mod.]
Sorry Mr “Mod”
…
The email address is valid.
….
I’ll use a different “valid” email address if you like….
Just post your email address, and I’ll send an email to it with one of my “valid” email addresses.
[Reply: Use a valid email address in your posts. The one you are using is not valid. It comes back as: “Bad”. You may check it yourself with the site that Anthony uses: http://verify-email.org ~ mod.]
Edward Richardson,
Well you’ve had a couple of emails you’ve tried to use here, such as the “c_u_later”, “haxelgrouse” and a couple of names here too, such as “Chuck” and Mr “C_U_LATE_R” aka chucK”, all coming from the same place in Connecticut.
This “squeak” one you are using now doesn’t work, it comes up invalid in three different email validators.
So, since a valid email address is required, and you’ve been warned about that as a violation of our commenting policy, and since you’ve used three different names now in an attempt to shape shift, I’m declaring that you have officially reached troll status.
That means now all your comments, no matter which persona they come from, will be held for moderation. If they don’t have a valid email address, they go to the bit bucket.
Mr. Mod
…
You better get a different email checker.
…
I tested three of my email addresses.
1) My personal
2) My business
3) My brother’s
…
All came back “bad” but I use them every day.
REPLY: Funny that, some email addresses you use here are gmail, some are yahoo, all check bad with three different email verification services. That’s not a coincidence, that’s you.
No further comments of yours will be allowed. – Anthony
OK Mr Watts, this one passes your test
REPLY: this email address passes the test, but you just tried to change your name again. -you’ll remain on moderation until such time you stop such shenanigans. There won’t be any notice if you further violate our comments policy, you’ll simply be banned. -Anthony
richardscourtney
August 27, 2014 at 1:19 am
Ferdinand, you have repeatedly made that false assertion to me and I have repeatedly told you it is wrong over the years. This time I don’t need to explain why Henry’s Law is not applicable because others have already done it in this thread.
And I have repeatedly answered that Henry’s Law is applicable, as good for a static as for a dynamic system. Some longer answers to Glassman and Bart are under moderation, so will appear soon…
It is a pity to hear of your health conditions, our old generation of skeptics unfortunately is thinning out, but as long as we can have a good fight over ideas, we are staying alive (at least mentally)…
“And I have repeatedly answered that Henry’s Law is applicable, as good for a static as for a dynamic system. Some longer answers to Glassman and Bart are under moderation, so will appear soon…”
Heavens be praised!!!! Let’s hope the moderation is swift!
Putting aside an Engelbeen response to Bart, I’ve only been waiting about 4 years to get a truly coherent rebuttal by Engelbeen to Jeff Glassman’s discussion of the ‘CO2 problem’, in particular the δ13C problem here:
http://www.rocketscientistsjournal.com/2010/03/sgw.html
The response should be comprehensive, particularly including addressing the issue whether the atmosphere is well-mixed with respect to nCO2 and ACO2 (or not).
I draw Engelbeen’s attention to the fact that, ever since there has been a suite of reasonable-coverage global CO2 measuring stations, the discrepancy between the mean NH and SH atmospheric pCO2 levels has been steadily increasing, and similar effects apply for δ13C. Perhaps more importantly, there are also marked regional gradients – especially in the SH and especially as the Southern Pole is approached.
Well mixed? I don’t think so. Particularly not in the water.
Thus I have also never been comfortable with Engelbeen’s assumption that Henry’s Law rules the global carbon mass flow model. The mass flow model must include the temperature-dependent flux of CO2 to and from the ocean to modulate the natural exchanges of heat and gases. In my view this also means the flux should take into account the reverse temperature-dependent uptake rate of oceanic cyanobacteria – particularly in those regions where cyanobacterial primary productivity is very high, e.g. (you guessed it) in the 40S band surrounding Antarctica, i.e. surrounding the Southern Pole.
Engelbeen’s assumption that the mass flow is controlled by purely abiotic/chemothermodynamic factors is wrong. For example, it essentially assumes that the sinking flux of biologically fixed carbon is zero. It also takes no account of the flux effects of aerobic decomposition in the ocean. Now I wonder what cumulative mass of ignored peer-reviewed literature is involved there? I’m amazed when people assume the oceans are biological deserts. By the same token I am also not impressed with arguments which use CO2 levels from a single polar ice core during the LIA and MWP.
Remember we are talking about an oceanic system here carrying, on average, about 100,000 cells/mL of Prochlorococcus and about 10,000 cells/mL of Synechococcus in the upper 20 m or so. In effect, the role of the biotic factors is partly why Jeff Glassman can suggest the δ13C for fossil fuel would have had to be considerably heavier, ~ -13.7‰ instead of -29.4‰, for the increase in atmospheric CO2 to have been caused by man.
Sorry to be skipping around. This is a big complex system – more complex than Engelbeen or the IPCC can imagine, it seems.
But, please, do go ahead – in the spirit of having a good fight over ideas, staying alive (at least mentally)…please make the day (asap) of this poor tired old, nearly retired….. isotope geochemist.
Steve Short:
Thanks for that.
You may want to search WUWT for the many interactions of Ferdinand and me disputing all the matters you raise. In particular, as an “isotope geochemist” you may be interested in my rebuttals of Ferdinand’s assertions concerning indications of 13C:12C isotope changes.
Richard
Steve, I have had several discussions with Glassman in the past. He is a master at misinterpreting what others say or mean, and his answers are extensive, citing non-relevant items, while the real answer is lacking or drowned in the mass…
But to answer your “not well mixed” problem, here are the yearly average increases of CO2 at several stations, 1959-2004 (needs an update):
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends.jpg
The difference between stations from the North Pole down to the South Pole over the past years is less than 5 ppmv, or less than 2% of full scale:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends_1995_2004.jpg
I call that well mixed, taking into account that about 20% of all CO2 goes in and out of the atmosphere over the seasons.
As the ITCZ limits the NH-SH atmosphere exchanges to about 10%/year, the lag of the SH increase shows that the source is in the NH.
The same for the δ13C trends:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/d13c_trends.jpg
As aCO2 is readily mixed within the rest of the atmosphere, the influence of human emissions (of which 90% are in the NH) is quite well mixed throughout the whole world’s atmosphere, as the δ13C trends show.
The oceans are a different story: except for the ocean’s surface which simply follows the CO2 and δ13C trends of the atmosphere, in the deep oceans there is hardly any influence measurable except at downwelling places.
About the influence of Henry’s Law on the in/out fluxes between atmosphere and (deep) oceans:
The flux rate between oceans and atmosphere depends on two items: the partial CO2 pressure difference and the wind speed. Assuming the latter didn’t change that much over the past century, the flux rate is directly proportional to the pCO2 difference, see:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/maps.shtml
The largest pCO2 difference is at the ocean sink and upwelling places, where the ocean flows bypass the ocean surface and go directly into the deep or return from the deep. That is less than 5% of ocean surface each way, where the atmosphere – deep ocean exchanges are concentrated.
From the δ13C and δ14C balances we have an estimated ~40 GtC/year deep ocean-atmosphere CO2 exchange. The absolute figure is not important, but can be used as a base for the calculations.
The pCO2 difference at the North Atlantic sink place is about 250 μatm, atmosphere higher than ocean. At the main upwelling places in the Equatorial Pacific, the difference is 350 μatm, ocean higher than atmosphere. Both generate the ~40 GtC/year fluxes (there is a slight difference of ~3 GtC more sink than source, but that is not of importance here).
What is the influence of temperature? Any increase of seawater temperature increases the pCO2(aqua) by about 17 μatm/K. That means that with a worldwide increase in temperature of 1 K, the pCO2 difference, and thus the inflow at the upwelling places, increases by 17 μatm, from 350 to 367 μatm, and thus the influx increases from ~40 GtC/year to ~41.9 GtC/year.
The opposite happens at the sink places: less pCO2 difference, from 250 reduced to 233 μatm, thus the outflux is reduced from ~40 GtC/year to ~37.3 GtC/year.
Net result: an initial increase of CO2 in the atmosphere with near 5 GtC or ~2.5 ppmv in the first year.
As the 2.5 ppmv extra after the first year opposes the pCO2 changes caused by the temperature increase, the inflow/outflow difference is reduced in the second year, and so on, until the original fluxes are restored, which happens when the atmosphere has gained the extra 17 μatm (~ppmv):
http://www.ferdinand-engelbeen.be/klimaat/klim_img/upwelling_temp.jpg
Consequences:
– Any CO2 influx change from deep ocean upwelling (like from MWP or LIA waters) will be halved in effect in the atmosphere by the changes in downwelling.
– A constant CO2 increase caused by only a constant temperature difference is impossible, as that will be met with decreased releases and increased uptake from the increasing pCO2 in the atmosphere.
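The relaxation described above can be sketched numerically. The flux and pressure-difference numbers below are the ones quoted in the comment, and the linear proportionality of each flux to its pCO2 difference is the stated assumption; the loop simply shows the atmosphere rising until the original ~40 GtC/year fluxes are restored at about +17 ppmv.

```python
# A minimal relaxation sketch of the two-leg exchange described above, using the
# flux and pressure-difference figures quoted in the comment; nothing here is an
# independent ocean model.
PPMV_PER_GTC = 1.0 / 2.12        # approximate atmospheric conversion factor

flux_base = 40.0                 # GtC/yr through each of the source and sink legs
dp_source = 350.0                # uatm, ocean above atmosphere at the upwelling places
dp_sink = 250.0                  # uatm, atmosphere above ocean at the sink places
shift = 17.0                     # uatm rise in seawater pCO2 for +1 K

atm_rise = 0.0                   # ppmv gained by the atmosphere
for year in range(1, 201):
    influx = flux_base * (dp_source + shift - atm_rise) / dp_source    # upwelling leg
    outflux = flux_base * (dp_sink - shift + atm_rise) / dp_sink       # sink leg
    atm_rise += (influx - outflux) * PPMV_PER_GTC
    if year in (1, 10, 50, 200):
        print(f"year {year:3d}: atmosphere +{atm_rise:5.2f} ppmv")
# The rise asymptotes toward ~17 ppmv, at which point both legs return to ~40 GtC/yr.
```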
Forgot to answer (the rest was already long enough):
Engelbeen’s assumption the mass flow is controlled by purely abiotic/chemothermodynamic factors is wrong.
There are many works on the NPP (net primary production) of biotics in the oceans, including the influence of temperature, but the main influence is from upwelling nutrients. They tried to raise the NPP with iron seeding, which indeed caused algal blooms, but the net result in CO2 sink rate was quite modest.
Anyway, most of the ~40 GtC deep ocean-atmosphere exchange rate is inorganic and what sinks as organics is dispersed in the large deep oceans reservoir and hardly influences DIC in the deep.
The pCO2(aqua) measurements at the main source/sink places include the biotic use of CO2, thus that is accounted for.
why Jeff Glassman can suggest the δ13C for fossil fuel would have had to be considerably heavier, ~ -13.7‰ instead of -29.4‰, for the increase in atmospheric CO2 to have been caused by man.
Again a misinterpretation by Jeff (and by Richard):
If there was no exchange with other reservoirs, the drop in δ13C would be much faster than measured. But there are exchanges with
– vegetation: not so important, as most uptake comes back as release in the next season/years. Only the ~1 GtC/year more permanent uptake gives an increase in δ13C.
– ocean surface: not so important, as that is only 10% of the change in the atmosphere and slightly increases the δ13C level in the atmosphere (~2‰).
– deep oceans: important, as what goes into the deep oceans is the current isotopic composition (minus the isotopic shift at the boundary), but what comes out is the composition of the deep oceans from waters 500-1500 years old (minus the isotopic shift at the boundary). That allows us to estimate the deep ocean-atmosphere exchanges:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/deep_ocean_air_zero.jpg
which is ~40 GtC/year. A similar exchange rate can be deduced from the 14C bomb spike decay rate.
Steve, the temperature correction factor is used for all online ship measurements (over 2 million nowadays) all over the world’s oceans. Not my fault. But I have no problem using other (better?) factors.
I never said that CO2/bi/carbonate is well mixed in the oceans. CO2 and δ13C are well mixed in the atmosphere, that is what you asked for. And I have addressed the “problem” of the discrepancy of the δ13C drop over time as questioned by Glassman and Richard.
And the uptake/release of CO2 is directly proportional to the pCO2 difference between oceans and atmosphere. That includes the biological changes in the waters measured on many places of the earth.
Thus if the further discussion is about my – restricted – knowledge of biological processes in the oceans, then I have little to add. If the discussion is about the result of increased temperatures, then we can look at the probabilities.
All we know is that the short term variability (seasons to 2-3 years) is 4-5 ppmv/K for all processes on earth combined (but dominated by vegetation), while the long term variability (from a lot of ice cores) is 8 ppmv/K (dominated by the oceans).
As vegetation in general increases its uptake with higher temperatures (more uptake and more area), the 8 ppmv/K must be the lower bound for the oceans. Can you agree on that?
“The flux rate between oceans and atmosphere depends of two items: partial CO2 pressure difference and wind speed. Assuming the latter didn’t change that much over the past century, the flux rate is directly proportional to the pCO2 difference.”
The evidence is that wind speeds have been increasing over the oceans (but not, or the reverse, over the continents) over the last 40 – 50 years. See reviews by people like Farquhar and Roderick etc.
“What is the influence of temperature? Any increase of seawater temperature increases the pCO2(aqua) with about 17 μatm/K. That makes that with a worldwide increase in temperature of 1 K, the pCO2 difference, and thus the inflow at the upwelling places increases with 17 μatm thus from 350 to 367 μatm and thus the influx increases from ~40 GtC/year to ~41.9 GtC/year.”
I just ran the classic standard Nordstrom seawater composition through USGS PHREEQC with the default USGS database. At 14 C the log pCO2(aqua) is -3.41 and at 15 C it is -3.41 (to 2 significant figures). Thus at 14 C pCO2(aqua) is 389.0 μatm and at 15 C pCO2(aqua) is also 389.0 μatm. Given the number of significant figures, this implies a difference of ~<4.4 μatm or effectively <4.4 μatm/K. The French BRGM database gave exactly the same results. Concerned, I ran the model again using the Lawrence Livermore database (just in case USGS and BRGM are wrong…). At 14 C the log pCO2(aqua) is -3.49 and at 15 C it is -3.48. Thus at 14 C pCO2(aqua) is 323.6 μatm and at 15 C pCO2(aqua) is 331.1 μatm. This is still a difference of only 7.5 (±4.4) μatm/K. I also checked at lower temperatures (9-10 C) and at higher temperatures (19-20 C) and can’t get values anywhere near your quoted 17 μatm/K using any reputable chemothermodynamic database.
Given that the average temperature of the oceans is around 14.5 C, it seems to me (1) your quoted value of the pCO2(aqua) change with temperature of 17 μatm/K is quite wrong. This is going to have profound effects on your estimate of the change of annual influx for a 1 K change in temperature – even before (2) we get into discussion of the consequences of a true lower value at the same or lower temperatures, e.g. in the Antarctic Circumpolar Current in the presence of high cyanobacterial primary productivity. Looks to me like your world view is a bit shaky.
I just ran this calculation again with a full Pitzer-optimized 2014 EQ3/6 database model at 17 C, which seems to be the best consensual value for the mean temperature of the ocean surface (the 14.5 C value was a literature grab value trying to take account of upwelling). The actual temperature is not critical. This model gives a value around 4.0 – 4.4 μatm/K (depending upon some very minor assumptions) for the temperature dependence of pCO2(aqua) – only a quarter of your oft-quoted (above) and model-critical 7 μatm/K value.
Just a correction of a typo my previous final sentence: “This model gives a value around 4.0 – 4.4 μatm/K (depending upon some very minor assumptions) for the temperature dependence of pCO2(aqua) – only a quarter of your oft-quoted (above) and model-critical 17 μatm/K value.” (not 7 as I wrote). Please also note my model includes the all important borate which contributes nearly a quarter of the buffering of seawater above pH 7.5 and exhibits the correct Revelle factor.
Steve, my figures came from the direct measurements of seawater pCO2 where the measurements are corrected for the difference in temperature between measuring device and seawater inlet temperature. See chapter 2e of:
http://www.ldeo.columbia.edu/res/pi/CO2/carbondioxide/text/LMG06_8_data_report.doc
But in fact it doesn’t matter that much: even if it were half the 17 μatm/K, that only halves the change in fluxes and the target in the atmosphere to restore the original fluxes. The principle remains the same: any change in ocean temperature will be met with a change in CO2 level in the atmosphere which restores the original fluxes.
If the wind speed changed over time, that only changes the fluxes (in both directions?) which influences the speed at which a new equilibrium is reached, but that doesn’t change the equilibrium itself (except if there is a disequilibrium in wind speed changes between source and sink places).
The change in atmospheric CO2 levels at equilibrium equals the average change in pCO2 of seawater. If the change in seawater pCO2 is less than 17 μatm/K, then the effect of ocean temperature on atmospheric CO2 levels is also smaller.
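For what it is worth, a figure near 17 μatm/K can be reproduced from the exponential temperature correction commonly applied to shipboard seawater pCO2 data. I am assuming the roughly 4.2%/K Takahashi-type factor is the correction the LDEO report refers to; note that the resulting μatm/K value scales with the background pCO2, which may explain part of the disagreement between the two figures being argued here.

```python
# Reproducing where a figure near 17 uatm/K can come from. The ~4.2%/K
# exponential (Takahashi-type) temperature correction is assumed here to be the
# one the LDEO report refers to; the uatm/K sensitivity scales with the
# background pCO2, so lower backgrounds give smaller absolute values.
import math

TEMP_FACTOR_PER_K = 0.0423      # assumed fractional change in seawater pCO2 per kelvin

def shifted_pco2(pco2_ref_uatm: float, delta_t_kelvin: float) -> float:
    """Shift a seawater pCO2 value by delta_t kelvin using the exponential correction."""
    return pco2_ref_uatm * math.exp(TEMP_FACTOR_PER_K * delta_t_kelvin)

for pco2_ref in (300.0, 350.0, 400.0):
    sensitivity = shifted_pco2(pco2_ref, 1.0) - pco2_ref
    print(f"background {pco2_ref:.0f} uatm -> ~{sensitivity:.1f} uatm/K")
```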
“my figures came from the direct measurements of seawater pCO2 where the measurements are corrected for the difference in temperature between measuring device and seawater inlet temperature.”
That is a 2014 report on a set of measurements conducted in 2006 on a ship. The practical measurements are complicated and easily subject to bias. They are also exclusively location dependent, as you acknowledge – rather ironic in the context of your reliance on a well-mixed system. More importantly, given that, as you claim, Henry’s Law is well known and applies, it makes much more sense to use critically reviewed databases which include the current best estimates of the Law’s parameters and of all the important buffering reaction equilibria in the water. My approach is utterly commonplace.
“But in fact it doesn’t matter that much: even if it were half the 17 μatm/K, that only halves the change in fluxes and the target in the atmosphere to restore the original fluxes.”
I am saying that using the above modeling methodology the correct value is only 25 – 44% of your value, probably nearer 25% but certainly not 50%.
“The principle remains the same: any change in ocean temperature will be met with a change in CO2 level in the atmosphere which restores the original fluxes.”
“The change in atmospheric CO2 levels at equilibrium equals the average change in pCO2 of seawater. If the change in seawater pCO2 is less than 17 μatm/K, then the effect of ocean temperature on atmospheric CO2 levels is also smaller.”
These are simply truisms any good-quality college lecturer would state, used by you only to impress the naive. So what? Why state them? As you claim to be, I too am relatively uninterested in people who employ misinterpretation of what others say or mean and/or provide answers which are extensive, citing non-relevant items, while the real answer is lacking or drowned in the mass… Please note I am not padding my text with student-level truisms and would appreciate you doing the same.
I am making the valid points that (1) it is clear you are using a single significant overestimate of the mean abiotic temperature dependence of pCO2(aq) of seawater at the mean ocean temperature (17.6 C), and (2) you are also unaware of the modern trend in oceanic wind speeds.
Furthermore, I contend that the biotic influence of phytoplankton is significant and raises a host of significant technical issues you have not properly considered (beyond a surficial scan of the literature leading to dogmatism). To take just one example, what about marine surface films?
The biotic significance is increased by a lower abiotic temperature dependence of pCO2(aq) of seawater than you claimed. The NPP of biotics in the oceans, including a powerful influence of temperature – more significant than the abiotic – is most significant in the great circumpolar bands, particularly the southern band and particularly since the Drake Passage opened up 40-odd million years ago.
Satellite retrievals indicate increases in oceanic wind speed averaging 0.008 m s-1 a-1 for 1987–2006.
Wentz, F. J., L. Ricciardulli, K. Hilburn, and C. Mears (2007) Science, 317, 233–235
Sorry, used the wrong reply… see above
Data collected from ships and satellites has frequently been used to estimate trends in surface wind speed. Although these data sets consistently show an increase in global average wind speed over recent decades, the magnitude of this increase varies depending on the data source used. Observations of the ocean surface by satellites, namely altimeter and SSM/I, provide reasonably long datasets with global coverage. These well calibrated and validated datasets are analysed for linear trends of regional mean monthly time series and mean time series for each calendar month over the period from 1991 to 2008. Differences between the resulting trends are investigated and discussed. The data indicate that the observed global trend is not uniformly distributed and can be linked to a significant positive trend in regional average time series across equatorial regions and the Southern Ocean.
Zieger et al (2014) Deep Sea Research Part I: Oceanographic Research Papers 01/2014;
Steve Short
I have been observing your debate with Ferdinand and have resisted the temptation to join in because it would be unreasonable to ‘gang up’ on him.
However, I write to draw attention to your writing
I think it needs to be clearly understood that Ferdinand is well aware of the argument that Henry’s Law is not applicable because it is overwhelmed by biological effects in the ocean: I have repeatedly put this point to him over the years including on WUWT. Indeed, it is what I meant when I wrote in this thread saying to him
However, to date the only responses I have had from him on this matter is argument by assertion.
Hence, I sincerely hope your conversation with Ferdinand will elucidate a better answer to the issue than I have obtained.
The issue has extreme importance.
The observed dynamics of the carbon cycle are mostly driven by biological activity. Thus, how atmospheric CO2 is observed to vary is determined by biological activity. And biological activity varies with temperature and it changes between glacial and interglacial conditions. Nobody knows anything about these variations on a global scale.
The entire recent rise in atmospheric CO2 may be – and probably is – a result of temperature rise from the Little Ice Age (LIA).
Richard
Steve and Richard,
The discussion was started to see if Henry’s law is still valid for a dynamic system.
I don’t see any reason why it wouldn’t be valid: as long as there is a pCO2 difference, there is a CO2 flux, in or out, depending on the sign of the difference, which is influenced by a change in temperature.
The magnitude of the change is a matter of temperature on one side and biological activity on the other.
In general temperature is dominant, as the higher outflux from the tropical oceans and the higher uptake in the polar waters show.
Biological activity does modulate the pCO2 of the oceans to a certain extent, but the measured pCO2 levels show the main releases in the tropical waters and the main uptakes in polar waters.
Wind speed changes do modulate the fluxes, but in general that only changes the time constant to reach the new equilibrium. An asymmetric wind speed change between upwelling and downwelling places can shift the equilibrium somewhat, but that seems not that important to me.
What if the temperature increases? One source, used for the correction of millions of field measurements, gives 17 ppmv/K at the average seawater temperature; other sources give smaller values. The exact value thus may differ, but the main point still is that the pCO2 of seawater increases with increased temperature.
That gives an increase of CO2 in the atmosphere equal to the average increase of pCO2 in the oceans, which restores the previous fluxes.
The historical increase rate was 8 ppmv/K over very long time frames. As that is mainly caused by the oceans (vegetation being a net sink at higher temperatures), that is the minimum change in pCO2 of the oceans.
Thus, Richard, the warming since the LIA of less than 1 K is good for at most 8 ppmv extra CO2 in the atmosphere. That is all. Moreover, a huge release of CO2 out of the oceans would increase the δ13C ratio of the atmosphere, while we see a firm decrease…
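For what it is worth, this argument fits in a few lines of code. The sketch below is an illustration only (not Ferdinand’s own calculation): it treats the warmed ocean surface as a practically infinite reservoir whose pCO2 is raised by ΔT times the 8 ppmv/K figure quoted above, and lets the atmosphere relax toward it with a flux proportional to the pCO2 difference. The exchange coefficient k is an arbitrary assumed value standing in for wind speed.

```python
# Illustrative sketch of Henry's law for a dynamic system (not a calculation
# from the thread). The ocean is treated as an effectively fixed pCO2 reservoir
# raised by dT * 8 ppmv/K; the atmosphere relaxes toward it with a flux
# proportional to the pCO2 difference. The coefficient k is an assumed value.

dT = 1.0                          # K of warming since the LIA (illustrative)
sensitivity = 8.0                 # ppmv change of ocean pCO2 per K (figure used above)
atm = 280.0                       # starting atmospheric CO2, ppmv (assumed)
ocean = atm + dT * sensitivity    # new ocean-surface pCO2, ppmv

k = 0.05                          # assumed fraction of the difference exchanged per year
for year in range(200):
    atm += k * (ocean - atm)      # net sea-to-air flux, expressed in ppmv/yr

print("After 200 years: %.1f ppmv (%.1f ppmv above the start)" % (atm, atm - 280.0))
```

Doubling or halving k only changes how quickly the ~8 ppmv is reached, which is the sense in which wind-speed trends shift the time constant rather than the equilibrium.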
Ferdinand
Please be assured that I am not ignoring your having written
You and I have argued this many, many times without resolution.
Having iterated how and why I think the issue is important, I want to ‘stand back’ in hope that you and Steve can do better than you and me at obtaining a conclusion.
Richard
I’m sorry; due to work matters I can’t resume this discussion for a week. In the meantime, I think it would be very useful if Ferdinand could provide a mathematical critique (from his perspective) of the math which Dr. Jeff Glassman wrote back in 2010 regarding what the IPCC had written in AR4, i.e. in his Section III: Fingerprints Part A, with respect to the δ13C ‘problem’.
http://www.rocketscientistsjournal.com/2010/03/sgw.html
Ferdinand’s earlier comments on this rely on a long deep-ocean/atmosphere exchange (quoted timescale 500–1500 years) which, according to his graph, also varies from <<20 GtC/yr in 1920–1940 to over twice that, 40 GtC/yr, in 2000. This means the deep ocean reservoir is what? Not well mixed? Varies widely in its rate of exchange with the atmosphere over multi-decadal timescales? Ferdinand also says the ‘ocean surface reservoir’ only changes atmospheric δ13C by about 2‰, noting that his use of the term ‘vegetation’ seems to leave out the 47% of the living global photosynthetic biomass residing in the oceans. This material is a can of worms riddled with circular argument. But most importantly, in this case they are largely distractions used as debating tools (which of course Ferdinand abhors).
Jeff Glassman accepted the IPCC G_0 and R_0 values and also used the Battle, M. et al. (2000) δ13C_a and δ13C_f coefficients which yield the r_a(1995) and r_f values, so, in effect, the constancy of the rate of exchange with the deep ocean is assumed (for, say, the 50 years prior to 2003). IPCC AR4 also quoted Battle et al. (2000) extensively, so even the IPCC at least did not resile from those parameters. Notably, Ferdinand supported this in the graph he posted and by noting that the constancy of ‘this value’ (by definition the ~40 GtC/yr) had been ‘verified’ by the decay of the ~1962-originating 14C bomb pulse 50 years ago. Jeff also accepted that, in 2003, the average atmospheric δ13C was around -8.080‰, i.e. in agreement with the figure Ferdinand displayed. Of course this all still assumes an ACO2 ‘well mixed’ scenario.
I have gone carefully over Jeff’s math and I can’t find a significant flaw, nor would I have expected one from the inventor of one of the fastest and most famous FFT algorithms. But I am always open to argument on this. Murray Salby employed a comparable (but I think less clever) argument for essentially a similar outcome. But it’s strange how none of the academic reviewers of Salby’s well-respected textbook picked up his flawed thinking then, or more recently either. Battle, M. et al. (2000) is a well-respected paper with an extremely high citation index, was supported by IPCC AR4 and has not been found to be wanting logically. I think not only has Jeff Glassman identified a significant problem, but Ferdinand is also (interestingly) ‘boxed in’ by his own statements and has not provided a logical rebuttal. To me it appears Ferdinand did not even bother to look closely at Jeff’s math. Why should we accord Ferdinand the same courtesy? I hope you are reading this, Jeff (;-)
Ferdinand, why do you even say silly stuff like:
“What if the temperature increases? One source, used for the correction of millions of field measurements, gives 17 ppmv/K at the average seawater temperature; other sources give smaller values. The exact value thus may differ, but the main point still is that the pCO2 of seawater increases with increased temperature….
The historical increase rate was 8 ppmv/K over very long time frames. As that is mainly caused by the oceans (vegetation being a net sink at higher temperatures), that is the minimum change in pCO2 of the oceans.”
(1) The Henry’s Law parameters embedded in e.g. the UC Berkeley EQ3/6 chemothermodynamic database suggest the value of the rate of change of pCO2(aq) with temperature is about 4.2–4.4 ppmv/K for the CURRENT average seawater composition (Nordstrom/USGS). Conversely, the Lawrence Livermore National Laboratory chemothermodynamic database says the correct value is about 7.5 ppmv/K. Therefore why do you keep insisting it should (now) be about 17 ppmv/K? Your value appears to be based upon a single field study and also, I have found, on a few relatively crude online calculators (such as students might use). The latter leave out a lot of key refinements, such as incorporating Pitzer coefficients to account for ionic-strength effects on species’ activities in solution (noting seawater has an ionic strength of 0.645 molar and is close to the limit of the extended Debye-Huckel equation), and/or fail to include borate, which contributes about a quarter of the buffering of seawater above pH 7.5.
(2) Don’t you realize that this ppmv/K is already a TEMPERATURE DEPENDENCY RATE (of pCO2(aq)) and is of itself a relatively CONSTANT thermodynamic value? As such, this rate of change does not itself increase at all significantly with temperature; e.g. an increase of the water temperature to, say, 20 C does not change that /K (/C) temperature dependency RATE significantly. The fact that, as you say, “the historical increase rate was 8 ppmv/K over very long time frames” simply confirms for me that the value of 7.5 embedded in the Lawrence Livermore National Laboratory chemothermodynamic database is most probably the correct/best value (to one significant figure). There is no fundamental reason why the thermodynamic rate of increase of pCO2(aq) with temperature should double to 17 ppmv/K unless the average seawater major-ion composition had changed SIGNIFICANTLY since, say, the LIA, which it most assuredly has not.
So, in effect, by saying “The historical increase rate was 8 ppmv/K over very long time frames……that is the minimum change in pCO2 of the oceans.” you are even implying that Henry’s Law has somehow changed since the LIA!!!!!
Phooey!!!
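For what a back-of-envelope calculation is worth, the purely abiotic Henry’s-law part of this dispute can be put in rough numbers. The sketch below is an illustration only, not taken from EQ3/6, the LLNL database or Takahashi; it assumes a van ’t Hoff coefficient of roughly 2400 K for CO2 solubility (the value given in Sander’s Henry’s-law compilation), holds dissolved CO2 fixed, and ignores carbonate speciation, borate and ionic-strength (Pitzer) corrections.

```python
# Back-of-envelope abiotic sensitivity of pCO2 to temperature at FIXED
# dissolved CO2, using the Henry's-law solubility of CO2 alone.
# Assumption: van 't Hoff coefficient ~2400 K for CO2 (Sander's Henry's-law
# compilation). No carbonate speciation, borate or Pitzer corrections included.

VANT_HOFF = 2400.0      # K (assumed temperature coefficient for CO2 solubility)

def dpco2_dT(pco2_uatm, temp_c):
    """Approximate d(pCO2)/dT in uatm/K at fixed dissolved CO2."""
    T = temp_c + 273.15
    return pco2_uatm * VANT_HOFF / (T * T)   # fractional sensitivity times pCO2

for pco2 in (280.0, 400.0):
    print("pCO2 = %3.0f uatm at 17.6 C: ~%4.1f uatm/K" % (pco2, dpco2_dT(pco2, 17.6)))
```

On these crude assumptions the solubility term alone comes out at very roughly 8 to 11 µatm/K at present-day pCO2 levels. It mainly shows that a single “ppmv/K” number depends on the baseline pCO2 and on which chemistry refinements are included, which is what the argument above is about.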
A ghostly glowing light bulb appeared over my head! The dreaded Revelle Factor!!! Thank you for that off-the-cuff comment, Jeff. I just knew in my bones there would have to be some dirty little fiddle somewhere in there for Ferdinand to effectively change Henry’s Law between the LIA and now (cough, choke).
So we have the laughable situation where all the well-meaning, old-fashioned chemists who toiled over more than 60 years of laborious lab studies to develop the critically reviewed data for all the famous modern aqueous EQUILIBRIUM chemothermodynamic databases like EQ3/6, WATEQ4F, LLNL, MINTEQ etc. have been marginalised to the extent of banishment from the oceans by a bunch of post-modernist creeps who have no qualms about inserting fiddle factors and even (get this) ‘dimensionless factors’ into a relatively simple set of equilibrium thermodynamic equations dealing with gas-phase CO2, dissolved CO2, bicarbonate, carbonate, various ion pairs etc. Clearly, Revelle and Suess (1957) have a lot to answer for!
Post David Archer, Takahashi, Feely etc., etc., ad nauseam, allow me to quote this gem:
Numerical studies have indicated that the steady-state ocean-atmosphere partitioning of carbon will change profoundly as emissions continue. In particular, the globally averaged Revelle buffer factor will first increase and then decrease at higher emissions. Furthermore, atmospheric carbon will initially grow exponentially with emission size, after which it will depend linearly on emissions at higher emission totals. In this article, we explain this behavior by means of an analytical theory based on simple carbonate chemistry. A cornerstone of the theory is a newly defined dimensionless factor, O. We show that the qualitative changes are connected with different regimes in ocean chemistry: if the air-sea partitioning of carbon is determined by the carbonate ion, then the Revelle factor increases with emissions, whereas the buffer factor decreases with emission size, when dissolved carbon dioxide determines the partitioning. Currently, the ocean carbonate chemistry is dominated by the carbonate ion response, but at high total emissions, the response of dissolved carbon dioxide takes on this role.
http://onlinelibrary.wiley.com/doi/10.1029/2009GB003726/full
Dimensionless factor???? Oh my gawd, will this crap never end?
Steve, here is an answer to what Jeff Glassman wrote:
IPCC’s argument is that the decline in O2 matches the rise in CO2 and therefore the latter is from fossil fuel burning. Every molecule of CO2 created from burning in the atmosphere should consume one molecule of O2 decline, so the traces should be drawn identically scaled in parts per million
Again, something Glassman interprets that was never said or implied by the IPCC.
– He is wrong about the oxygen use of fossil fuels: for coal it is 1:1 carbon:oxygen, for oil it is ~1:1.5 and for natural gas it is 1:2. Glassman forgot that the hydrogen in several fossil fuels also uses oxygen. Thus the changes in oxygen and CO2 are not equal (see the short stoichiometry sketch after the links below).
– The IPCC also adds that, in the balance, vegetation can add or use oxygen, depending on how much CO2 is released or taken away by the biosphere as a whole. Since the 1990s, the oxygen balance shows that the biosphere is a net sink for CO2, thus a net source of O2. That gives the following graph for the 1990-2000 period:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/bolingraph.gif
From IPCC TAR, Chapter 3, page 206, fig. 3.4
http://www.grida.no/climate/IPCC_tar/wg1/pdf/TAR-03.PDF
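The 1:1, ~1:1.5 and 1:2 ratios follow from simple combustion stoichiometry. A minimal sketch, with illustrative fuel compositions assumed here (coal treated as essentially pure carbon, oil as roughly CH2, natural gas as CH4):

```python
# O2 consumed per CO2 produced when burning a fuel of composition CHx:
#   CH_x + (1 + x/4) O2 -> CO2 + (x/2) H2O
# Fuel compositions are illustrative assumptions: coal ~ C, oil ~ CH2, gas = CH4.

def o2_per_co2(h_to_c):
    """Moles of O2 consumed per mole of CO2 produced for a CHx fuel."""
    return 1.0 + h_to_c / 4.0

for fuel, x in (("coal (as C)", 0.0), ("oil (as CH2)", 2.0), ("natural gas (CH4)", 4.0)):
    print("%-18s CO2:O2 = 1:%.1f" % (fuel, o2_per_co2(x)))
```

This is why the observed O2 decline per molecule of CO2 added is larger than a carbon-only 1:1 accounting would suggest.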
About the δ13C balance:
The result is a mismatch with IPCC’s data at year 2003 by a difference of 1.3‰, more than twice the range of measurements, which cover two decades.
As explained before, Jeff doesn’t take into account the “thinning”, by the deep ocean exchanges, of the δ13C drop caused by human emissions. He does mention those exchanges, but he doesn’t use them in his calculations.
Anyway:
– the oceans are not the cause of the δ13C drop over time, as any additional release of CO2 from the oceans would increase the δ13C level of the atmosphere.
– the biosphere is not the cause of the δ13C drop over time, as it is a net absorber of CO2, and thus preferentially of 12CO2, leaving relatively more 13CO2 in the atmosphere.
– many other natural sources like volcanoes have δ13C levels higher than the atmosphere.
Thus in my informed opinion, the only remaining cause of the δ13C drop is human emissions…
The same goes for the 14C decline before the 1950s nuclear bomb testing: from 1890 on, the carbon dating tables had to be corrected for the 14C-free releases from fossil fuels…
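The direction of that argument can be checked with a one-line isotopic mass balance. The end-member values below are illustrative assumptions rather than figures from this thread: an atmosphere of roughly 750 GtC at about -8‰, a fossil-fuel mix near -28‰, and ocean-derived CO2 treated as no lighter than about -6.5‰ (air-sea fractionation details ignored).

```python
# Two-member isotopic mass balance (small-delta approximation) to check the
# direction of the atmospheric d13C change for two candidate CO2 sources.
# All end-member values are illustrative assumptions, not measurements.

def mix_d13c(mass_atm, d_atm, mass_add, d_add):
    """Atmospheric d13C (permil) after adding mass_add GtC with signature d_add."""
    return (mass_atm * d_atm + mass_add * d_add) / (mass_atm + mass_add)

M_ATM, D_ATM = 750.0, -8.0    # GtC and permil for the atmosphere (assumed)
ADDED = 100.0                 # GtC added (illustrative)

print("add fossil CO2 (-28 permil):  %.2f permil" % mix_d13c(M_ATM, D_ATM, ADDED, -28.0))
print("add ocean CO2  (-6.5 permil): %.2f permil" % mix_d13c(M_ATM, D_ATM, ADDED, -6.5))
```

Light fossil carbon pulls the atmospheric δ13C down, while an oceanic source of near-atmospheric or heavier signature does not; the sketch says nothing about the magnitudes Glassman and Ferdinand dispute, only about the sign.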
Therefore why do you keep insisting it should (now) be about 17 ppmv/K?
Steve, it doesn’t interest me for more than one minute whether the factor is 17 ppmv/K or 4 ppmv/K. That is counting angels dancing on a pinhead. 17 ppmv/K is not my figure; if you have complaints, write to Takahashi and the others who use that figure for their measurements. But let’s agree on the ~8 ppmv/K change rate for a temperature change.
The only point of interest for me is that any increase of the global ocean surface temperature gives a limited increase of CO2 in the atmosphere. Which is what Bart and Salby deny: they insist that a sustained difference in temperature above a zero line gives a continuing increase of CO2 in the atmosphere, which violates Henry’s law for static as well as for dynamic systems.
I never said or implied that Henry’s law changed over time, which is simply impossible. But Henry’s law is only applicable to dissolved free CO2, not to bicarbonates or carbonates, although these influence the remaining dissolved CO2 in equilibrium, which in turn depends on pH. That is where the Revelle factor comes in: the Revelle factor shows how much more CO2 dissolves in seawater than in fresh water for the same CO2 pressure in the atmosphere. That doesn’t influence the ratio implied in Henry’s law, as that is about the same in fresh water as in seawater, but a lot more goes into bicarbonates and carbonates in seawater. I don’t see how the Revelle factor would influence the pCO2 of seawater (other than marginally), as that is only related to free CO2 and salt(s) content.
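For readers wondering where a Revelle (buffer) factor comes from at all, here is a deliberately simplified sketch using the usual textbook definition, R = (ΔpCO2/pCO2)/(ΔDIC/DIC) at constant alkalinity. The equilibrium constants and the DIC and alkalinity values are rough assumed figures for warm surface seawater; borate, temperature and salinity dependence, and the ionic-strength (Pitzer) refinements Steve insists on are all left out, so treat the output as an order-of-magnitude illustration only.

```python
# Deliberately simplified Revelle-factor sketch: carbonate-only alkalinity,
# no borate, rough constants assumed for ~25 C, S~35 surface seawater:
#   K0 ~ 2.9e-2 mol kg-1 atm-1, pK1* ~ 5.86, pK2* ~ 8.92.
# Definition: R = (dpCO2/pCO2) / (dDIC/DIC) at constant alkalinity.

K0 = 2.9e-2          # CO2 solubility (assumed)
K1 = 10.0 ** -5.86   # first carbonic-acid dissociation constant (assumed)
K2 = 10.0 ** -8.92   # second dissociation constant (assumed)

def carbonate_alkalinity(dic, h):
    """[HCO3-] + 2[CO3--] for total inorganic carbon dic and hydrogen-ion h."""
    denom = h * h + K1 * h + K1 * K2
    return dic * (K1 * h + 2.0 * K1 * K2) / denom

def solve_h(dic, alk, lo=1e-10, hi=1e-4):
    """Bisection for [H+]; carbonate alkalinity falls monotonically as h rises."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if carbonate_alkalinity(dic, mid) > alk:
            lo = mid   # still too alkaline, need more H+
        else:
            hi = mid
    return 0.5 * (lo + hi)

def pco2(dic, alk):
    h = solve_h(dic, alk)
    co2_aq = dic * h * h / (h * h + K1 * h + K1 * K2)  # free dissolved CO2
    return co2_aq / K0                                 # Henry's law, atm

def revelle(dic, alk, eps=1e-7):
    p0, p1 = pco2(dic, alk), pco2(dic + eps, alk)
    return ((p1 - p0) / p0) / (eps / dic)

DIC = 2.0e-3   # mol/kg, assumed
ALK = 2.2e-3   # mol/kg carbonate alkalinity, assumed

print("pCO2 ~ %.0f uatm, Revelle factor ~ %.1f" % (pco2(DIC, ALK) * 1e6, revelle(DIC, ALK)))
```

Even this toy version comes out in the low-to-mid teens, in the same ballpark as the values usually quoted for surface seawater; the point is only to show what the factor measures, not to settle the database argument above.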
Further, you are misinterpreting my graph of δ13C thinning by deep ocean exchanges: the discrepancy in δ13C in the earlier years is from the biosphere: before 1990, the biosphere was a net contributor of (low-δ13C) CO2 to the atmosphere; after 1990, a net absorber of CO2 (thus reducing the δ13C drop in the atmosphere). That is not accounted for in my calculations. That may decrease the real deep ocean-atmosphere exchange somewhat, but it doesn’t imply that the ~40 GtC/year exchange changed over time, only that the most recent figures are the most accurate, as the drop in δ13C is the fastest.
I do frequently use “vegetation”, by which I mean vegetation in general, including ocean vegetation. That also includes the whole biosphere, CO2 users as well as CO2 producers, if the δ13C and O2 balances are involved, but I will be more careful to choose “biosphere” where appropriate.
The EPA’s Endangerment Finding rests on three “lines of evidence”: (1) physical understanding of climate, (2) temperature records, and (3) computer models:
TSD, Section 5, p. 47.
The absence of the hotspot, among many other deficiencies, proves they do not have a sufficient basic physical understanding. The hot spot predicted by theory is conclusively demonstrated not to exist by more than 50 years of balloon data and 35 years of satellite data. It’s not there. It’s dead, Jim. It’s joined the bleeding choir invisible.
Temperature records – it’s not warming; where it is warming it’s regional, not global; and it’s not setting new records. The instrumental record is too short, too incomplete and too corrupt to draw valid conclusions about causation.
Modeling: Betts is just another item in the overwhelming proof of the complete failure of the climate modeling enterprise.
EPA’s attribution of warming is total BS because it rests on three invalid lines of evidence.
“… we can’t predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don’t know.”
But I thought that 93% of IPCC AR5 contributors were 95% CERTAIN of their warming catastrophe predictions/assumptions/knowledge-based guesstimates.
Over at Bishop Hill, Richard returns.
The final sentence of his comment, “We are prodding an angry beast”, tells me he is just another emotionally immature twit, paid to be a scientist.
Once again, good enough for government.
Anthony says to “Edward Richardson”:
…you’ve had a couple of emails you’ve tried to use here, such as the “c_u_later”, “haxelgrouse” and a couple of names here too, such as “Chuck” and Mr “C_U_LATE_R” aka chucK”, all coming from the same place in Connecticut.
I just knew this guy was playing games. So, he’s our old friend “H Grouse”, and “chuck”, among others.
That explains a lot. It explains why he argues incessantly, and never acknowledges that anyone has a legitimate point. ‘H Grouse’, heh. I should have known.